How to Get an LLM Token and Use It in JupyterHub

This guide explains how to request an LLM API token from NRP and configure it in JupyterHub to start using the built-in chat interface with Large Language Models.

1. Log in to NRP AI

  1. Go to https://nrp.ai
  2. Log in using your account credentials.

2. Request Access to LLM Tokens

  1. Navigate to the token page: https://nrp.ai/llmtoken/
  2. If you do not yet have permission, submit a request using the access request form.


After your request is approved, you will be able to generate a token.

3. Generate an LLM Token

  1. Open the LLM Token page.
  2. Enter an alias and choose the group "nrp/fullerton/csuf-test-llm".
  3. Click Create new token for general LLM API access, or Create new token and generate the Chatbox configuration.
NRP LLM generation page

The system will generate a token and return it in Text or JSON format.

Save this information securely; you will need it when configuring JupyterHub.
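One common way to keep the token out of your notebooks is to store it in an environment variable and read it at runtime. A minimal sketch (the variable name NRP_LLM_TOKEN is an illustrative choice, not something NRP requires):

```python
import os
from getpass import getpass

def get_llm_token() -> str:
    """Return the NRP LLM token without hardcoding it in the notebook."""
    # "NRP_LLM_TOKEN" is an assumed variable name -- set it in your shell
    # or JupyterHub environment before starting the notebook server.
    token = os.environ.get("NRP_LLM_TOKEN")
    if not token:
        # Fall back to an interactive prompt that hides what you type.
        token = getpass("Paste your NRP LLM token: ")
    return token
```

Calling `get_llm_token()` then returns the stored token, prompting only if the environment variable is unset.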

4. Open JupyterHub

  1. Log in to your JupyterHub environment.
  2. Open a notebook server.

5. Configure the LLM Settings

  1. Click the Chat button in the JupyterHub interface.
  2. Open Settings in the chat panel.
JupyterHub's LLM settings menu

Configure the following fields:

- Completion Model: select "OpenAI (General Interface)"
- Model ID: choose one of the available models
- Base API URL: enter https://ellm.nrp-nautilus.io/v1
- API Key: paste your generated token
LLM settings
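If you want to check which Model IDs are available before filling in the settings, the same base URL and token can be queried directly. This sketch assumes the service follows the OpenAI-compatible convention of a GET /models route under the base URL; "YOUR_TOKEN" is a placeholder for the token generated in step 3:

```python
import urllib.request

BASE_URL = "https://ellm.nrp-nautilus.io/v1"

def build_models_request(token: str) -> urllib.request.Request:
    """Construct (but do not send) a request for the model list."""
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_models_request("YOUR_TOKEN")
# To actually query the endpoint with a valid token:
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
```

The response (for OpenAI-compatible servers) is a JSON object whose `data` list contains the model IDs you can enter in the Model ID field.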

6. Start Using the LLM

After saving your settings:

  1. Open the Chat panel.
  2. Enter your prompt.
  3. The LLM will respond directly within JupyterHub.

You can now use the LLM to help with coding, debugging, data analysis, and other tasks within your notebook environment.
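The same endpoint the chat panel talks to can also be called from a notebook cell. This sketch assumes the standard OpenAI-compatible POST /chat/completions route; the model name "llama3" and the token value are placeholders, so substitute a Model ID available to you and your real token:

```python
import json
import urllib.request

BASE_URL = "https://ellm.nrp-nautilus.io/v1"

def build_chat_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_TOKEN", "llama3", "Hello!")
# To actually send it with a valid token:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

This is useful when you want the model's output inside your analysis code rather than in the chat panel.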

Start using LLM chat