How to Get an LLM Token and Use It in JupyterHub
This guide explains how to request an LLM API token from NRP and configure it in JupyterHub to start using the built-in chat interface with Large Language Models.
1. Log in to NRP AI
- Go to https://nrp.ai
- Log in using your account credentials.
2. Request Access to LLM Tokens
- Navigate to the token page: https://nrp.ai/llmtoken/
- If you do not yet have permission, submit a request using the access request form.
After your request is approved, you will be able to generate a token.
3. Generate an LLM Token
- Open the LLM Token page.
- Enter an alias and choose the group "nrp/fullerton/csuf-test-llm".
- Click Create new token for general LLM API access, or Create new token and generate the Chatbox configuration.
The system will generate a token and return it in text or JSON format.
Save this information securely; you will need it when configuring JupyterHub.
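If you want to confirm the token works before configuring JupyterHub, you can build a request against the same endpoint used later in step 5. This is a minimal sketch, assuming the endpoint is OpenAI-compatible (as the /v1 base URL suggests); `TOKEN` and `MODEL_ID` are placeholders you must replace with your own values.

```python
# Sketch: build an OpenAI-style chat completion request for the NRP endpoint.
# TOKEN and MODEL_ID are placeholders -- substitute your generated token and
# one of the models available to your group.
import json
import urllib.request

BASE_URL = "https://ellm.nrp-nautilus.io/v1"
TOKEN = "paste-your-token-here"  # placeholder
MODEL_ID = "your-model-id"       # placeholder

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("Say hello")
# To actually send it (requires a valid token):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the request succeeds, the token is valid and you can proceed to the JupyterHub configuration below.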
4. Open JupyterHub
- Log in to your JupyterHub environment.
- Open a notebook server.
5. Configure the LLM Settings
- Click the Chat button in the JupyterHub interface.
- Open Settings in the chat panel.
Configure the following fields:
| Setting | Value |
| --- | --- |
| Completion Model | Select "OpenAI (General Interface)" |
| Model ID | Choose one of the available models |
| Base API URL | Enter https://ellm.nrp-nautilus.io/v1 |
| API Key | Paste your generated token |
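If you are unsure which values are valid for Model ID, you can ask the endpoint itself. This is a sketch, assuming the server follows the OpenAI convention of exposing a GET /v1/models route; the token string is a placeholder.

```python
# Sketch: list the models available to your token, assuming the NRP
# endpoint exposes the OpenAI-style GET /v1/models route.
import json
import urllib.request

BASE_URL = "https://ellm.nrp-nautilus.io/v1"
TOKEN = "paste-your-token-here"  # placeholder for your generated token

req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# Uncomment to actually query the endpoint (needs a valid token):
# with urllib.request.urlopen(req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```

Any model ID printed by this query should be accepted in the Model ID field above.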
6. Start Using the LLM
After saving your settings:
- Open the Chat panel.
- Enter your prompt.
- The LLM will respond directly within JupyterHub.
You can now use the LLM to help with coding, debugging, data analysis, and other tasks within your notebook environment.