LibreChat allows you to add custom AI endpoints beyond the built-in providers. Configure them in `librechat.yaml` under the `endpoints.custom` array.
## Basic Structure

Each custom endpoint requires a unique name, an API key, a base URL, and a models configuration.

```yaml
endpoints:
  custom:
    - name: 'endpoint-name'
      apiKey: '${ENV_VARIABLE}'
      baseURL: 'https://api.provider.com/v1'
      models:
        default: ['model-1', 'model-2']
        fetch: false
```
## Configuration Fields

- `name` — unique name for the endpoint (displayed in the UI)
- `apiKey` — API key for authentication; use environment variables, e.g. `apiKey: '${GROQ_API_KEY}'`
- `baseURL` — base URL for the API endpoint, e.g. `baseURL: 'https://api.groq.com/openai/v1/'`
## Models Configuration

`models.default` lists the default models to use (at least one is required):

```yaml
models:
  default:
    - 'llama3-70b-8192'
    - 'mixtral-8x7b-32768'
```

Set `models.fetch: true` to fetch available models from the API; when enabled, LibreChat queries the provider's `/models` endpoint.
## Optional Settings

- `titleConvo` — enable automatic conversation title generation.
- `titleModel` (string, default: `"gpt-3.5-turbo"`) — model to use for generating titles, e.g. `titleModel: 'mixtral-8x7b-32768'`.
- `titleMethod` (string, default: `"completion"`) — method for title generation: `completion` or `functions`.
- `summarize` — enable conversation summarization.
- `summaryModel` (string, default: `"gpt-3.5-turbo"`) — model to use for summarization.
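Taken together, the title and summary settings sit alongside the required fields in a custom endpoint entry. A minimal sketch (the endpoint name, URL, and model names here are illustrative placeholders, not a real provider):

```yaml
endpoints:
  custom:
    - name: 'example-provider'
      apiKey: '${EXAMPLE_API_KEY}'
      baseURL: 'https://api.example.com/v1'
      models:
        default: ['example-model']
      titleConvo: true
      titleModel: 'example-model'
      titleMethod: 'completion'
      summarize: false
      summaryModel: 'example-model'
```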
- `modelDisplayLabel` — label displayed for AI messages, e.g. `modelDisplayLabel: 'Groq'`.
- `iconURL` — custom icon URL for the endpoint, e.g. `iconURL: 'https://example.com/icon.png'`.
- `headers` — custom headers to include with requests:

```yaml
headers:
  x-portkey-api-key: '${PORTKEY_API_KEY}'
  x-portkey-virtual-key: '${PORTKEY_OPENAI_VIRTUAL_KEY}'
```

Special variables:

- `{{LIBRECHAT_BODY_PARENTMESSAGEID}}` — inserts the parent message ID.
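For instance, a header that forwards the parent message ID to the provider (this header name is taken from the OpenRouter and Helicone examples later on this page):

```yaml
headers:
  x-librechat-body-parentmessageid: '{{LIBRECHAT_BODY_PARENTMESSAGEID}}'
```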
## Request Parameters

Use `addParams` to add or override default request parameters:

```yaml
addParams:
  safe_prompt: true
  temperature: 0.7
```

Use `dropParams` to remove default parameters from requests:

```yaml
dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']
```

This is required for some providers, such as Mistral, to avoid 422 errors.
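In context, both options sit alongside the other endpoint fields. A sketch based on the Mistral example below (the model list is shortened for brevity):

```yaml
endpoints:
  custom:
    - name: 'Mistral'
      apiKey: '${MISTRAL_API_KEY}'
      baseURL: 'https://api.mistral.ai/v1'
      models:
        default: ['mistral-small']
      addParams:
        safe_prompt: true
      dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']
```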
## Example: Groq

```yaml
endpoints:
  custom:
    - name: 'groq'
      apiKey: '${GROQ_API_KEY}'
      baseURL: 'https://api.groq.com/openai/v1/'
      models:
        default:
          - 'llama3-70b-8192'
          - 'llama3-8b-8192'
          - 'mixtral-8x7b-32768'
          - 'gemma-7b-it'
        fetch: false
      titleConvo: true
      titleModel: 'mixtral-8x7b-32768'
      modelDisplayLabel: 'groq'
```
## Example: Mistral AI

```yaml
endpoints:
  custom:
    - name: 'Mistral'
      apiKey: '${MISTRAL_API_KEY}'
      baseURL: 'https://api.mistral.ai/v1'
      models:
        default: ['mistral-tiny', 'mistral-small', 'mistral-medium']
        fetch: true
      titleConvo: true
      titleModel: 'mistral-tiny'
      modelDisplayLabel: 'Mistral'
      dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']
```

**Why drop parameters for Mistral?** Mistral AI's API doesn't support some OpenAI-standard parameters; dropping them prevents 422 validation errors.
## Example: OpenRouter

```yaml
endpoints:
  custom:
    - name: 'OpenRouter'
      apiKey: '${OPENROUTER_KEY}'
      baseURL: 'https://openrouter.ai/api/v1'
      headers:
        x-librechat-body-parentmessageid: '{{LIBRECHAT_BODY_PARENTMESSAGEID}}'
      models:
        default: ['meta-llama/llama-3-70b-instruct']
        fetch: true
      titleConvo: true
      titleModel: 'meta-llama/llama-3-70b-instruct'
      dropParams: ['stop']
      modelDisplayLabel: 'OpenRouter'
```
## Example: Helicone (AI Gateway)

```yaml
endpoints:
  custom:
    - name: 'Helicone'
      apiKey: '${HELICONE_KEY}'
      baseURL: 'https://ai-gateway.helicone.ai'
      headers:
        x-librechat-body-parentmessageid: '{{LIBRECHAT_BODY_PARENTMESSAGEID}}'
      models:
        default:
          - 'gpt-4o-mini'
          - 'claude-4.5-sonnet'
          - 'llama-3.1-8b-instruct'
          - 'gemini-2.5-flash-lite'
        fetch: true
      titleConvo: true
      titleModel: 'gpt-4o-mini'
      modelDisplayLabel: 'Helicone'
      iconURL: 'https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/helicone.png'
```
## Example: Portkey AI

```yaml
endpoints:
  custom:
    - name: 'Portkey'
      apiKey: 'dummy'
      baseURL: 'https://api.portkey.ai/v1'
      headers:
        x-portkey-api-key: '${PORTKEY_API_KEY}'
        x-portkey-virtual-key: '${PORTKEY_OPENAI_VIRTUAL_KEY}'
      models:
        default: ['gpt-4o-mini', 'gpt-4o', 'chatgpt-4o-latest']
        fetch: true
      titleConvo: true
      titleModel: 'current_model'
      summarize: false
      summaryModel: 'current_model'
      modelDisplayLabel: 'Portkey'
      iconURL: 'https://images.crunchbase.com/image/upload/c_pad,f_auto,q_auto:eco,dpr_1/rjqy7ghvjoiu4cd1xjbf'
```

`current_model` uses the currently selected model for titles and summaries.
## Azure OpenAI

Azure OpenAI has its own dedicated endpoint configuration, separate from `endpoints.custom`, in the same `librechat.yaml`:

```yaml
endpoints:
  azureOpenAI:
    # Azure-specific configuration here
```

Refer to the official Azure OpenAI documentation for detailed setup.
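As a rough sketch of what that configuration can look like (field names follow LibreChat's Azure group schema; the instance name, deployment name, API version, and resource names are placeholders — verify them against the official documentation before use):

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: 'my-azure-group'
        apiKey: '${AZURE_API_KEY}'
        instanceName: 'my-azure-instance'
        version: '2024-02-15-preview'
        models:
          gpt-4o:
            deploymentName: 'my-gpt-4o-deployment'
```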
## Model Specifications

Create preset configurations that can be grouped in the UI:

```yaml
modelSpecs:
  list:
    # Nested under endpoint
    - name: "gpt-4o"
      label: "GPT-4 Optimized"
      description: "Most capable GPT-4 model"
      group: "openAI"
      preset:
        endpoint: "openAI"
        model: "gpt-4o"

    # Custom group with icon
    - name: "coding-assistant"
      label: "Coding Assistant"
      description: "Specialized for coding tasks"
      group: "my-assistants"
      groupIcon: "https://example.com/icons/assistants.png"
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
        instructions: "You are an expert coding assistant..."
        temperature: 0.3

    # Standalone (no group)
    - name: "general-assistant"
      label: "General Assistant"
      description: "General purpose assistant"
      preset:
        endpoint: "openAI"
        model: "gpt-4o-mini"
```

The `group` field organizes specs in the UI:

- Match an endpoint name to nest the spec under that endpoint.
- A custom name creates a separate collapsible section.
- Omit it for a standalone top-level item.

`groupIcon` sets the icon for custom groups (a URL or a built-in endpoint key); it only needs to be set on one spec per group.
## Environment Variables

Set the following environment variables for your custom endpoints:

```shell
# Groq
GROQ_API_KEY=your_groq_key

# Mistral AI
MISTRAL_API_KEY=your_mistral_key

# OpenRouter
OPENROUTER_KEY=your_openrouter_key

# Other providers
ANYSCALE_API_KEY=
COHERE_API_KEY=
DEEPSEEK_API_KEY=
FIREWORKS_API_KEY=
HUGGINGFACE_TOKEN=
PERPLEXITY_API_KEY=
TOGETHERAI_API_KEY=
```
## Troubleshooting

Some providers don't support all OpenAI parameters. Use `dropParams` to remove unsupported fields:

```yaml
dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']
```

If models are not appearing:

- Check that `models.default` has at least one model.
- Verify `models.fetch: true` if querying the API.
- Ensure the API key is valid and has access.

### Custom Headers Not Working