Overview
LiteLLM supports configuration via environment variables for API keys, endpoints, and settings.
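Variables can be exported in the shell, as in the examples below, or set in-process before the first call. A minimal Python sketch (the placeholder key is illustrative; LiteLLM reads it from the environment at request time):

import os
import litellm

# LiteLLM reads OPENAI_API_KEY from the environment at request time;
# no key argument is passed to the call itself.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)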
Provider API Keys
OpenAI
OPENAI_API_KEY
OpenAI API key.
export OPENAI_API_KEY="sk-..."
Azure OpenAI
AZURE_API_KEY
Azure OpenAI API key.
export AZURE_API_KEY="your-key"
AZURE_API_BASE
Azure OpenAI endpoint.
export AZURE_API_BASE="https://your-endpoint.openai.azure.com/"
AZURE_API_VERSION
Azure OpenAI API version.
export AZURE_API_VERSION="2024-02-01"
Anthropic
ANTHROPIC_API_KEY
Anthropic API key for Claude models.
export ANTHROPIC_API_KEY="sk-ant-..."
Google (Vertex AI / AI Studio)
GOOGLE_APPLICATION_CREDENTIALS
Path to Google Cloud service account JSON file.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
VERTEXAI_PROJECT
Google Cloud project ID for Vertex AI.
export VERTEXAI_PROJECT="my-project"
VERTEXAI_LOCATION
Vertex AI location (e.g., us-central1).
export VERTEXAI_LOCATION="us-central1"
GEMINI_API_KEY
Google AI Studio API key.
export GEMINI_API_KEY="..."
AWS Bedrock
AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
AWS credentials for Bedrock.
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
AWS_REGION_NAME
AWS region (e.g., us-east-1).
export AWS_REGION_NAME="us-east-1"
Cohere
COHERE_API_KEY
Cohere API key.
export COHERE_API_KEY="..."
Hugging Face
HUGGINGFACE_API_KEY
Hugging Face API key.
export HUGGINGFACE_API_KEY="hf_..."
Replicate
REPLICATE_API_KEY
Replicate API key.
export REPLICATE_API_KEY="r8_..."
Together AI
TOGETHERAI_API_KEY
Together AI API key.
export TOGETHERAI_API_KEY="..."
Groq
GROQ_API_KEY
Groq API key.
export GROQ_API_KEY="gsk_..."
Mistral
MISTRAL_API_KEY
Mistral API key.
export MISTRAL_API_KEY="..."
Perplexity
PERPLEXITYAI_API_KEY
Perplexity API key.
export PERPLEXITYAI_API_KEY="pplx-..."
Anyscale
ANYSCALE_API_KEY
Anyscale API key.
export ANYSCALE_API_KEY="..."
Voyage AI
VOYAGE_API_KEY
Voyage AI API key (embeddings).
export VOYAGE_API_KEY="..."
Databricks
DATABRICKS_API_BASE
Databricks workspace URL.
export DATABRICKS_API_BASE="https://your-workspace.cloud.databricks.com"
DATABRICKS_API_KEY
Databricks API key.
export DATABRICKS_API_KEY="dapi..."
Cloudflare
CLOUDFLARE_API_KEY
Cloudflare Workers AI API key.
export CLOUDFLARE_API_KEY="..."
CLOUDFLARE_ACCOUNT_ID
Cloudflare account ID.
export CLOUDFLARE_ACCOUNT_ID="..."
LiteLLM Settings
Core Settings
LITELLM_LOG
Log level. Options: DEBUG, INFO, WARNING, ERROR.
export LITELLM_LOG="DEBUG"
LITELLM_LOCAL_MODEL_COST_MAP
Use the bundled local model cost map for pricing instead of fetching the latest map over the network.
export LITELLM_LOCAL_MODEL_COST_MAP="True"
LITELLM_DROP_PARAMS
Drop unsupported parameters instead of raising errors.
export LITELLM_DROP_PARAMS="true"
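The same behavior is available in code via litellm.drop_params; a minimal sketch (assumes a provider key such as OPENAI_API_KEY is already exported):

import litellm

# In-code equivalent of LITELLM_DROP_PARAMS="true": parameters a
# provider does not support are dropped instead of raising an error.
litellm.drop_params = True

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)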
Caching
REDIS_HOST
Redis hostname for caching.
export REDIS_HOST="localhost"
REDIS_PORT
Redis port.
export REDIS_PORT="6379"
REDIS_PASSWORD
Redis password.
export REDIS_PASSWORD="..."
REDIS_URL
Complete Redis connection URL.
export REDIS_URL="redis://:password@localhost:6379"
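For library (non-proxy) usage, the same Redis settings can be wired up in Python. A sketch using litellm's Cache class (the constructor parameters shown are assumptions; check the caching docs for your installed version):

import litellm
from litellm.caching import Cache

# Serve repeated completion requests from Redis. Host/port/password
# mirror the REDIS_HOST / REDIS_PORT / REDIS_PASSWORD variables above.
litellm.cache = Cache(
    type="redis",
    host="localhost",
    port="6379",
    password="password",
)

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    caching=True,
)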
Proxy Server Settings
Authentication
LITELLM_MASTER_KEY
Master admin key for proxy.
export LITELLM_MASTER_KEY="sk-1234"
LITELLM_SALT_KEY
Salt key for hashing API keys.
export LITELLM_SALT_KEY="..."
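Once the proxy is running, the master key authorizes admin calls such as minting virtual keys. A hedged sketch against the proxy's /key/generate endpoint (assumes the proxy listens on localhost:4000 with the master key above):

import requests

# Create a scoped virtual key via the proxy's admin API,
# authenticated with the master key.
resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},
    json={"models": ["gpt-4"], "duration": "30d"},
)
resp.raise_for_status()
print(resp.json()["key"])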
Database
DATABASE_URL
PostgreSQL connection string.
export DATABASE_URL="postgresql://user:pass@localhost:5432/litellm"
STORE_MODEL_IN_DB
Store model configurations in database.
export STORE_MODEL_IN_DB="true"
Server Configuration
PORT
Port the proxy listens on.
export PORT="4000"
HOST
Host interface to bind.
export HOST="0.0.0.0"
NUM_WORKERS
Number of Uvicorn workers.
export NUM_WORKERS="4"
SSL_CERTFILE_PATH
Path to SSL certificate file.
export SSL_CERTFILE_PATH="/path/to/cert.pem"
Observability
LANGFUSE_PUBLIC_KEY
Langfuse public key for logging.
export LANGFUSE_PUBLIC_KEY="pk-..."
LANGFUSE_SECRET_KEY
Langfuse secret key for logging.
export LANGFUSE_SECRET_KEY="sk-..."
SENTRY_DSN
Sentry DSN for error tracking.
export SENTRY_DSN="https://[email protected]/..."
SLACK_WEBHOOK_URL
Slack webhook for alerts.
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/..."
Budget & Rate Limiting
LITELLM_MAX_BUDGET
Global budget limit in USD.
export LITELLM_MAX_BUDGET="1000.0"
LITELLM_BUDGET_DURATION
Budget reset period.
export LITELLM_BUDGET_DURATION="30d"
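For non-proxy usage, spend can also be capped in code; a sketch using litellm.max_budget, which makes calls raise litellm.BudgetExceededError once cumulative spend crosses the limit:

import litellm

# Cap cumulative spend (USD) for this process.
litellm.max_budget = 10.0

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )
except litellm.BudgetExceededError as err:
    print(f"Budget exhausted: {err}")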
Usage Examples
Basic Setup
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Start LiteLLM
python your_app.py
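With both keys exported, the same completion call can target either provider just by switching the model string; a minimal sketch (the model names are examples):

import litellm

# Keys are picked up from OPENAI_API_KEY / ANTHROPIC_API_KEY.
for model in ["gpt-4", "claude-3-sonnet-20240229"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(model, "->", response.choices[0].message.content)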
Azure OpenAI Setup
export AZURE_API_KEY="your-key"
export AZURE_API_BASE="https://your-endpoint.openai.azure.com/"
export AZURE_API_VERSION="2024-02-01"
python your_app.py
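Azure deployments are then addressed with the azure/ model prefix; a minimal sketch (the deployment name is a placeholder):

import litellm

# Routes to AZURE_API_BASE using AZURE_API_KEY / AZURE_API_VERSION;
# "azure/<deployment-name>" names your Azure OpenAI deployment.
response = litellm.completion(
    model="azure/your-deployment-name",
    messages=[{"role": "user", "content": "Hello"}],
)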
Proxy with Database
export LITELLM_MASTER_KEY="sk-1234"
export DATABASE_URL="postgresql://user:pass@localhost:5432/litellm"
export STORE_MODEL_IN_DB="true"
litellm --config config.yaml
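The proxy exposes an OpenAI-compatible API, so any OpenAI client pointed at it works; a minimal sketch with the openai Python package (assumes the proxy on localhost:4000):

from openai import OpenAI

# Authenticate with the master key (or a virtual key) and point
# base_url at the LiteLLM proxy instead of api.openai.com.
client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)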
Full Production Setup
# API Keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_API_KEY="your-azure-key"
export AZURE_API_BASE="https://your-endpoint.openai.azure.com/"
# Proxy Settings
export LITELLM_MASTER_KEY="sk-1234"
export DATABASE_URL="postgresql://user:pass@localhost:5432/litellm"
export PORT="4000"
export NUM_WORKERS="4"
# Caching
export REDIS_URL="redis://:password@localhost:6379"
# Observability
export LANGFUSE_PUBLIC_KEY="pk-..."
export LANGFUSE_SECRET_KEY="sk-..."
export SENTRY_DSN="https://[email protected]/..."
# Budget
export LITELLM_MAX_BUDGET="10000.0"
export LITELLM_BUDGET_DURATION="30d"
# Logging
export LITELLM_LOG="INFO"
# Start proxy
litellm --config config.yaml
Docker Environment
docker-compose.yml
version: '3.8'

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    environment:
      # API Keys
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # Proxy Settings
      - LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}
      - DATABASE_URL=postgresql://litellm:password@postgres:5432/litellm
      # Redis
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      # Observability
      - LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY}
      - LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY}
    volumes:
      - ./config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml", "--port", "4000"]
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=litellm
      - POSTGRES_USER=litellm
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
.env file
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
LITELLM_MASTER_KEY=sk-1234
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
Kubernetes ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: litellm-config
data:
  LITELLM_LOG: "INFO"
  PORT: "4000"
  HOST: "0.0.0.0"
  NUM_WORKERS: "4"
  STORE_MODEL_IN_DB: "true"
---
apiVersion: v1
kind: Secret
metadata:
  name: litellm-secrets
type: Opaque
stringData:
  OPENAI_API_KEY: "sk-..."
  ANTHROPIC_API_KEY: "sk-ant-..."
  LITELLM_MASTER_KEY: "sk-1234"
  DATABASE_URL: "postgresql://..."
  REDIS_URL: "redis://..."
Loading from .env File
from dotenv import load_dotenv
import litellm

# Load environment variables from .env file
load_dotenv()

# Now use LiteLLM
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
Provider-Specific Variables
Some providers require additional environment variables:
AWS Bedrock with Profile
export AWS_PROFILE="my-profile"
export AWS_REGION_NAME="us-east-1"
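Bedrock models are then called with the bedrock/ prefix; a sketch (the model ID is one example of Bedrock's naming scheme):

import litellm

# Credentials come from AWS_PROFILE; the region from AWS_REGION_NAME.
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hello"}],
)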
Vertex AI with ADC
# Use Application Default Credentials
gcloud auth application-default login
export VERTEXAI_PROJECT="my-project"
export VERTEXAI_LOCATION="us-central1"
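Vertex models then use the vertex_ai/ prefix; a minimal sketch (the model name is an example):

import litellm

# Project and location come from VERTEXAI_PROJECT / VERTEXAI_LOCATION;
# credentials come from Application Default Credentials.
response = litellm.completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "Hello"}],
)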
Azure with Managed Identity
export AZURE_USE_MANAGED_IDENTITY="true"
export AZURE_API_BASE="https://your-endpoint.openai.azure.com/"
Security Best Practices
- Never commit .env files to version control
- Use secret managers in production (AWS Secrets Manager, Kubernetes Secrets, etc.); see the sketch after this list
- Rotate keys regularly
- Use least-privilege access for cloud IAM roles
- Encrypt sensitive environment variables in CI/CD
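A hedged sketch of the secret-manager pattern referenced above, using boto3 with AWS Secrets Manager (the secret name and region are placeholders):

import os
import boto3

# Fetch the provider key at startup rather than baking it into an
# image or committing it to version control.
client = boto3.client("secretsmanager", region_name="us-east-1")
secret = client.get_secret_value(SecretId="prod/litellm/openai-api-key")

# Hand it to LiteLLM through the environment, as elsewhere in this guide.
os.environ["OPENAI_API_KEY"] = secret["SecretString"]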