Quick Deploy
LiteLLM can be deployed to Render with a single click:
Click Deploy Button
Click the “Deploy to Render” button above to start the deployment process.
Connect GitHub
Authorize Render to access the LiteLLM repository (or fork it to your account).
Configure Environment
Set required environment variables:
LITELLM_MASTER_KEY - Master key for authentication
OPENAI_API_KEY - Your OpenAI API key (if using OpenAI)
ANTHROPIC_API_KEY - Your Anthropic API key (if using Anthropic)
Additional provider keys as needed
Deploy
Render will automatically:
Build the Docker image
Provision a PostgreSQL database
Deploy the service with SSL
Provide a public URL
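Once Render reports the service live, you can sanity-check the public URL with an OpenAI-compatible request. The URL, master key, and model name below are placeholders; substitute your own values:

```shell
# Placeholders: replace with your Render URL and master key
LITELLM_URL="https://litellm-proxy.onrender.com"
LITELLM_MASTER_KEY="sk-1234"

curl -s "$LITELLM_URL/v1/chat/completions" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```

A JSON completion response confirms the proxy, database, and provider key are all wired up.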
Manual Render Deployment
Create New Web Service
Create Service
Go to Render Dashboard
Click New + → Web Service
Connect your Git repository or use https://github.com/BerriAI/litellm
Configure Service
Basic Settings:
Name: litellm-proxy
Region: Choose closest to your users
Branch: main
Runtime: Docker
Dockerfile Path: ./Dockerfile
Set Instance Type
Recommended tiers:
Development: Starter ($7/month)
Production: Standard ($25/month) or Pro ($85/month)
Enterprise: Pro Plus ($250/month)
Resource recommendations:
Development: 512MB RAM, 0.5 CPU
Production: 2GB RAM, 1 CPU
High Traffic: 4GB RAM, 2 CPU
Configure Environment
Add environment variables (see configuration section below)
Database Setup
Create PostgreSQL Database
Create Database
From Render Dashboard, click New + → PostgreSQL
Choose same region as web service
Select database plan:
Free: 90-day trial (1GB storage)
Starter: $7/month (10GB storage)
Standard: $20/month (50GB storage)
Get Connection String
After creation, copy the Internal Database URL from the database's connection info.
Add to Web Service
Set it as DATABASE_URL in your web service's environment variables.
Use the Internal Database URL (not External) for better performance and security within Render.
Environment Configuration
Required Variables
```shell
# Authentication
LITELLM_MASTER_KEY=sk-1234  # Change this!

# Database (use Render's PostgreSQL internal URL)
DATABASE_URL=postgresql://user:pass@host/db
STORE_MODEL_IN_DB=True
```
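The `sk-1234` placeholder should never reach production. One way to generate a strong key (the `sk-` prefix is a convention, not a requirement) is:

```shell
# Generate a random 32-hex-character master key with an sk- prefix
LITELLM_MASTER_KEY="sk-$(openssl rand -hex 16)"
echo "$LITELLM_MASTER_KEY"
```

Set the result in the Render environment tab rather than committing it to the repository.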
Provider API Keys
OpenAI:

```shell
OPENAI_API_KEY=sk-proj-...
```

Anthropic:

```shell
ANTHROPIC_API_KEY=sk-ant-...
```

Azure OpenAI:

```shell
AZURE_API_KEY=your-key
AZURE_API_BASE=https://your-resource.openai.azure.com
AZURE_API_VERSION=2024-02-15-preview
```

AWS Bedrock:

```shell
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=your-secret
AWS_REGION_NAME=us-east-1
```
Optional Configuration
```shell
# Redis for caching (use Render Redis)
REDIS_HOST=red-xxx.oregon-redis.render.com
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password

# Observability
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
LANGFUSE_HOST=https://cloud.langfuse.com

# Debug logging
LITELLM_LOG=DEBUG
DETAILED_DEBUG=True
```
Configuration File
Using config.yaml
Create a config.yaml file in your repository:
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet-4
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gpt-4o-azure
    litellm_params:
      model: azure/gpt-4o
      api_key: os.environ/AZURE_API_KEY
      api_base: os.environ/AZURE_API_BASE
      api_version: "2024-02-15-preview"

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: os.environ/DATABASE_URL
  # Enable UI
  ui: true
  # Rate limiting
  max_parallel_requests: 100
  # Caching
  cache: true
  cache_params:
    type: redis

router_settings:
  routing_strategy: latency-based-routing
  allowed_fails: 3
  cooldown_time: 30
```
Update Docker command in render.yaml:
```yaml
services:
  - type: web
    name: litellm
    runtime: docker
    dockerCommand: litellm --config /app/config.yaml --port 4000
```
Custom Domain
Add Custom Domain
In service settings, go to Custom Domains
Click Add Custom Domain
Enter your domain: api.yourdomain.com
Configure DNS
Add a CNAME record in your DNS provider:

```
Type: CNAME
Name: api
Value: litellm-proxy.onrender.com
TTL: Auto
```
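To confirm propagation before waiting on Render's verification, a quick DNS lookup helps (`api.yourdomain.com` is a placeholder for your domain):

```shell
# Query the CNAME record; it should resolve to the onrender.com hostname
dig +short CNAME api.yourdomain.com
```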
Wait for Verification
Render will automatically provision SSL certificate via Let’s Encrypt.
This takes 5-10 minutes.
Render provides free SSL certificates for all custom domains automatically.
Redis for Caching
Create Redis Instance
Create Redis
Click New + → Redis
Name: litellm-cache
Plan: Free (25MB) or Starter ($10/month, 256MB)
Region: Same as web service
Get Connection Details
Copy from Redis dashboard:
Internal Redis URL: redis://red-xxx:6379
Or individual fields: Host, Port, Password
Configure LiteLLM
Add to web service environment:

```shell
REDIS_HOST=red-xxx.oregon-redis.render.com
REDIS_PORT=6379
REDIS_PASSWORD=your-password
```
Or use the full URL:
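As a sketch of that single-variable form, assuming your LiteLLM version reads a `REDIS_URL` variable (check the LiteLLM caching docs for your release), the URL can be assembled from the three fields above:

```shell
# Example values from the individual-field setup above
REDIS_HOST="red-xxx.oregon-redis.render.com"
REDIS_PORT="6379"
REDIS_PASSWORD="your-password"

# redis:// URLs carry the password before the host, separated by @
REDIS_URL="redis://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}"
echo "$REDIS_URL"
```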
Deployment Strategies
Auto-Deploy from Git
Render automatically deploys when you push to your branch:
```shell
git add .
git commit -m "Update LiteLLM config"
git push origin main
```
Render will:
Detect the push
Build new Docker image
Run database migrations
Deploy with zero downtime
Manual Deploy
Trigger manual deployment from dashboard:
Go to your service
Click Manual Deploy → Deploy latest commit
Or Clear build cache & deploy for clean build
Blueprint (render.yaml)
Define infrastructure as code:
```yaml
services:
  - type: web
    name: litellm-proxy
    runtime: docker
    repo: https://github.com/BerriAI/litellm
    region: oregon
    plan: standard
    branch: main
    dockerCommand: litellm --port 4000
    envVars:
      - key: LITELLM_MASTER_KEY
        generateValue: true
      - key: DATABASE_URL
        fromDatabase:
          name: litellm-db
          property: connectionString
      - key: OPENAI_API_KEY
        sync: false  # Set manually
      - key: STORE_MODEL_IN_DB
        value: "True"
    healthCheckPath: /health/liveliness

databases:
  - name: litellm-db
    plan: starter
    region: oregon
    databaseName: litellm
    user: litellm
```
Deploy blueprint:
Monitoring and Logs
View Logs
Go to service dashboard
Click Logs tab
View real-time logs or search history
```
# Filter logs
# In the Render UI, use the search box:
ERROR
DEBUG
/chat/completions
```
Health Checks
Render automatically monitors your service using the health check endpoint:
```yaml
healthCheckPath: /health/liveliness
healthCheckTimeout: 10
healthCheckInterval: 30
```
View health status in service dashboard.
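You can also hit the health endpoints manually; the URL below is a placeholder, and `/health/readiness` is LiteLLM's companion readiness endpoint:

```shell
# Probe the liveliness endpoint Render polls (placeholder URL)
curl -s https://litellm-proxy.onrender.com/health/liveliness

# Readiness additionally reflects database connectivity
curl -s https://litellm-proxy.onrender.com/health/readiness
```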
Metrics
Render provides built-in metrics:
CPU usage
Memory usage
Request count
Response time
Error rate
Access via Metrics tab in service dashboard.
Scaling
Horizontal Scaling
Enable Autoscaling
Go to service Settings
Scroll to Scaling
Enable Autoscaling
Configure Limits
Min instances: 1
Max instances: 5
Set Triggers
CPU threshold: 70%
Memory threshold: 80%
Scale up delay: 2 minutes
Scale down delay: 10 minutes
Vertical Scaling
Upgrade instance type in Settings → Plan:
Starter: 512MB RAM, 0.5 CPU
Standard: 2GB RAM, 1 CPU
Pro: 4GB RAM, 2 CPU
Pro Plus: 8GB RAM, 4 CPU
Troubleshooting
Build Failures
Check the build logs in the Render dashboard. Common issues:
1. Docker build timeout: enable "Docker Layer Caching" in settings.
2. Out of memory during build: upgrade to the Standard plan or higher.
3. npm/pip install failures: clear the build cache and redeploy.
Service Won’t Start
Check the logs for errors:
- `ERROR: LITELLM_MASTER_KEY not set`: add the variable in the environment settings.
- `ERROR: Could not connect to database`: verify that DATABASE_URL is correct.
- `ERROR: Address already in use`: don't set a PORT variable yourself; Render sets it automatically.
Database Connection Issues
Use the internal URL, not the external one:

```shell
# ✅ Correct (internal hostname, e.g. dpg-xxx):
DATABASE_URL=postgresql://user:pass@dpg-xxx/db

# ❌ Wrong (external URL, slower):
DATABASE_URL=postgresql://user:pass@dpg-xxx.oregon-postgres.render.com/db

# Test the connection from the web service shell
psql $DATABASE_URL
```

Slow Performance
If the service is up but responses are slow:
1. Check that the database and web service are in the same region
2. Enable Redis caching
3. Upgrade to a higher instance plan
4. Enable autoscaling
5. Use Render's load balancer across multiple instances
Cost Optimization
Free Tier Setup
Web Service: Free (with limits)
PostgreSQL: Free 90-day trial
Redis: Free (25MB)
Total: $0 for 90 days
After trial: ~$7/month minimum
Production Setup
Web Service: Standard ($25/month)
PostgreSQL: Starter ($7/month)
Redis: Starter ($10/month)
Total: $42/month
High-Traffic Setup
Web Service: Pro ($85/month) with autoscaling
PostgreSQL: Standard ($20/month)
Redis: Standard ($35/month)
Total: ~$140/month base + autoscaling
Render provides $5/month credit for students and open-source projects.
| Feature | Render | Railway | Fly.io |
|---|---|---|---|
| Free tier | ✅ Limited | ✅ $5 credit | ✅ Limited |
| Managed PostgreSQL | ✅ | ✅ | ❌ |
| Auto SSL | ✅ | ✅ | ✅ |
| Autoscaling | ✅ | ❌ | ✅ |
| Docker support | ✅ | ✅ | ✅ |
| Zero-downtime deploys | ✅ | ✅ | ✅ |
| Built-in monitoring | ✅ | ✅ | ✅ |
Next Steps
Railway: Alternative PaaS deployment
Monitoring: Add observability and alerts
Security: Secure your deployment
Performance: Optimize for production