Get Started in 3 Steps
This guide will have you chatting with AI in minutes using Docker.

Prerequisites: Docker and Docker Compose installed on your system.
- Install Docker Desktop (includes Docker Compose)
- Or install Docker Engine and Docker Compose separately
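You can confirm both are installed from a terminal:

```shell
docker --version          # prints the installed Docker version
docker compose version    # Compose v2 ships as a Docker CLI plugin
```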
Add API Keys (Optional)
Open `.env` in your text editor and add your API keys.
Where to get API keys
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com
- Google AI Studio: aistudio.google.com
- Azure OpenAI: Requires Azure subscription and deployment setup
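For example, the corresponding `.env` entries look like this (variable names follow LibreChat's `.env.example`; verify against your copy, and note the key values below are placeholders):

```
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GOOGLE_KEY=your-google-ai-studio-key
```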
You can also set `OPENAI_API_KEY=user_provided` to let users enter their own keys in the UI.
Using local models (no API keys needed)
Connect to local AI models instead of cloud providers. Make sure Ollama is running: `ollama serve`

Launch LibreChat
Start all services with Docker Compose: `docker compose up -d`

This will:
- Pull the latest LibreChat image
- Start MongoDB for data storage
- Start Meilisearch for conversation search
- Start the RAG API for file processing
- Launch LibreChat on port 3080

Check the status with `docker compose ps` and wait until every container reports as running.
First launch takes 2-3 minutes to download images and initialize databases. Subsequent starts are much faster.
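To check readiness from the command line (standard curl flags; port 3080 per the defaults above):

```shell
# prints the HTTP status code; expect 200 once initialization finishes
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3080
```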
Access LibreChat
Open your browser and navigate to http://localhost:3080

Create Your Account
On first visit, you’ll see the registration page.
- Enter your email and password
- Click Sign up
- You’re ready to chat!
Start Chatting
Select an AI model from the dropdown and start your first conversation:
- Click the model selector at the top
- Choose an AI provider (OpenAI, Anthropic, Google, etc.)
- Select a model (GPT-4, Claude 3.5 Sonnet, Gemini Pro)
- Type your message and press Enter
Quick Configuration Tips
Enable additional AI providers
Edit your `.env` file to add more providers, then restart to apply the changes.
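For instance, to add Anthropic alongside an existing OpenAI setup (the variable name follows LibreChat's `.env.example`; the key value is a placeholder):

```shell
# append the new provider's key to .env
echo 'ANTHROPIC_API_KEY=sk-ant-your-key' >> .env

# restart so the containers pick up the change
docker compose restart
```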
Allow user registration
By default, only the first user can register. To allow more users, enable open registration, or invite specific users individually.
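A sketch of the relevant setting, assuming the `ALLOW_REGISTRATION` flag from LibreChat's `.env.example` (check your file for the exact name):

```
# .env — allow anyone to sign up
ALLOW_REGISTRATION=true
```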
Change the port
If port 3080 is already in use, update the ports mapping in `docker-compose.yml`. Then restart: `docker compose up -d`
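For example, to serve on host port 8080 instead (the `api` service name is an assumption; check your `docker-compose.yml`):

```yaml
services:
  api:
    ports:
      - "8080:3080"   # host:container — LibreChat still listens on 3080 internally
```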
Enable web search
Add web search capability to your chats. After restarting, you’ll see a 🔍 Search toggle in the interface.
Set up code interpreter
Enable secure code execution. Get your API key at code.librechat.ai. Now agents can execute Python, JavaScript, and more!
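A sketch of the `.env` entry, assuming the `LIBRECHAT_CODE_API_KEY` variable name from LibreChat's Code Interpreter documentation (verify against your version; the value is a placeholder):

```
# .env — key obtained from code.librechat.ai
LIBRECHAT_CODE_API_KEY=your-key-here
```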
Common Docker Commands
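A quick reference of everyday Docker Compose commands for managing the stack (the `api` service name is an assumption; list your services with `docker compose ps`):

```shell
docker compose up -d          # start all services in the background
docker compose down           # stop and remove the containers
docker compose restart        # restart after config changes
docker compose logs -f api    # follow LibreChat's logs
docker compose pull && docker compose up -d   # update to the latest images
```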
Troubleshooting
Can't access http://localhost:3080
- Check if containers are running:
- View logs for errors:
- Ensure port 3080 isn’t already in use:
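The checks above map to these commands (standard Docker Compose and `lsof`; the `api` service name is an assumption):

```shell
docker compose ps        # are the containers running?
docker compose logs api  # any errors in the LibreChat logs?
lsof -i :3080            # what is already using port 3080? (macOS/Linux)
```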
API key errors
If you see “Invalid API key” errors:
- Verify your `.env` file has the correct format:
- Restart after changing `.env`:
- Check if the key is valid:
- Test OpenAI keys at platform.openai.com/api-keys
- Test Anthropic keys at console.anthropic.com
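A sketch of the first two checks (key values are placeholders):

```shell
# 1. verify the format — no quotes, no spaces around '='
grep OPENAI_API_KEY .env   # expect a line like: OPENAI_API_KEY=sk-...

# 2. restart so the containers pick up the new value
docker compose restart
```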
Database connection errors
If MongoDB fails to start:
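Commands that usually help diagnose this (the `mongodb` service name is an assumption; note that `down -v` deletes all stored data):

```shell
docker compose logs mongodb   # inspect MongoDB's startup errors

# last resort: wipe the volumes and start fresh — this ERASES all data
docker compose down -v
docker compose up -d
```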
Out of memory errors
If you see memory errors during build:
- Increase Docker memory limit (Docker Desktop → Settings → Resources)
- Or reduce memory usage:
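One way to cap a service's memory is the Compose `mem_limit` option (values are illustrative; the `api` service name is an assumption):

```yaml
services:
  api:
    mem_limit: 2g   # cap LibreChat's container at 2 GB
```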
Next Steps
You’re now running LibreChat! Here’s what to explore next:

Detailed Installation
Learn about local development setup and advanced configuration
Configure AI Endpoints
Set up all supported AI providers and custom models
User Guide
Master LibreChat features and workflows
Create Agents
Build custom AI assistants with tools and capabilities
Enable MCP
Add Model Context Protocol servers for extended functionality
Production Deployment
Deploy LibreChat for production use
Need help? Join our Discord community with 50,000+ users, or check the GitHub discussions.