Environment Variables
The CEMS server is configured via environment variables. These are typically set in a .env file or passed directly to Docker Compose.
Required Variables
OPENROUTER_API_KEY
OpenRouter API key for LLM and embedding operations. Get your key at: openrouter.ai/keys

Used for:
- Embeddings (openai/text-embedding-3-small)
- LLM operations (query synthesis, HyDE, maintenance)
- All AI functionality
CEMS_ADMIN_KEY
Admin API key for user and team management.

Generate with:

Used for:
- Creating users (POST /admin/users)
- Resetting API keys (POST /admin/users/{id}/reset-key)
- Managing teams (POST /admin/teams)
- All /admin/* endpoints
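The "Generate with:" snippet above was lost in extraction. A common way to generate a strong random key is shown below; this is an assumption, not necessarily the exact command the original docs gave:

```shell
# Generates a 64-character hex string suitable as an admin key
openssl rand -hex 32
```

Any sufficiently long random string works; the key is simply compared against the value CEMS reads from CEMS_ADMIN_KEY.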
Database Configuration
CEMS_DATABASE_URL
PostgreSQL connection URL.

Format:

Docker Compose default:

In Docker Compose, this is automatically set to use the postgres service.

POSTGRES_PASSWORD

PostgreSQL database password. Change this in production!
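The values for "Format:" and "Docker Compose default:" above were lost in extraction. The standard PostgreSQL connection URI shape is shown below; the user, password, database name, and the assumption that the host matches the postgres service name are placeholders, not confirmed CEMS defaults:

```shell
# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
# Host "postgres" assumes the Docker Compose service name; values are placeholders
CEMS_DATABASE_URL=postgresql://cems:change-me@postgres:5432/cems
```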
Server Configuration
CEMS_MODE
Operating mode for CEMS.

Options:
- server - Multi-user server mode (Docker deployment)
- client - Client mode (connects to remote server)
CEMS_SERVER_HOST
Host to bind the server to.
- 0.0.0.0 - Listen on all interfaces (Docker default)
- 127.0.0.1 - Localhost only
CEMS_SERVER_PORT
Port for the REST API server.
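Taken together, a typical server-mode fragment of the .env file might look like this; the port number is a placeholder, since the default is not shown on this page:

```shell
CEMS_MODE=server
CEMS_SERVER_HOST=0.0.0.0   # all interfaces (Docker default)
CEMS_SERVER_PORT=8000      # placeholder; use your deployment's actual port
```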
Embedding Configuration
CEMS_EMBEDDING_BACKEND
Embedding provider backend.

Options:
- openrouter - OpenRouter API (1536-dim, default)
- llamacpp_server - Local llama.cpp server (768-dim)

Recommended: openrouter for best performance.

CEMS_EMBEDDING_DIMENSION

Embedding vector dimension. Must match your backend:
- 1536 - OpenRouter (openai/text-embedding-3-small)
- 768 - llama.cpp (embeddinggemma-300M-Q8_0.gguf)
CEMS_EMBEDDING_MODEL
Embedding model (OpenRouter format: provider/model).

Default: openai/text-embedding-3-small

Other options:
- openai/text-embedding-3-large (3072-dim, higher quality)
- text-embedding-3-small (if using OpenAI directly)
LLM Configuration
CEMS_LLM_MODEL
LLM model for maintenance operations and query synthesis.

Default:

Other options:
- openai/gpt-4o (higher quality, slower)
- anthropic/claude-3.5-sonnet (Anthropic via OpenRouter)
- x-ai/grok-4.1-fast (fast alternative)
Retrieval Configuration
CEMS_RERANKER_BACKEND
Reranker backend for search result refinement.

Options:
- disabled - No reranking (default, best performance)
- llm - OpenRouter LLM reranker
- llamacpp_server - Local llama.cpp reranker
Advanced Settings
These settings have sensible defaults and typically don’t need to be changed.

Debug Mode
Enable debug mode (exceptions bubble up instead of silent fallbacks).
Retrieval Settings
Minimum similarity score to include in results.
Default token budget for retrieval results.
Max candidates per vector search query.
Scheduler Settings
Enable background maintenance jobs.
Hour for nightly consolidation (0-23, UTC).
Day for weekly summarization.
Decay Settings
Days before memory is considered stale.
Days before memory is archived.
Cosine similarity threshold for duplicate detection.
Example .env File
Here’s a complete example for production:
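The example file itself was lost in extraction. The sketch below is assembled only from the variables documented on this page; keys, passwords, the port, and the database URL shape are placeholders or assumptions, and the LLM model is just one of the listed options:

```shell
# Required
OPENROUTER_API_KEY=sk-or-v1-change-me   # from openrouter.ai/keys
CEMS_ADMIN_KEY=change-me                # e.g. openssl rand -hex 32

# Database (URL shape is an assumption; values are placeholders)
CEMS_DATABASE_URL=postgresql://cems:change-me@postgres:5432/cems
POSTGRES_PASSWORD=change-me             # must match the URL above

# Server
CEMS_MODE=server
CEMS_SERVER_HOST=0.0.0.0
CEMS_SERVER_PORT=8000                   # placeholder port

# Embeddings
CEMS_EMBEDDING_BACKEND=openrouter
CEMS_EMBEDDING_DIMENSION=1536
CEMS_EMBEDDING_MODEL=openai/text-embedding-3-small

# LLM (one of the options listed above)
CEMS_LLM_MODEL=x-ai/grok-4.1-fast

# Retrieval
CEMS_RERANKER_BACKEND=disabled
```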
Verifying Configuration

Use the admin debug endpoint to verify your configuration:
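The original command was lost in extraction. A sketch is shown below; the /admin/config path and the port are hypothetical (check your CEMS version's admin API for the real route), and authentication is assumed to use the admin key as a bearer token:

```shell
# Hypothetical endpoint path and port; adjust to your deployment
curl -s http://localhost:8000/admin/config \
  -H "Authorization: Bearer $CEMS_ADMIN_KEY"
```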
Testing LLM Connectivity

Test your OpenRouter API key:
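The original command was lost in extraction. One way to check the key is a minimal request to OpenRouter's OpenAI-compatible chat completions endpoint; the model name here is just one of the options listed above, not necessarily the CEMS default:

```shell
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```

A JSON response with a choices array means the key works; an error object with a 401 status means the key is invalid.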
Next Steps

User Management
Create users and distribute API keys