better-openclaw supports multiple AI providers for language models. Configure providers when generating your stack or add them later via environment variables.

Selecting providers

Specify AI providers during stack generation:
npx create-better-openclaw
# Select AI providers from the list

Supported providers

Provider          API Key Required   Local/Remote   Environment Variable
OpenAI            Yes                Remote         OPENAI_API_KEY
Anthropic         Yes                Remote         ANTHROPIC_API_KEY
Google (Gemini)   Yes                Remote         GOOGLE_API_KEY
xAI (Grok)        Yes                Remote         XAI_API_KEY
DeepSeek          Yes                Remote         DEEPSEEK_API_KEY
Groq              Yes                Remote         GROQ_API_KEY
OpenRouter        Yes                Remote         OPENROUTER_API_KEY
Mistral           Yes                Remote         MISTRAL_API_KEY
Together AI       Yes                Remote         TOGETHER_API_KEY
Ollama            No                 Local          -
LM Studio         No                 Local          -
vLLM              No                 Local          -

Environment variables

When you select AI providers, better-openclaw adds the required environment variables to your .env file:
.env
# ═══════════════════════════════════════════════════════════════════════════════
# AI Provider API Keys
# ═══════════════════════════════════════════════════════════════════════════════

# API Key for openai AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
OPENAI_API_KEY=

# API Key for anthropic AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
ANTHROPIC_API_KEY=

# API Key for google AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
GOOGLE_API_KEY=

# API Key for xai AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
XAI_API_KEY=

# API Key for deepseek AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
DEEPSEEK_API_KEY=

# API Key for groq AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
GROQ_API_KEY=

# API Key for openrouter AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
OPENROUTER_API_KEY=

# API Key for mistral AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
MISTRAL_API_KEY=

# API Key for together AI models
# Service: OpenClaw Core | Required: Yes | Secret: Yes
TOGETHER_API_KEY=

Obtaining API keys

OpenAI

  1. Create an account at platform.openai.com
  2. Navigate to API keys in settings
  3. Create a new API key
  4. Add billing information
.env
OPENAI_API_KEY=sk-proj-...

Anthropic (Claude)

  1. Sign up at console.anthropic.com
  2. Go to API Keys
  3. Create a new key
  4. Add credits or payment method
.env
ANTHROPIC_API_KEY=sk-ant-...

Google (Gemini)

  1. Visit aistudio.google.com
  2. Click “Get API Key”
  3. Create or select a Google Cloud project
  4. Copy the generated key
.env
GOOGLE_API_KEY=AIza...

xAI (Grok)

  1. Sign up at x.ai/api
  2. Access the console
  3. Generate an API key
.env
XAI_API_KEY=xai-...

DeepSeek

  1. Create account at platform.deepseek.com
  2. Navigate to API Keys
  3. Create new key
.env
DEEPSEEK_API_KEY=sk-...

Groq

  1. Sign up at console.groq.com
  2. Go to API Keys
  3. Create new key
.env
GROQ_API_KEY=gsk_...

OpenRouter

  1. Register at openrouter.ai
  2. Add credits
  3. Generate API key in Keys section
.env
OPENROUTER_API_KEY=sk-or-v1-...

Mistral

  1. Sign up at console.mistral.ai
  2. Create API key
  3. Add payment method
.env
MISTRAL_API_KEY=...

Together AI

  1. Create account at together.ai
  2. Navigate to Settings → API Keys
  3. Generate new key
.env
TOGETHER_API_KEY=...

Local AI providers

Ollama

Run large language models locally without API keys.
npx create-better-openclaw --services ollama --yes
Ollama configuration in .env:
.env
# Ollama hostname for OpenClaw
# Service: Ollama | Required: Yes | Secret: No
OLLAMA_HOST=ollama

# Ollama API port for OpenClaw
# Service: Ollama | Required: Yes | Secret: No
OLLAMA_PORT=11434
Pull models after the stack starts:
# Enter Ollama container
docker compose exec ollama bash

# Pull models
ollama pull llama3.2
ollama pull mistral
ollama pull codellama

# List installed models
ollama list
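Once models are pulled, skills can call Ollama's REST API directly over HTTP. A minimal sketch in TypeScript, assuming the OLLAMA_HOST/OLLAMA_PORT values above; the `ollamaEndpoint` and `generate` helpers are illustrative, not part of better-openclaw:

```typescript
// Build a URL for the Ollama REST API from the env values above.
function ollamaEndpoint(host: string, port: number, path: string): string {
  return `http://${host}:${port}${path}`;
}

// Ask a locally pulled model for a completion via Ollama's /api/generate.
async function generate(prompt: string): Promise<string> {
  const url = ollamaEndpoint(
    process.env.OLLAMA_HOST ?? "localhost",
    Number(process.env.OLLAMA_PORT ?? 11434),
    "/api/generate",
  );
  const res = await fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    // stream: false returns one JSON object instead of NDJSON chunks
    body: JSON.stringify({ model: "llama3.2", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}
```

Inside the Docker network the host resolves to the service name (`ollama`), while from your host machine the same API is reachable at `localhost:11434`.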

LM Studio

LM Studio runs on your host machine. Configure OpenClaw to connect via host.docker.internal:
.env
# LM Studio runs on host at default port 1234
LMSTUDIO_HOST=host.docker.internal
LMSTUDIO_PORT=1234
Start LM Studio on your host and load a model before starting the OpenClaw stack.
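LM Studio serves an OpenAI-compatible API under /v1, so skills can reach it with a plain HTTP call. A hedged sketch, assuming the LMSTUDIO_HOST/LMSTUDIO_PORT values above and that a model is already loaded in LM Studio (the helper names and the "local-model" placeholder are illustrative):

```typescript
// LM Studio exposes OpenAI-style endpoints under /v1 on the host machine.
function lmStudioBaseUrl(host: string, port: number): string {
  return `http://${host}:${port}/v1`;
}

// Send a single-turn chat request to the locally loaded model.
async function chat(prompt: string): Promise<string> {
  const base = lmStudioBaseUrl(
    process.env.LMSTUDIO_HOST ?? "host.docker.internal",
    Number(process.env.LMSTUDIO_PORT ?? 1234),
  );
  const res = await fetch(`${base}/chat/completions`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      // Placeholder id; LM Studio routes to whichever model is loaded
      model: "local-model",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```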

vLLM

vLLM is a self-hosted, high-throughput LLM inference server. Add it as a service:
npx create-better-openclaw --services vllm --yes

Claude web provider (optional)

For using Claude via web session instead of API:
.env
# ═══════════════════════════════════════════════════════════════════════════════
# Claude Web Provider (optional)
# ═══════════════════════════════════════════════════════════════════════════════

# Claude AI session key for web provider authentication
# Service: OpenClaw Core | Required: No | Secret: Yes
CLAUDE_AI_SESSION_KEY=

# Claude web session key for web provider authentication
# Service: OpenClaw Core | Required: No | Secret: Yes
CLAUDE_WEB_SESSION_KEY=

# Claude web cookie for web provider authentication
# Service: OpenClaw Core | Required: No | Secret: Yes
CLAUDE_WEB_COOKIE=
Web provider authentication tokens expire. Use the official Anthropic API for production deployments.

Gateway configuration

All AI provider API keys are automatically injected into the OpenClaw gateway environment:
docker-compose.yml
services:
  openclaw-gateway:
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      GOOGLE_API_KEY: ${GOOGLE_API_KEY}
      # ... all other providers
The gateway uses these credentials to authenticate API requests from agents and skills.

Adding providers after generation

To add a new AI provider to an existing stack:
  1. Add the API key to .env:
    echo "OPENROUTER_API_KEY=sk-or-v1-..." >> .env
    
  2. Update the gateway environment in docker-compose.yml:
    services:
      openclaw-gateway:
        environment:
          OPENROUTER_API_KEY: ${OPENROUTER_API_KEY}
    
  3. Restart the gateway:
    docker compose up -d openclaw-gateway
    

Testing provider connections

Verify API keys work correctly:
# OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# Anthropic
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-3-5-sonnet-20241022","max_tokens":1024,"messages":[{"role":"user","content":"Hello"}]}'

# Google Gemini
curl "https://generativelanguage.googleapis.com/v1/models?key=$GOOGLE_API_KEY"

# Ollama (local)
curl http://localhost:11434/api/tags
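Before curling each provider, it can help to confirm which keys are actually set in your environment. A small illustrative helper (`configuredProviders` is not part of better-openclaw; the mapping mirrors the provider table above):

```typescript
// Map each provider to the env var the generated .env uses for it.
const PROVIDER_KEYS: Record<string, string> = {
  "OpenAI": "OPENAI_API_KEY",
  "Anthropic": "ANTHROPIC_API_KEY",
  "Google (Gemini)": "GOOGLE_API_KEY",
  "xAI (Grok)": "XAI_API_KEY",
  "DeepSeek": "DEEPSEEK_API_KEY",
  "Groq": "GROQ_API_KEY",
  "OpenRouter": "OPENROUTER_API_KEY",
  "Mistral": "MISTRAL_API_KEY",
  "Together AI": "TOGETHER_API_KEY",
};

// Return the providers whose key is present and non-empty in the given env.
function configuredProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(PROVIDER_KEYS)
    .filter(([, key]) => (env[key] ?? "").trim().length > 0)
    .map(([name]) => name);
}

console.log(configuredProviders(process.env as Record<string, string | undefined>));
```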

Security best practices

Never commit API keys to version control.
  • Keep .env out of git (already in .gitignore)
  • Use .env.example as a template without secrets
  • Rotate API keys regularly
  • Use environment-specific keys (dev/staging/prod)
  • Monitor API usage and billing
  • Set spending limits where available
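A committed .env.example following these practices lists every variable with its value left blank, documenting the required configuration without leaking secrets (a sketch; adjust to the providers you selected):

```shell
# .env.example - copy to .env and fill in real values; never commit the copy
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GOOGLE_API_KEY=
OLLAMA_HOST=ollama
OLLAMA_PORT=11434
```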

Rate limiting

API providers enforce rate limits. Handle rate limit errors in your skills:
// Retry with exponential backoff when the provider returns HTTP 429
let response;
for (let attempt = 0; attempt < 5; attempt++) {
  try {
    response = await anthropic.messages.create({...});
    break;
  } catch (error) {
    if (error.status !== 429) throw error;
    // Rate limit exceeded - wait 1s, 2s, 4s, ... before retrying
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}
Consider adding Redis-backed rate limiting for production:
npx create-better-openclaw --services redis --yes
