## API Key Issues
### API key not recognized or treated as literal string

**Problem:** Config contains `$OPENAI_API_KEY` literally instead of the actual key value.

**Solution:** DeerFlow automatically resolves environment variables in config values that start with `$`.
- Use the correct syntax in `config.yaml`
- Verify the environment variable is set
- Set the environment variable properly:
  - Option A: `.env` file (recommended for Docker)
  - Option B: shell export
  - Option C: Docker Compose `env_file`
- Verify the variable is loaded in Python
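The resolution rule can be sketched in a few lines of Python. `resolve_config_value` below is an illustrative helper, not DeerFlow's actual implementation; it only mirrors the behavior described above (`$`-prefixed values are looked up in the environment, everything else is literal):

```python
import os

def resolve_config_value(value: str) -> str:
    """Resolve a config value: strings starting with "$" are looked up as
    environment variables; anything else is returned as-is.
    (Illustrative sketch, not DeerFlow's actual code.)"""
    if value.startswith("$"):
        return os.environ.get(value[1:], "")
    return value

# Quick check that the variable is loaded in the current process:
os.environ["OPENAI_API_KEY"] = "sk-proj-example"  # normally set via .env or export
print(resolve_config_value("$OPENAI_API_KEY"))    # prints the key, not "$OPENAI_API_KEY"
```

If this prints an empty string for your variable, the environment is not set in the process running DeerFlow.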
### Invalid API key error from provider

**Problem:** `401 Unauthorized` or `Invalid API key` from OpenAI, Anthropic, etc.

**Solution:**
- Verify the API key is valid by calling the provider's API directly (OpenAI, Anthropic)
- Common API key issues:
  - Expired key: generate a new key from the provider dashboard
  - Wrong project: ensure the key has access to the model
  - Rate limited: check the provider dashboard for limits
  - Trailing spaces: trim whitespace in the `.env` file
- Regenerate the API key:
  - OpenAI: https://platform.openai.com/api-keys
  - Anthropic: https://console.anthropic.com/settings/keys
  - DeepSeek: https://platform.deepseek.com/api_keys
  - Google: https://aistudio.google.com/app/apikey
- Check the key format:
  - OpenAI: `sk-proj-...` (project keys) or `sk-...` (legacy)
  - Anthropic: `sk-ant-...`
  - DeepSeek: `sk-...`
  - Google: usually starts with `AI...`
- Test with a minimal config
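The key-format checks above can be automated with a small sketch. The regexes here are rough approximations of the documented prefixes, not official specifications, so treat a match as a sanity check only; a well-formed key can still be expired or revoked:

```python
import re

# Rough per-provider format patterns based on the prefixes listed above.
# A matching format does NOT guarantee the key is valid.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-(proj-)?[\w-]+"),
    "anthropic": re.compile(r"sk-ant-[\w-]+"),
    "deepseek": re.compile(r"sk-[\w-]+"),
    "google": re.compile(r"AI[\w-]+"),
}

def key_format_ok(provider: str, key: str) -> bool:
    key = key.strip()  # trailing whitespace in .env is a common failure mode
    pattern = KEY_PATTERNS.get(provider)
    return bool(pattern and pattern.fullmatch(key))

print(key_format_ok("openai", "sk-proj-abc123 "))  # True: whitespace is trimmed first
```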
### API key works in curl but not in DeerFlow

**Problem:** The API key works when tested directly but fails in DeerFlow.

**Solution:**
- Check environment isolation: the variable must be set in the environment of the process that runs DeerFlow, not just in the shell where `curl` succeeded
- For Docker deployments, make sure the variable is passed into the container
- Verify config resolution
- Check for config typos
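A quick way to test environment isolation is to check whether a freshly spawned process inherits the variable; if it does not, DeerFlow launched from the same place will not see it either. This is a generic diagnostic sketch, not a DeerFlow utility:

```python
import os
import subprocess
import sys

def visible_in_child(var: str) -> bool:
    """Return True if a freshly spawned Python process inherits the variable.
    False usually means the variable was set in a different shell, or set
    without `export`, so child processes (like DeerFlow) never see it."""
    code = "import os, sys; print('1' if os.environ.get(sys.argv[1]) else '0')"
    out = subprocess.run(
        [sys.executable, "-c", code, var],
        capture_output=True, text=True,
    )
    return out.stdout.strip() == "1"

print(visible_in_child("OPENAI_API_KEY"))
```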
## Model Loading Errors
### Provider module not found (ImportError)

**Problem:** `ModuleNotFoundError: No module named 'langchain_openai'` or similar.

**Solution:** DeerFlow uses LangChain provider packages. Each provider must be installed separately.
- Install the required provider package
- Verify the installation
- For custom/patched models
- Common provider packages:
| Provider | Package | Import |
| --- | --- | --- |
| OpenAI | `langchain-openai` | `langchain_openai:ChatOpenAI` |
| Anthropic | `langchain-anthropic` | `langchain_anthropic:ChatAnthropic` |
| Google | `langchain-google-genai` | `langchain_google_genai:ChatGoogleGenerativeAI` |
| DeepSeek | `langchain-deepseek` | `langchain_deepseek:ChatDeepSeek` |
| Azure OpenAI | `langchain-openai` | `langchain_openai:AzureChatOpenAI` |
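The table rows can be verified programmatically. This sketch checks that a module imports and exposes the expected class, which catches both a missing package and an outdated version:

```python
import importlib

def provider_class_available(module_name: str, class_name: str) -> bool:
    """Check one row of the table above, e.g. ("langchain_openai", "ChatOpenAI")."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, class_name)

# Example: report which providers from the table are installed.
for module_name, class_name in [
    ("langchain_openai", "ChatOpenAI"),
    ("langchain_anthropic", "ChatAnthropic"),
    ("langchain_deepseek", "ChatDeepSeek"),
]:
    print(module_name, provider_class_available(module_name, class_name))
```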
### Model name not found or invalid model ID

**Problem:** `Model 'gpt-5' not found` or `InvalidRequestError: model not supported`.

**Solution:**
- Check that the model ID is correct for the provider. Valid OpenAI models (as of 2024): `gpt-4-turbo-preview`, `gpt-4`, `gpt-4-32k`, `gpt-3.5-turbo`
- Verify the model is available in your account
- Check for typos
- For OpenAI-compatible APIs (Novita, Ollama, etc.), use the model name that endpoint actually serves
- Test the model ID directly
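Typos in model IDs are easy to spot automatically. This sketch compares the requested ID against the account's model list; in practice `available` would come from the provider, e.g. `[m.id for m in OpenAI().models.list()]` with the official `openai` package:

```python
import difflib

def check_model_id(requested: str, available: list) -> str:
    """Return the requested ID if the account serves it, otherwise the
    closest available name (useful for spotting typos), or "" if nothing
    is remotely similar."""
    if requested in available:
        return requested
    close = difflib.get_close_matches(requested, available, n=1)
    return close[0] if close else ""

print(check_model_id("gpt-4", ["gpt-4", "gpt-3.5-turbo"]))  # gpt-4
```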
### Model features not working (vision, thinking, etc.)

**Problem:** The model doesn't support features like image understanding or extended thinking.

**Solution:**
- Enable vision support. Models with vision support:
  - OpenAI: `gpt-4-turbo`, `gpt-4o`, `gpt-4-vision-preview`
  - Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`
  - Google: `gemini-2.5-pro`, `gemini-1.5-pro`
- Enable thinking/reasoning mode
- Configure extended thinking per model
- Verify the model actually supports the feature
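A cheap way to verify a feature before sending a request is a capability guard. The set below is a snapshot of the vision-capable models listed above (not exhaustive, and model availability changes over time); failing fast gives a clearer error than the provider's generic rejection:

```python
# Vision-capable model IDs from the lists above (snapshot, not exhaustive).
VISION_MODELS = {
    "gpt-4-turbo", "gpt-4o", "gpt-4-vision-preview",
    "claude-3-5-sonnet-20241022", "claude-3-opus-20240229",
    "gemini-2.5-pro", "gemini-1.5-pro",
}

def require_vision(model_id: str) -> None:
    """Fail fast with a clear message before an image is sent to a
    text-only model, instead of surfacing a confusing provider error."""
    if model_id not in VISION_MODELS:
        raise ValueError(
            f"Model '{model_id}' is not in the known vision-capable list; "
            "choose a vision model before sending image inputs."
        )
```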
## Provider Configuration Problems
### OpenAI-compatible API (Novita, Ollama, etc.) not working

**Problem:** A custom OpenAI-compatible endpoint fails with authentication or model errors.

**Solution:**
- Use `ChatOpenAI` with a `base_url` pointing at the endpoint
- Test endpoint connectivity
- Common provider configurations:
  - Ollama (local)
  - LM Studio (local)
  - vLLM server
- Check whether the endpoint requires a `/v1` suffix
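The `/v1` suffix check can be automated. Most OpenAI-compatible servers expose their API under `/v1`; the defaults below (Ollama on port 11434, LM Studio on 1234, vLLM on 8000) are the common out-of-the-box ports, and the normalized URL is what you would pass as `base_url` to `ChatOpenAI`:

```python
def normalize_base_url(url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with /v1.
    Typical local defaults: Ollama http://localhost:11434/v1,
    LM Studio http://localhost:1234/v1, vLLM http://localhost:8000/v1."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("http://localhost:11434"))  # http://localhost:11434/v1
```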
### Azure OpenAI configuration issues

**Problem:** An Azure OpenAI deployment fails with endpoint or authentication errors.

**Solution:**
- Use the `AzureChatOpenAI` class
- Get the values from the Azure portal:
  - `azure_deployment`: your deployment name (e.g., "gpt-4-deployment")
  - `azure_endpoint`: your resource endpoint
  - `api_key`: Keys and Endpoint → KEY 1 or KEY 2
  - `api_version`: use the latest version from the Azure docs
- Test the Azure endpoint
- Avoid common Azure mistakes, such as passing the model name where the deployment name is expected, or omitting `api_version`
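For a quick endpoint test, it helps to construct the exact REST URL Azure expects (the resource name and deployment name below are placeholders). Note that the URL takes the *deployment* name, not the model name, and that requests authenticate with an `api-key` header rather than a `Bearer` token:

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL for a connectivity test
    (POST to it with an "api-key" header). The path uses the deployment
    name, not the underlying model name."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

print(azure_chat_url("https://my-resource.openai.azure.com",
                     "gpt-4-deployment", "2024-02-01"))
```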
### Rate limiting or quota errors

**Problem:** `429 Too Many Requests` or `Quota exceeded` errors.

**Solution:**
- Check rate limits in the provider dashboard:
  - OpenAI: https://platform.openai.com/account/limits
  - Anthropic: https://console.anthropic.com/settings/limits
  - DeepSeek: https://platform.deepseek.com/usage
- Upgrade your account tier:
  - Many providers grant higher limits on paid tiers
  - OpenAI: move from the free tier to a paid tier
  - Check whether you need to add a payment method
- Implement retry logic (automatic in LangChain)
- Use multiple models as a fallback
- Monitor usage
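LangChain chat models already retry rate-limited requests internally, so hand-rolled retries are only needed for raw API calls. The following is a generic exponential-backoff sketch; `retry_on` stands in for the provider's actual rate-limit exception class:

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0,
                      sleep=time.sleep, retry_on=(RuntimeError,)):
    """Retry `call` with exponential backoff and jitter on rate-limit errors.
    `retry_on` should be the provider's 429 exception class; RuntimeError is
    a placeholder for this sketch."""
    for attempt in range(max_retries):
        try:
            return call()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            # Backoff doubles each attempt: 1s, 2s, 4s, ... plus jitter.
            sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
```

Injecting `sleep` makes the helper testable without real delays.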
### Model returns empty or truncated responses

**Problem:** Model responses are cut off or empty.

**Solution:**
- Increase `max_tokens`
- Check the model's actual output limits:
  - GPT-4: 8192 output tokens max
  - GPT-4 Turbo: 4096 output tokens max
  - Claude 3.5 Sonnet: 8192 output tokens max
  - DeepSeek V3: 8192 output tokens max
- Set an appropriate temperature
- Test the model directly
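Two checks catch most truncation problems: keep `max_tokens` within the model's cap, and inspect `finish_reason` on the response (OpenAI-style APIs report `"length"` when the reply hit `max_tokens`). The limits table below is a snapshot of the caps listed above; the DeepSeek entry uses `deepseek-chat`, the API's model ID for DeepSeek V3:

```python
# Output-token caps from the list above, keyed by model ID (snapshot).
OUTPUT_LIMITS = {
    "gpt-4": 8192,
    "gpt-4-turbo": 4096,
    "claude-3-5-sonnet-20241022": 8192,
    "deepseek-chat": 8192,  # DeepSeek V3
}

def clamp_max_tokens(model: str, requested: int) -> int:
    """Keep max_tokens within what the model can actually emit."""
    return min(requested, OUTPUT_LIMITS.get(model, requested))

def is_truncated(finish_reason: str) -> bool:
    """OpenAI-style responses report finish_reason; "length" means the reply
    was cut off at max_tokens — raise the limit or shorten the prompt."""
    return finish_reason == "length"
```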
## Next Steps
- Common Issues - General troubleshooting
- Performance Optimization - Improve model performance
- Configuration Guide - Complete model configuration reference