Quick Start
Get DeerFlow up and running in just a few steps. This guide covers both Docker (recommended) and local development setups.
Prerequisites: Git, and either Docker (for Docker setup) or Node.js 22+, pnpm, uv, and nginx (for local development).
Step 1: Clone the Repository
First, clone the DeerFlow repository:
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
Step 2: Configuration
Generate Configuration Files
Run the following command from the project root directory:
make config
This command creates local configuration files based on example templates:
- config.yaml - Main application configuration
- .env - Environment variables
- frontend/.env - Frontend environment variables
The make config command will abort if configuration files already exist to prevent overwriting your settings.
Edit config.yaml and define at least one model. Here’s an example with OpenAI’s GPT-4:
models:
  - name: gpt-4                      # Internal identifier
    display_name: GPT-4              # Human-readable name
    use: langchain_openai:ChatOpenAI # LangChain class path
    model: gpt-4                     # Model identifier for API
    api_key: $OPENAI_API_KEY         # API key (use env var)
    max_tokens: 4096                 # Maximum tokens per request
    temperature: 0.7                 # Sampling temperature
    supports_vision: true            # Enable vision support
Environment Variables: Config values starting with $ are resolved from environment variables (e.g., $OPENAI_API_KEY).
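To make the substitution rule concrete, here is a minimal Python sketch of how `$NAME` config values could be resolved from the environment. This is an illustration only, not DeerFlow's actual loader, which may handle nesting and missing variables differently:

```python
import os

def resolve_env_refs(value):
    """Recursively resolve config values of the form "$NAME" from env vars.

    Illustrative sketch of the rule above; not DeerFlow's real implementation.
    """
    if isinstance(value, str) and value.startswith("$"):
        # Strip the leading "$" and look up the variable; empty if unset.
        return os.environ.get(value[1:], "")
    if isinstance(value, dict):
        return {k: resolve_env_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_refs(v) for v in value]
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo"
cfg = {"models": [{"name": "gpt-4", "api_key": "$OPENAI_API_KEY"}]}
print(resolve_env_refs(cfg)["models"][0]["api_key"])  # sk-demo
```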
Set API Keys
Choose one of the following methods to configure your API keys:
Edit the .env file in the project root:
TAVILY_API_KEY=your-tavily-api-key
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
# Add other provider keys as needed
Export environment variables in your shell:
export OPENAI_API_KEY=your-openai-api-key
export TAVILY_API_KEY=your-tavily-api-key
Edit config.yaml directly:
models:
  - name: gpt-4
    api_key: sk-your-actual-api-key-here # Replace placeholder
Not recommended for production. Use environment variables instead.
Step 3: Running the Application
Docker (Recommended)
Local Development
The fastest way to get started with a consistent environment.
Initialize Docker
Pull the sandbox image (only needed once or when the image updates):
make docker-init
This downloads the sandbox container image (~500MB+) used for isolated code execution.
Start Services
Start all services:
make docker-start
This command automatically detects your sandbox mode from config.yaml and starts the appropriate services:
- Local/Docker sandbox mode: Starts frontend, gateway, langgraph, and nginx
- Provisioner mode: Also starts the provisioner service for Kubernetes-based sandboxes
Access DeerFlow
Access DeerFlow
Open your browser and navigate to:
http://localhost:2026
You should see the DeerFlow chat interface.
Useful Docker Commands
# Stop all services
make docker-stop
# View all logs
make docker-logs
# View frontend logs only
make docker-logs-frontend
# View gateway logs only
make docker-logs-gateway
Run services locally without Docker for faster iteration.
Check Prerequisites
Verify all required tools are installed:
make check
This verifies:
- Node.js 22+
- pnpm (JavaScript package manager)
- uv (Python package manager)
- nginx (reverse proxy)
If any tools are missing, the command provides installation instructions.
Install Dependencies
Install the frontend and backend dependencies. This runs:
- cd backend && uv sync - Backend Python dependencies
- cd frontend && pnpm install - Frontend JavaScript dependencies
(Optional) Pre-pull Sandbox Image
If using the Docker-based sandbox, pre-pull the container image:
make docker-init
This is optional but recommended to avoid delays during first use.
Start Services
Start all services in development mode:
make dev
This starts:
- LangGraph Server (port 2024) - Agent runtime
- Gateway API (port 8001) - REST API for models, skills, memory
- Frontend (port 3000) - Next.js web interface
- Nginx (port 2026) - Reverse proxy
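To confirm the services above are actually listening, a quick TCP probe works. The helper below is hypothetical (not part of DeerFlow) and uses only the Python standard library; the port assignments come from the list above:

```python
import socket

# Ports used in development mode, per the list above.
SERVICES = {
    "LangGraph Server": 2024,
    "Gateway API": 8001,
    "Frontend": 3000,
    "Nginx": 2026,
}

def is_listening(port, host="127.0.0.1", timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICES.items():
    status = "up" if is_listening(port) else "down"
    print(f"{name} (:{port}): {status}")
```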
Access DeerFlow
Open your browser and navigate to:
http://localhost:2026
You should see the DeerFlow chat interface.
Development Workflow
Logs are written to the logs/ directory:
# View logs
tail -f logs/langgraph.log
tail -f logs/gateway.log
tail -f logs/frontend.log
tail -f logs/nginx.log
# Stop all services
make stop
# Clean up logs and processes
make clean
Press Ctrl+C in the terminal running make dev to stop all services gracefully.
Step 4: Verify Installation
Once DeerFlow is running, verify the installation:
Check the Interface
Navigate to http://localhost:2026 and ensure the chat interface loads.
Send a Test Message
Type a simple message like “Hello, can you help me?” and verify the agent responds.
Check Model Configuration
Look for the model selector in the interface to confirm your configured models are available.
Advanced Configuration
Sandbox Mode
DeerFlow supports multiple sandbox execution modes:
Local mode
Runs sandbox code directly on the host machine. Simple but less isolated.
sandbox:
  use: src.sandbox.local:LocalSandboxProvider
Docker mode
Runs sandbox code in isolated Docker containers. Recommended for most use cases.
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  # Optional: Container image to use
  # image: enterprise-public-cn-beijing.cr.volces.com/vefaas-public/all-in-one-sandbox:latest
  # Optional: Base port for sandbox containers
  # port: 8080
  # Optional: Auto-start containers
  # auto_start: true
  # Optional: Container name prefix
  # container_prefix: deer-flow-sandbox
Platform Support:
- macOS: Automatically uses Apple Container if available, falls back to Docker
- Linux/Windows: Uses Docker
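The platform rule above could be sketched as follows. This is a hypothetical helper, not DeerFlow's actual detection logic, and it assumes Apple Container exposes a `container` binary on the PATH:

```python
import platform
import shutil

def pick_container_runtime():
    """Pick a container CLI following the platform rules above.

    Hypothetical sketch; assumes Apple Container ships a `container`
    binary. DeerFlow's real detection may differ.
    """
    if platform.system() == "Darwin" and shutil.which("container"):
        return "container"  # Apple Container on macOS, if installed
    return "docker"         # Linux/Windows, or macOS fallback

print(pick_container_runtime())
```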
Provisioner mode
Each sandbox gets a dedicated Pod in Kubernetes, managed by the provisioner service. For production or advanced users.
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://provisioner:8002
When using provisioner mode, make docker-start automatically starts the provisioner service.
MCP Servers
DeerFlow supports configurable MCP (Model Context Protocol) servers to extend capabilities.
Supported transports:
- stdio - Command-based servers (e.g., GitHub, filesystem)
- HTTP - REST API servers with OAuth support
- SSE - Server-Sent Events servers
{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    }
  }
}
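A config like the one above can be sanity-checked before use. The sketch below is illustrative only: the lowercase type names for HTTP and SSE transports are an assumption, and DeerFlow's own validation may check more fields:

```python
import json

# Assumed lowercase type names; verify against your DeerFlow version.
VALID_TYPES = {"stdio", "http", "sse"}

def validate_mcp_config(text):
    """Sanity-check an MCP server config like the example above.

    Returns a list of (server_name, problem) tuples; empty means OK.
    """
    problems = []
    cfg = json.loads(text)
    for name, server in cfg.get("mcpServers", {}).items():
        stype = server.get("type")
        if stype not in VALID_TYPES:
            problems.append((name, f"unknown type {stype!r}"))
        if stype == "stdio" and not server.get("command"):
            problems.append((name, "stdio servers need a 'command'"))
    return problems

example = """
{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
"""
print(validate_mcp_config(example))  # []
```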
Common Issues
If you see errors about ports 2024, 2026, 3000, or 8001 being in use:
# Find and kill processes using these ports
lsof -ti:2026 | xargs kill -9
lsof -ti:2024 | xargs kill -9
lsof -ti:8001 | xargs kill -9
lsof -ti:3000 | xargs kill -9
# Or use make clean
make clean
If make check reports missing tools:
# Install Node.js 22+
# Visit: https://nodejs.org/
# Install pnpm
npm install -g pnpm
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install nginx
# macOS:
brew install nginx
# Ubuntu:
sudo apt install nginx
Ensure config.yaml is in the project root:
# Check if config exists
ls -la config.yaml
# If missing, run make config
make config
Config search order:
1. DEER_FLOW_CONFIG_PATH environment variable (if set)
2. backend/config.yaml (current directory)
3. config.yaml (parent directory - recommended)
If make docker-init fails to pull the sandbox image:
# Try pulling manually
docker pull enterprise-public-cn-beijing.cr.volces.com/vefaas-public/all-in-one-sandbox:latest
# Or use a mirror (China users)
# Update config.yaml:
sandbox:
  image: your-mirror-registry/all-in-one-sandbox:latest
Verify your API keys are correctly set:
# Check environment variables
echo $OPENAI_API_KEY
# Verify .env file
cat .env
# Test config loading (from backend directory)
cd backend
python -c "from src.config import get_app_config; print(get_app_config().models[0].api_key)"
What’s Next?
Now that DeerFlow is running, explore these guides:
Configuration Guide
Deep dive into models, tools, sandbox, and memory configuration
Skills Management
Learn how to use, create, and install custom skills
Architecture
Understand DeerFlow’s technical architecture
API Reference
Complete API documentation for integration