
Quick Start

Get DeerFlow up and running in just a few steps. This guide covers both Docker (recommended) and local development setups.
Prerequisites: Git, and either Docker (for Docker setup) or Node.js 22+, pnpm, uv, and nginx (for local development).

Step 1: Clone the Repository

First, clone the DeerFlow repository:
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow

Step 2: Configuration

Generate Configuration Files

Run the following command from the project root directory:
make config
This command creates local configuration files based on example templates:
  • config.yaml - Main application configuration
  • .env - Environment variables
  • frontend/.env - Frontend environment variables
To avoid overwriting your settings, make config aborts if these configuration files already exist.

Configure Your Model

Edit config.yaml and define at least one model. Here’s an example with OpenAI’s GPT-4:
models:
  - name: gpt-4                       # Internal identifier
    display_name: GPT-4               # Human-readable name
    use: langchain_openai:ChatOpenAI  # LangChain class path
    model: gpt-4                      # Model identifier for API
    api_key: $OPENAI_API_KEY          # API key (use env var)
    max_tokens: 4096                  # Maximum tokens per request
    temperature: 0.7                  # Sampling temperature
    supports_vision: true             # Enable vision support
Environment Variables: Config values starting with $ are resolved from environment variables (e.g., $OPENAI_API_KEY).

Set API Keys

Configure your API keys either in the generated .env file or by exporting them as environment variables in your shell.

Step 3: Running the Application

Step 4: Verify Installation

Once DeerFlow is running, verify the installation:
  1. Check the Interface: Navigate to http://localhost:2026 and ensure the chat interface loads.
  2. Send a Test Message: Type a simple message like “Hello, can you help me?” and verify the agent responds.
  3. Check Model Configuration: Look for the model selector in the interface to confirm your configured models are available.

Advanced Configuration

Sandbox Mode

DeerFlow supports multiple sandbox execution modes. The local mode, for example, runs sandbox code directly on the host machine: simple, but less isolated. Enable it in config.yaml:
sandbox:
  use: src.sandbox.local:LocalSandboxProvider
See the Sandbox Configuration Guide for detailed instructions.

MCP Servers

DeerFlow supports configurable MCP (Model Context Protocol) servers to extend capabilities. Supported transports:
  • stdio - Command-based servers (e.g., GitHub, filesystem)
  • HTTP - REST API servers with OAuth support
  • SSE - Server-Sent Events servers
For example, a GitHub server configured over stdio:
{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    }
  }
}
See the MCP Server Guide for detailed setup instructions.

Common Issues

If you see errors about ports 2024, 2026, 3000, or 8001 being in use:
# Find and kill processes using these ports
lsof -ti:2026 | xargs kill -9
lsof -ti:2024 | xargs kill -9
lsof -ti:8001 | xargs kill -9
lsof -ti:3000 | xargs kill -9

# Or use make clean
make clean
If make check reports missing tools:
# Install Node.js 22+
# Visit: https://nodejs.org/

# Install pnpm
npm install -g pnpm

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install nginx
# macOS:
brew install nginx
# Ubuntu:
sudo apt install nginx
Ensure config.yaml is in the project root:
# Check if config exists
ls -la config.yaml

# If missing, run make config
make config
Config search order:
  1. DEER_FLOW_CONFIG_PATH environment variable (if set)
  2. backend/config.yaml (current directory)
  3. config.yaml (parent directory - recommended)
If make docker-init fails to pull the sandbox image:
# Try pulling manually
docker pull enterprise-public-cn-beijing.cr.volces.com/vefaas-public/all-in-one-sandbox:latest

# Or use a mirror (China users)
# Update config.yaml:
sandbox:
  image: your-mirror-registry/all-in-one-sandbox:latest
Verify your API keys are correctly set:
# Check environment variables
echo $OPENAI_API_KEY

# Verify .env file
cat .env

# Test config loading (from backend directory)
cd backend
python -c "from src.config import get_app_config; print(get_app_config().models[0].api_key)"

What’s Next?

Now that DeerFlow is running, explore these guides:

Configuration Guide

Deep dive into models, tools, sandbox, and memory configuration

Skills Management

Learn how to use, create, and install custom skills

Architecture

Understand DeerFlow’s technical architecture

API Reference

Complete API documentation for integration
Need help? Report issues at github.com/bytedance/deer-flow/issues
