Overview

Docker development provides the most consistent and hassle-free DeerFlow experience. All dependencies are pre-configured in containers, eliminating environment-specific issues.
Docker is the recommended approach for most developers. You don’t need to install Node.js, Python, or nginx on your local machine.

Benefits

Consistency

Same environment across different machines and operating systems

Isolation

Services run in isolated containers without affecting your system

No Local Dependencies

No need to install Node.js, Python, uv, pnpm, or nginx locally

Easy Cleanup

Simple to reset and clean up with Docker commands

Prerequisites

Required

  • Docker - All services run in containers via Docker Compose and the make docker-* targets

Optional

  • pnpm - For dependency caching optimization
    npm install -g pnpm
    

Docker Architecture

The Docker development environment consists of multiple services:
Host Machine
└→ Docker Compose (deer-flow-dev)
    ├→ nginx (port 2026) ← Reverse proxy
    ├→ web (port 3000) ← Frontend with hot-reload
    ├→ api (port 8001) ← Gateway API with hot-reload
    ├→ langgraph (port 2024) ← LangGraph server with hot-reload
    └→ provisioner (optional, port 8002) ← Kubernetes sandbox provisioner
All services have hot-reload enabled, so your code changes are automatically reflected without restarting containers.
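If you want to check which of the ports above are actually reachable on your machine, a small bash sketch using only the built-in /dev/tcp redirection (the service names here are just labels taken from the diagram, not real hostnames):

```shell
# Probe the dev service ports listed in the architecture diagram.
# Uses bash's /dev/tcp, so no extra tools are required.
check_port() {
  local name=$1 port=$2
  # Opening the fd in a subshell both tests the connection and closes it.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "$name (port $port): listening"
  else
    echo "$name (port $port): not listening"
  fi
}

check_port nginx 2026
check_port web 3000
check_port api 8001
check_port langgraph 2024
```

Run it before and after `make docker-start` to confirm the services came up.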

Setup Steps

1

Configure the Application

Create and configure your settings:
# Copy example configuration
cp config.example.yaml config.yaml
Edit config.yaml to configure your model and API keys:
config.yaml
models:
  - name: gpt-4
    display_name: GPT-4
    use: langchain_openai:ChatOpenAI
    model: gpt-4
    api_key: $OPENAI_API_KEY
    max_tokens: 4096
Set your API keys:
export OPENAI_API_KEY="your-key-here"
# Or edit .env file
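If you go the .env route, the file sits at the repository root and is mounted into the containers (see Volume Mounts below). A minimal sketch; include whichever variables your config.yaml references with $VARIABLE interpolation:

```shell
# .env (repository root) -- mounted into containers as /app/.env
# config.yaml refers to these via $VARIABLE, e.g. api_key: $OPENAI_API_KEY
OPENAI_API_KEY=your-key-here
```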
2

Initialize Docker Environment

Build Docker images and install dependencies (first time only):
make docker-init
This command:
  • Builds Docker images for all services
  • Installs frontend dependencies using pnpm
  • Installs backend dependencies using uv
  • Shares pnpm cache with host for faster builds
  • Pre-pulls the sandbox container image
This step may take several minutes on the first run as it downloads and builds everything.
3

Start Development Services

Start all services with hot-reload:
make docker-start
DeerFlow automatically detects your sandbox mode from config.yaml:
  • Local/Docker sandbox: Starts nginx, frontend, gateway, and langgraph
  • Provisioner/Kubernetes sandbox: Additionally starts the provisioner service
All services start with hot-reload enabled:
  • Frontend changes reload automatically
  • Backend changes trigger automatic restart
  • LangGraph server supports hot-reload
4

Access the Application

Once all services are running, open http://localhost:2026 in your browser. The nginx reverse proxy on that port routes requests to the frontend and backend services.

Docker Commands

Essential Commands

make docker-start
# Starts all Docker services in detached mode

make docker-stop
# Stops all Docker services

make docker-logs
# Tails logs from all services

Direct Docker Compose Commands

You can also use Docker Compose directly for more control:
# Start services in foreground (see live logs)
docker-compose -f docker/docker-compose-dev.yaml up

# Rebuild specific service
docker-compose -f docker/docker-compose-dev.yaml up --build gateway

# View logs for specific service
docker-compose -f docker/docker-compose-dev.yaml logs -f langgraph

# Execute command in running container
docker-compose -f docker/docker-compose-dev.yaml exec gateway bash

# Stop and remove all containers
docker-compose -f docker/docker-compose-dev.yaml down

Sandbox Modes

The Docker environment supports multiple sandbox execution modes, automatically detected from your config.yaml:
Local sandbox

Runs sandbox code directly on the host machine.
config.yaml
sandbox:
  use: src.sandbox.local:LocalSandboxProvider
Docker services started:
  • nginx, frontend, gateway, langgraph
For the provisioner/Kubernetes mode, see Provisioner Setup (Advanced) below.
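To see which provider the make targets will detect, you can inspect config.yaml yourself. A small sketch (`sandbox_provider` is a hypothetical helper, and it assumes the flat two-line sandbox: block shown above):

```shell
# Print the provider configured under the `sandbox:` key of a config file.
# Assumes the two-line layout shown above (sandbox: followed by use:).
sandbox_provider() {
  awk '/^sandbox:/ {in_block=1; next}
       in_block && /use:/ {print $2; exit}' "$1"
}

# Usage: sandbox_provider config.yaml
```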

Development Workflow

Making Code Changes

All services have hot-reload enabled:
1

Edit Code

Make changes to source files:
  • Frontend: frontend/src/**
  • Backend: backend/src/**
  • Config: config.yaml
2

See Changes Automatically

  • Frontend: Browser refreshes automatically
  • Backend: Services restart automatically
  • Config: Restart services with make docker-stop && make docker-start

Volume Mounts

These directories are mounted from your host to containers for live editing:
Mounted Directories:
  - frontend/src → /app/frontend/src
  - backend/src → /app/backend/src
  - config.yaml → /app/config.yaml
  - skills/ → /app/skills/
  - .env → /app/.env
  - backend/.deer-flow → /app/backend/.deer-flow (persistent data)

Dependency Caching

To speed up builds, these caches are mounted:
Cache Mounts:
  - ~/.local/share/pnpm/store → /root/.local/share/pnpm/store (pnpm)
  - ~/.cache/uv → /root/.cache/uv (Python uv)
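In docker/docker-compose-dev.yaml, both the live-edit mounts and the cache mounts are ordinary bind-mount entries. A sketch of the shape only; the service names and relative paths here are assumptions, and the real compose file is the source of truth:

```yaml
# Sketch only -- docker/docker-compose-dev.yaml is authoritative.
services:
  web:
    volumes:
      - ../frontend/src:/app/frontend/src                        # live-edit frontend
      - ~/.local/share/pnpm/store:/root/.local/share/pnpm/store  # shared pnpm cache
  api:
    volumes:
      - ../backend/src:/app/backend/src                          # live-edit backend
      - ../config.yaml:/app/config.yaml
      - ~/.cache/uv:/root/.cache/uv                              # shared uv cache
```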

Provisioner Setup (Advanced)

For Kubernetes-based sandbox isolation:
1

Set Up Kubernetes

Enable Kubernetes in Docker Desktop or install k3s/OrbStack.
2

Configure Environment

Set required environment variables:
export DEER_FLOW_ROOT=$(pwd)
3

Update config.yaml

Enable provisioner mode:
config.yaml
sandbox:
  use: src.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://provisioner:8002
4

Start Services

make docker-start
The provisioner service will start automatically.
See docker/provisioner/README.md for detailed configuration.

Troubleshooting

Services fail to start

Check Docker is running:
docker ps
View detailed logs:
make docker-logs
Rebuild from scratch:
make docker-stop
docker-compose -f docker/docker-compose-dev.yaml down -v
make docker-init
make docker-start
Port already in use

Stop existing services:
make docker-stop
Or find and stop the conflicting process:
lsof -ti:2026 | xargs kill -9
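The one-liner above invokes kill with an empty PID list when nothing is listening. A slightly more defensive sketch (`free_port` is a hypothetical helper, not part of the repo):

```shell
# Kill whatever is listening on the given port, but only if something is --
# avoids `kill -9` erroring out on an empty PID list from lsof.
free_port() {
  local pids
  pids=$(lsof -ti:"$1" 2>/dev/null || true)
  if [ -n "$pids" ]; then
    echo "$pids" | xargs kill -9
    echo "killed: $pids"
  else
    echo "nothing listening on port $1"
  fi
}

# Usage: free_port 2026
```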
Hot-reload not working

Ensure volumes are mounted correctly:
docker-compose -f docker/docker-compose-dev.yaml config
Force restart:
make docker-stop
make docker-start
Clean up Docker resources:
# Remove stopped containers
docker container prune

# Remove unused images
docker image prune -a

# Remove all unused data
docker system prune -a --volumes

Next Steps

Local Development

Learn about running services without Docker

Configuration

Configure models, tools, and sandbox settings

Creating Skills

Extend DeerFlow with custom skills

File Uploads

Upload and process files in conversations