Overview
Docker deployment packages the agent and all its dependencies into a container image, ensuring consistent behavior across different environments. This is the recommended approach for production deployments.
Dockerfile Architecture
The project uses a multi-stage Docker build for optimal image size and build caching:
Build Strategy
Multi-Stage Build
The Dockerfile uses two stages:
Builder Stage
Installs Poetry and creates the virtual environment with all dependencies:
FROM --platform=linux/amd64 python:3.11.9-bookworm AS builder
RUN pip install poetry==1.8.2
ENV POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN --mount=type=cache,target=$POETRY_CACHE_DIR poetry install --no-root
Runtime Stage
Creates a minimal runtime image with only the virtual environment and source code:
FROM --platform=linux/amd64 python:3.11.9-bookworm AS runtime
RUN apt-get update && apt-get install -y ffmpeg libsm6 libxext6
ENV VIRTUAL_ENV=/app/.venv \
PATH="/app/.venv/bin:$PATH"
WORKDIR /app
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY pyproject.toml poetry.lock ./
COPY prediction_market_agent prediction_market_agent
COPY scripts scripts
COPY tests tests
COPY tokenizers tokenizers
Building the Image
Basic Build
Build the Docker image from the project root:
docker build -t gnosis-prediction-agent .
Build with Cache
Reuse a previously built image as a cache source for faster builds:
docker build \
--cache-from gnosis-prediction-agent:latest \
-t gnosis-prediction-agent:latest \
.
Build with Version Tag
Tag your builds with version numbers for better tracking:
docker build \
--build-arg LANGFUSE_DEPLOYMENT_VERSION=$(git rev-parse HEAD) \
-t gnosis-prediction-agent:v1.0.0 \
.
Running Containers
Basic Run
Run a container with environment variables:
docker run \
-e BET_FROM_PRIVATE_KEY="your_private_key" \
-e OPENAI_API_KEY="your_openai_key" \
-e runnable_agent_name="prophet_gpt4o" \
-e market_type="omen" \
gnosis-prediction-agent:latest
Run with Environment File
Use a .env file for cleaner configuration:
docker run --env-file .env gnosis-prediction-agent:latest
Never commit your .env file to version control. Keep it in .gitignore.
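For reference, a minimal `.env` sketch using the variables from the run examples above (values are placeholders only):

```ini
# .env — placeholder values; never commit this file
BET_FROM_PRIVATE_KEY=your_private_key
OPENAI_API_KEY=your_openai_key
runnable_agent_name=prophet_gpt4o
market_type=omen
```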
Interactive Mode
Run a container with an interactive shell for debugging:
docker run -it \
--env-file .env \
gnosis-prediction-agent:latest \
bash
Override Command
Override the default command to run a specific agent:
docker run \
--env-file .env \
gnosis-prediction-agent:latest \
python prediction_market_agent/run_agent.py coinflip omen
Environment Configuration
Required Variables
The container CMD uses environment variables to specify the agent:
CMD ["bash", "-c", "python prediction_market_agent/run_agent.py ${runnable_agent_name} ${market_type}"]
docker run \
-e runnable_agent_name="prophet_gpt4o" \
-e market_type="omen" \
--env-file .env \
gnosis-prediction-agent:latest
System Environment Variables
The Docker image sets several system-level environment variables:
PYTHONPATH
Python module search path, set to the application directory
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
Use pure Python implementation of Protocol Buffers
TRANSFORMERS_NO_ADVISORY_WARNINGS
Disable transformers advisory warnings (the library is used only for tokenization, not PyTorch)
LANGFUSE_DEPLOYMENT_VERSION
Deployment version for Langfuse tracing (set via build arg in CI/CD)
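In the Dockerfile these would be set roughly as follows (a sketch — the `/app` path and exact values are assumptions based on the descriptions above, not the verbatim Dockerfile):

```dockerfile
ENV PYTHONPATH=/app \
    PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python \
    TRANSFORMERS_NO_ADVISORY_WARNINGS=1

# Passed in by CI/CD via --build-arg, then exposed to the running container
ARG LANGFUSE_DEPLOYMENT_VERSION
ENV LANGFUSE_DEPLOYMENT_VERSION=${LANGFUSE_DEPLOYMENT_VERSION}
```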
System Dependencies
The runtime image includes system packages required by various Python libraries:
RUN apt-get update && apt-get install -y ffmpeg libsm6 libxext6
ffmpeg - Media processing (used by some AI models)
libsm6 - Session management library
libxext6 - X11 extensions library
Docker Compose
Single Agent Setup
Create a docker-compose.yml for easier management:
version: '3.8'
services:
  prophet-agent:
    build: .
    image: gnosis-prediction-agent:latest
    environment:
      - runnable_agent_name=prophet_gpt4o
      - market_type=omen
    env_file:
      - .env
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Run with:
docker compose up -d
Multiple Agents
Run multiple agents simultaneously:
version: '3.8'
services:
  prophet-gpt4o:
    build: .
    image: gnosis-prediction-agent:latest
    environment:
      - runnable_agent_name=prophet_gpt4o
      - market_type=omen
    env_file:
      - .env
    restart: unless-stopped
  microchain:
    build: .
    image: gnosis-prediction-agent:latest
    environment:
      - runnable_agent_name=microchain
      - market_type=omen
    env_file:
      - .env
    restart: unless-stopped
  social-media:
    build: .
    image: gnosis-prediction-agent:latest
    environment:
      - runnable_agent_name=social_media
      - market_type=omen
    env_file:
      - .env
    restart: unless-stopped
With Database
Add a PostgreSQL database for agents that need persistence:
version: '3.8'
services:
  database:
    image: postgres:15
    environment:
      - POSTGRES_DB=prediction_agent
      - POSTGRES_USER=agent
      - POSTGRES_PASSWORD=secure_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
  agent:
    build: .
    image: gnosis-prediction-agent:latest
    environment:
      - runnable_agent_name=prophet_gpt4o
      - market_type=omen
      - SQLALCHEMY_DB_URL=postgresql://agent:secure_password@database:5432/prediction_agent
    env_file:
      - .env
    depends_on:
      - database
    restart: unless-stopped
volumes:
  postgres_data:
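Note that the `database` hostname in `SQLALCHEMY_DB_URL` resolves through Compose's service discovery. As a sketch of how such a URL breaks down into connection settings, the standard library can split it; `db_settings_from_env` is an illustrative helper, not project code:

```python
import os
from urllib.parse import urlsplit

def db_settings_from_env() -> dict:
    """Parse SQLALCHEMY_DB_URL into its components (illustrative helper)."""
    parts = urlsplit(os.environ.get("SQLALCHEMY_DB_URL", ""))
    return {
        "scheme": parts.scheme,
        "user": parts.username,
        "host": parts.hostname,   # "database" — the Compose service name
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }
```

With the URL from the compose file above, this yields host `database`, port `5432`, and database `prediction_agent`.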
CI/CD Integration
GitHub Actions
The project includes automated Docker builds in .github/workflows/python_cd.yaml:
name: Python CD
on:
  pull_request:
  push:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build-and-push-image:
    if: (contains(github.event.pull_request.body, 'build please') && github.event_name == 'pull_request') || (github.event_name == 'push' && github.ref == 'refs/heads/main')
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@4a13e500e55cf31b7a5d59a38ab2040ab0f42f56
        with:
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: |
            LANGFUSE_DEPLOYMENT_VERSION=${{ github.sha }}
Image Registry
Images are automatically pushed to GitHub Container Registry (ghcr.io):
# Pull the latest image
docker pull ghcr.io/gnosis/prediction-market-agent:main
# Run the image
docker run --env-file .env ghcr.io/gnosis/prediction-market-agent:main
Trigger Builds
Automatic on Main
Pushes to the main branch automatically trigger builds
Manual PR Builds
Add “build please” to the PR description to trigger a build:
## Changes
- Updated agent logic
- Fixed bug in market parsing
build please
Image Optimization
Size Reduction
The multi-stage build reduces image size significantly:
Without multi-stage build: ~2.5 GB (includes Poetry and build tools)
With multi-stage build: ~1.2 GB (runtime dependencies only)
Layer Caching
Optimize build times by ordering Dockerfile commands strategically:
Install system dependencies (rarely changes)
Copy pyproject.toml and poetry.lock (changes occasionally)
Install Python dependencies (cached until lockfile changes)
Copy source code (changes frequently)
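Applied to this project's builder stage, that ordering looks roughly like this (a sketch, not the verbatim Dockerfile):

```dockerfile
# 1. Base image and build tooling (rarely changes)
FROM python:3.11.9-bookworm AS builder
RUN pip install poetry==1.8.2

# 2. Dependency manifests only (changes occasionally)
WORKDIR /app
COPY pyproject.toml poetry.lock ./

# 3. Dependency install (cached until the lockfile changes)
RUN --mount=type=cache,target=/tmp/poetry_cache poetry install --no-root

# 4. Source code last (changes frequently, invalidates only this layer)
COPY prediction_market_agent prediction_market_agent
```

Editing a source file under step 4 leaves the expensive dependency layer in step 3 untouched, so rebuilds take seconds instead of minutes.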
Build Cache Mount
The Poetry cache is mounted during build to avoid re-downloading packages:
RUN --mount=type=cache,target=$POETRY_CACHE_DIR poetry install --no-root
Monitoring and Logs
View Logs
# Follow logs
docker logs -f <container_id>
# Last 100 lines
docker logs --tail 100 <container_id>
# With timestamps
docker logs -t <container_id>
Container Stats
# Real-time stats
docker stats <container_id>
# All containers
docker stats
Health Checks
Add a health check to your Docker Compose:
services:
  agent:
    build: .
    healthcheck:
      test: ["CMD", "python", "-c", "import sys; sys.exit(0)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Troubleshooting
If the build fails, try clearing the build cache:
docker builder prune
docker build --no-cache -t gnosis-prediction-agent .
Increase Docker memory limits in Docker Desktop settings or add resource limits:
services:
  agent:
    build: .
    deploy:
      resources:
        limits:
          memory: 4G
Missing Environment Variables
Ensure all required variables are set. Check logs:
docker logs <container_id> 2>&1 | grep -i "error\|missing"
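One way to fail fast inside the container is a small startup guard that reports every missing variable at once; `check_required_env` and the exact variable list are illustrative, not part of the project:

```python
import os

# Variables the run examples in this guide rely on (illustrative list)
REQUIRED_VARS = (
    "BET_FROM_PRIVATE_KEY",
    "OPENAI_API_KEY",
    "runnable_agent_name",
    "market_type",
)

def check_required_env(required=REQUIRED_VARS) -> None:
    """Raise early with a clear message instead of failing mid-run."""
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
```

Calling this at the top of the agent entrypoint turns a cryptic mid-run failure into a single readable log line.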
Next Steps
Cloud Deployment Deploy containers to Google Kubernetes Engine (GKE)
Environment Config Complete environment variable reference
Local Development Run agents locally without Docker
Contributing Contribute to the project on GitHub