
Overview

Base Audit Bot can be deployed in multiple ways depending on your infrastructure preferences. This guide covers deployment strategies, production considerations, and best practices.

Deployment Options

Docker Compose

Recommended for most use cases. Simple, reproducible, and includes volume management.

Docker

Direct Docker deployment for custom orchestration or Kubernetes.

Python

Run directly with Python for development or simple deployments.

Cloud

Deploy to cloud platforms like AWS, GCP, or Azure.

Docker Compose Deployment

Recommended approach: includes automatic restart, volume management, health checks, and log rotation.

Prerequisites

  • Docker Engine 20.10+
  • Docker Compose 2.0+
  • Configured .env file
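A quick pre-flight check can confirm the version minimums before deploying. This is a sketch, not part of the shipped tooling; the `version_ge` helper name is ours:

```shell
#!/bin/sh
# version_ge A B: succeeds (exit 0) when dotted version A >= B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Verify the installed Docker Engine against the 20.10 minimum.
docker_version=$(docker version --format '{{.Server.Version}}' 2>/dev/null)
if version_ge "${docker_version:-0}" "20.10"; then
    echo "Docker ${docker_version} OK"
else
    echo "Docker 20.10+ required (found: ${docker_version:-none})" >&2
fi
```

The same helper works for the Compose 2.0+ check via `docker compose version --short`.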

Deployment Steps

1. Configure environment

Copy and configure your .env file:

cp .env.example .env
nano .env  # Edit with your API keys

2. Create data directories

mkdir -p data logs

3. Build and start the container

docker-compose up -d

This will:
  • Build the Docker image from Dockerfile
  • Create persistent volumes for data and logs
  • Start the container in detached mode
  • Enable automatic restart on failure

4. Verify deployment

Check container status:

docker-compose ps

View logs:

docker-compose logs -f

Check health:

curl http://localhost:5000/health
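If you script this verification (for example in CI), a small poll loop gives the container time to pass its start period. A sketch, with helper names of our choosing:

```shell
#!/bin/sh
# is_healthy BODY: succeeds when a /health response reports "healthy".
is_healthy() {
    printf '%s' "$1" | grep -q '"status": *"healthy"'
}

# wait_healthy URL: poll up to 10 times, ~3s apart; succeed on first healthy reply.
wait_healthy() {
    i=0
    while [ "$i" -lt 10 ]; do
        body=$(curl -fsS "$1" 2>/dev/null) && is_healthy "$body" && return 0
        i=$((i + 1))
        sleep 3
    done
    return 1
}

# Usage after `docker-compose up -d`:
#   wait_healthy http://localhost:5000/health && echo "bot is healthy"
```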

Container Configuration

The docker-compose.yml includes production-ready settings:
services:
  audit-bot:
    restart: unless-stopped  # Automatic restart
    ports:
      - "5000:5000"          # Webhook server
    volumes:
      - ./data:/app/data     # Persistent database
      - ./logs:/app/logs     # Persistent logs
      - bot-temp:/app/temp_repos  # Ephemeral clones
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"      # Rotate at 10MB
        max-file: "3"        # Keep 3 files

Management Commands

# Start the bot
docker-compose up -d

# Stop the bot
docker-compose down

# Restart the bot
docker-compose restart

# View logs
docker-compose logs -f

# View recent logs
docker-compose logs --tail=100

# Update to latest code
git pull
docker-compose build
docker-compose up -d

# Clean rebuild
docker-compose down
docker-compose build --no-cache
docker-compose up -d

Standalone Docker

For custom orchestration or Kubernetes deployments.
1. Build the image

docker build -t base-audit-bot .

2. Run the container

docker run -d \
  --name base-audit-bot \
  --restart unless-stopped \
  -p 5000:5000 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/logs:/app/logs \
  --env-file .env \
  base-audit-bot
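For Kubernetes, the compose settings above translate roughly to a single-replica Deployment. This is a minimal sketch only; the Secret and PersistentVolumeClaim names (`base-audit-bot-env`, `base-audit-bot-data`) are placeholders you would create yourself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-audit-bot
spec:
  replicas: 1                      # single instance by design
  selector:
    matchLabels:
      app: base-audit-bot
  template:
    metadata:
      labels:
        app: base-audit-bot
    spec:
      containers:
        - name: base-audit-bot
          image: base-audit-bot:latest
          ports:
            - containerPort: 5000
          envFrom:
            - secretRef:
                name: base-audit-bot-env   # e.g. kubectl create secret generic base-audit-bot-env --from-env-file=.env
          livenessProbe:                   # mirrors the compose healthcheck
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 10
            periodSeconds: 30
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: base-audit-bot-data
```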

Python Deployment

For development or simple production deployments without Docker.
1. Install Python 3.11+

python --version  # Verify Python 3.11+

2. Create virtual environment

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

3. Install dependencies

pip install -r requirements.txt

4. Configure environment

cp .env.example .env
nano .env

5. Run the bot

python bot.py

Use a process manager like systemd, supervisor, or pm2 for production.

Systemd Service (Linux)

Create /etc/systemd/system/base-audit-bot.service:
[Unit]
Description=Base Audit Bot
After=network.target

[Service]
Type=simple
User=botuser
WorkingDirectory=/opt/base-audit-bot
Environment="PATH=/opt/base-audit-bot/venv/bin"
ExecStart=/opt/base-audit-bot/venv/bin/python bot.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl enable base-audit-bot
sudo systemctl start base-audit-bot
sudo systemctl status base-audit-bot

Cloud Deployment

AWS (ECS/EC2)

1. Push image to ECR

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account>.dkr.ecr.us-east-1.amazonaws.com
docker tag base-audit-bot:latest <account>.dkr.ecr.us-east-1.amazonaws.com/base-audit-bot:latest
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/base-audit-bot:latest

2. Create ECS task definition

Configure environment variables in the ECS task definition, or use AWS Secrets Manager.

3. Deploy to ECS

Use an ECS service with:
  • Desired count: 1
  • Load balancer: optional (for the webhook endpoint)
  • Auto-restart enabled

Use AWS Secrets Manager for secure credential storage:

aws secretsmanager create-secret --name base-audit-bot/config --secret-string file://.env
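A stored secret can then be injected via the `secrets` field of the container definition, so credentials never appear in the task definition itself. A sketch of the relevant fragment; `GITHUB_TOKEN` is a hypothetical variable name and the ARN is illustrative:

```json
{
  "name": "audit-bot",
  "image": "<account>.dkr.ecr.us-east-1.amazonaws.com/base-audit-bot:latest",
  "portMappings": [{ "containerPort": 5000 }],
  "secrets": [
    {
      "name": "GITHUB_TOKEN",
      "valueFrom": "arn:aws:secretsmanager:us-east-1:<account>:secret:base-audit-bot/config:GITHUB_TOKEN::"
    }
  ]
}
```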

Google Cloud (Cloud Run)

# Build and deploy
gcloud builds submit --tag gcr.io/PROJECT-ID/base-audit-bot
gcloud run deploy base-audit-bot \
  --image gcr.io/PROJECT-ID/base-audit-bot \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --env-vars-file .env.yaml

DigitalOcean App Platform

  1. Connect your GitHub repository
  2. Configure environment variables in the dashboard
  3. Set port to 5000
  4. Deploy

Production Considerations

Security

Critical security practices:
  • Never expose .env file or commit to version control
  • Use strong webhook secrets (32+ random characters)
  • Restrict webhook endpoint to GitHub IPs if possible
  • Enable HTTPS for webhook endpoint
  • Rotate API keys regularly
  • Monitor logs for suspicious activity
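Any cryptographically random source satisfies the 32+ character guideline for webhook secrets. For example:

```shell
# 64 hex characters = 32 random bytes. With OpenSSL installed:
#   openssl rand -hex 32
# Portable /dev/urandom fallback:
secret=$(tr -dc 'a-f0-9' < /dev/urandom | head -c 64)
echo "WEBHOOK_SECRET=${secret}"
```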

Networking

Required ports:
  • 5000/tcp: Webhook server (inbound)
  • 443/tcp: HTTPS for API calls (outbound)
# Example: UFW firewall
sudo ufw allow 5000/tcp

For production webhook endpoints, terminate TLS with a reverse proxy such as nginx:
server {
    listen 443 ssl;
    server_name bot.yourdomain.com;
    
    ssl_certificate /etc/letsencrypt/live/bot.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bot.yourdomain.com/privkey.pem;
    
    location /webhook/github {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    
    location /health {
        proxy_pass http://localhost:5000;
    }
}

Monitoring

1. Health checks

Monitor the health endpoint:

curl http://localhost:5000/health

Expected response:

{"status": "healthy", "timestamp": "2026-03-03T12:00:00.000000"}

2. Log monitoring

Set up log aggregation:
  • Docker: Use the json-file driver with rotation (already configured)
  • Production: Forward to ELK, Datadog, or CloudWatch
  • Alert on ERROR and CRITICAL logs

3. Database monitoring

Monitor database growth:

ls -lh data/bot.db

Check recent activity:

sqlite3 data/bot.db "SELECT COUNT(*) FROM audits WHERE date(audit_date) = date('now')"
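Growth checks are easy to automate. A sketch suitable for cron; the `check_db_size` helper and the 1 GiB default threshold are our choices, not shipped tooling:

```shell
#!/bin/sh
# check_db_size FILE [MAX_BYTES]: warn and fail when FILE exceeds MAX_BYTES.
check_db_size() {
    db="$1"
    max="${2:-1073741824}"   # default threshold: 1 GiB
    size=$(stat -c %s "$db" 2>/dev/null || echo 0)
    if [ "$size" -gt "$max" ]; then
        echo "WARNING: $db is ${size} bytes (limit ${max})" >&2
        return 1
    fi
}

check_db_size data/bot.db   # run from cron; alert on non-zero exit
```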

Resource Requirements

Recommended minimum resources:
  • CPU: 1 core
  • RAM: 1 GB
  • Disk: 10 GB (for database, logs, temp repos)
  • Network: Stable internet connection with low latency to RPC endpoint

Backup Strategy

Back up the database regularly to prevent data loss. Note that copying a SQLite file while the bot is writing to it can produce a corrupt backup; sqlite3's online .backup command takes a consistent snapshot instead.

#!/bin/bash
# backup.sh - automated daily backup

BACKUP_DIR="/backups/base-audit-bot"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"
# Consistent snapshot even while the bot is running
sqlite3 data/bot.db ".backup '$BACKUP_DIR/bot_$DATE.db'"

# Keep last 30 days
find "$BACKUP_DIR" -name "bot_*.db" -mtime +30 -delete

Add to crontab:

0 2 * * * /opt/base-audit-bot/backup.sh

Scaling Considerations

The bot is designed to run as a single instance. For high-volume deployments:
  • Use multiple instances with different contract filters
  • Implement distributed locking for database writes
  • Consider Redis for shared state
  • Use message queue for audit requests
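Truly distributed locking needs an external service (e.g. Redis with a redlock-style scheme), but when multiple processes share one host, flock(1) is enough to serialize database writers. A sketch; the `with_db_lock` wrapper and lock path are our naming:

```shell
#!/bin/sh
# with_db_lock CMD...: run CMD while holding an exclusive lock on LOCKFILE.
LOCKFILE="${LOCKFILE:-/tmp/base-audit-bot.db.lock}"

with_db_lock() {
    (
        flock -w 30 9 || { echo "could not acquire DB lock" >&2; exit 1; }
        "$@"
    ) 9>"$LOCKFILE"
}

# Example: two writers cannot interleave inside the locked section.
with_db_lock sh -c 'echo "writer 1"'
with_db_lock sh -c 'echo "writer 2"'
```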

Updating the Bot

1. Pull latest changes

git pull origin main

2. Rebuild (Docker)

docker-compose down
docker-compose build
docker-compose up -d

3. Update dependencies (Python)

pip install -r requirements.txt --upgrade

4. Verify

docker-compose logs -f  # or check Python logs
curl http://localhost:5000/health

Troubleshooting Deployment

See the Troubleshooting Guide for common deployment issues and solutions.
