Overview
Base Audit Bot can be deployed in multiple ways depending on your infrastructure preferences. This guide covers deployment strategies, production considerations, and best practices.
Deployment Options
Docker Compose: Recommended for most use cases. Simple, reproducible, and includes volume management.
Docker: Direct Docker deployment for custom orchestration or Kubernetes.
Python: Run directly with Python for development or simple deployments.
Cloud: Deploy to cloud platforms like AWS, GCP, or Azure.
Docker Compose Deployment
Recommended approach: includes automatic restart, volume management, health checks, and log rotation.
Prerequisites
Docker Engine 20.10+
Docker Compose 2.0+
Configured .env file
Deployment Steps
Configure environment
Copy and configure your .env file: cp .env.example .env
nano .env # Edit with your API keys
Build and start the container:
docker-compose build
docker-compose up -d
This will:
Build the Docker image from Dockerfile
Create persistent volumes for data and logs
Start the container in detached mode
Enable automatic restart on failure
Verify deployment
Check container status: docker-compose ps
View logs: docker-compose logs -f
Check health: curl http://localhost:5000/health
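For scripted deployments, the verification step can be automated with a small retry loop. This is a minimal sketch assuming the default port 5000 and `/health` path used throughout this guide:

```shell
# wait_for_health: poll a health endpoint until it answers with HTTP 2xx,
# retrying every 2 seconds, or give up after a number of attempts.
# Usage: wait_for_health <url> [attempts]
wait_for_health() {
  local url="$1"
  local attempts="${2:-30}"
  local i
  for i in $(seq 1 "$attempts"); do
    if curl -fsS --max-time 5 "$url" > /dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 2
  done
  echo "health check failed after $attempts attempts" >&2
  return 1
}

# Example: wait_for_health http://localhost:5000/health 30
```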
Container Configuration
The docker-compose.yml includes production-ready settings:
services:
  audit-bot:
    restart: unless-stopped          # Automatic restart
    ports:
      - "5000:5000"                  # Webhook server
    volumes:
      - ./data:/app/data             # Persistent database
      - ./logs:/app/logs             # Persistent logs
      - bot-temp:/app/temp_repos     # Ephemeral clones
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"              # Rotate at 10MB
        max-file: "3"                # Keep 3 files
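Compose automatically merges a docker-compose.override.yml over the base file, which is a convenient place for host-specific tweaks without editing docker-compose.yml. The values below are illustrative, not required settings:

```yaml
# docker-compose.override.yml: merged over docker-compose.yml automatically
# by `docker-compose up`. The values below are examples only.
services:
  audit-bot:
    environment:
      - TZ=UTC              # pin container timezone for consistent log timestamps
    logging:
      options:
        max-size: "50m"     # keep more log history on this host
```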
Management Commands
# Start the bot
docker-compose up -d
# Stop the bot
docker-compose down
# Restart the bot
docker-compose restart
# View logs
docker-compose logs -f
# View recent logs
docker-compose logs --tail=100
# Update to latest code
git pull
docker-compose build
docker-compose up -d
# Clean rebuild
docker-compose down
docker-compose build --no-cache
docker-compose up -d
Standalone Docker
For custom orchestration or Kubernetes deployments.
Build the image
docker build -t base-audit-bot .
Run the container
docker run -d \
  --name base-audit-bot \
  --restart unless-stopped \
  -p 5000:5000 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  --env-file .env \
  base-audit-bot
Python Deployment
For development or simple production deployments without Docker.
Install Python 3.11+
python --version # Verify Python 3.11+
Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Install dependencies
pip install -r requirements.txt
Configure environment
cp .env.example .env
nano .env
Run the bot:
python bot.py
Use a process manager like systemd, supervisor, or pm2 for production.
Systemd Service (Linux)
Create /etc/systemd/system/base-audit-bot.service:
[Unit]
Description=Base Audit Bot
After=network.target

[Service]
Type=simple
User=botuser
WorkingDirectory=/opt/base-audit-bot
Environment="PATH=/opt/base-audit-bot/venv/bin"
ExecStart=/opt/base-audit-bot/venv/bin/python bot.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable base-audit-bot
sudo systemctl start base-audit-bot
sudo systemctl status base-audit-bot
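The unit above puts the venv on PATH but does not load the API keys from .env. If your .env uses plain KEY=value lines (no `export`, no shell quoting systemd would misread), it can be loaded directly via a drop-in; the path below assumes the /opt/base-audit-bot layout from the unit:

```ini
# /etc/systemd/system/base-audit-bot.service.d/override.conf
# Load the bot's environment without duplicating keys in the unit file.
[Service]
EnvironmentFile=/opt/base-audit-bot/.env
```

Apply with sudo systemctl daemon-reload followed by sudo systemctl restart base-audit-bot.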
Cloud Deployment
AWS (ECS/EC2)
Push image to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account>.dkr.ecr.us-east-1.amazonaws.com
docker tag base-audit-bot:latest <account>.dkr.ecr.us-east-1.amazonaws.com/base-audit-bot:latest
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/base-audit-bot:latest
Create ECS task definition
Configure environment variables in ECS task definition or use AWS Secrets Manager.
Deploy to ECS
Use ECS service with:
Desired count: 1
Load balancer: Optional (for webhook endpoint)
Auto-restart enabled
Use AWS Secrets Manager for secure credential storage:
aws secretsmanager create-secret --name base-audit-bot/config --secret-string file://.env
Google Cloud (Cloud Run)
# Build and deploy
gcloud builds submit --tag gcr.io/PROJECT-ID/base-audit-bot
gcloud run deploy base-audit-bot \
--image gcr.io/PROJECT-ID/base-audit-bot \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--env-vars-file .env.yaml
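Note that --env-vars-file expects a YAML mapping, not dotenv KEY=value syntax, so your .env cannot be passed as-is. A sketch of the expected format, with placeholder keys; use the variable names from your own .env:

```yaml
# .env.yaml: YAML mapping consumed by `gcloud run deploy --env-vars-file`.
# Keys and values below are placeholders.
GITHUB_TOKEN: "ghp_example"
WEBHOOK_SECRET: "change-me"
```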
Connect your GitHub repository
Configure environment variables in the dashboard
Set port to 5000
Deploy
Production Considerations
Security
Critical security practices:
Never expose .env file or commit to version control
Use strong webhook secrets (32+ random characters)
Restrict webhook endpoint to GitHub IPs if possible
Enable HTTPS for webhook endpoint
Rotate API keys regularly
Monitor logs for suspicious activity
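A strong secret can be generated with `openssl rand -hex 32`. GitHub signs every webhook delivery with HMAC-SHA256 over the raw request body using that secret, sent in the X-Hub-Signature-256 header as "sha256=<digest>". The helper below is a sketch for checking a saved delivery by hand; it assumes the openssl CLI is available:

```shell
# Generate a strong webhook secret (32 random bytes = 64 hex characters):
#   openssl rand -hex 32

# verify_signature: recompute GitHub's HMAC-SHA256 payload signature from a
# saved request body and compare it to the X-Hub-Signature-256 header value.
verify_signature() {
  local payload_file="$1" secret="$2" header_value="$3"
  local computed
  computed=$(openssl dgst -sha256 -hmac "$secret" < "$payload_file" | awk '{print $NF}')
  [ "sha256=$computed" = "$header_value" ]
}
```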
Networking
Required ports:
5000/tcp: Webhook server (inbound)
443/tcp: HTTPS for API calls (outbound)
# Example: UFW firewall
sudo ufw allow 5000/tcp
For production webhook endpoints, put the bot behind an HTTPS reverse proxy such as nginx:

server {
    listen 443 ssl;
    server_name bot.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/bot.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bot.yourdomain.com/privkey.pem;

    location /webhook/github {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://localhost:5000;
    }
}
Monitoring
Health checks
Monitor the health endpoint: curl http://localhost:5000/health
Expected response: {"status": "healthy", "timestamp": "2026-03-03T12:00:00.000000"}
Log monitoring
Set up log aggregation:
Docker: Use json-file driver with rotation (already configured)
Production: Forward to ELK, Datadog, or CloudWatch
Alert on ERROR and CRITICAL logs
Database monitoring
Monitor database growth: du -h data/bot.db
Check recent activity: sqlite3 data/bot.db "SELECT COUNT(*) FROM audits WHERE date(audit_date) = date('now')"
Resource Requirements
Recommended minimum resources:
CPU: 1 core
RAM: 1 GB
Disk: 10 GB (for database, logs, temp repos)
Network: Stable internet connection with low latency to RPC endpoint
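Disk is the resource most likely to drift over time (database growth, failed log rotation, orphaned temp clones). A minimal sketch of a usage check against the 10 GB budget above; the directory paths and thresholds in the example line are assumptions, not values fixed by the bot:

```shell
# check_disk_usage: warn when a directory exceeds a size budget in megabytes.
# Returns non-zero when over budget so it can drive alerts from cron.
check_disk_usage() {
  local dir="$1" limit_mb="$2"
  local used_mb
  used_mb=$(du -sm "$dir" | awk '{print $1}')
  if [ "$used_mb" -gt "$limit_mb" ]; then
    echo "WARNING: $dir uses ${used_mb}MB (limit ${limit_mb}MB)" >&2
    return 1
  fi
  return 0
}

# Example budgets (illustrative):
#   check_disk_usage ./data 8000 && check_disk_usage ./logs 1000
```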
Backup Strategy
Backup the database regularly to prevent data loss.
#!/bin/bash
# backup.sh: automated daily backup

BACKUP_DIR="/backups/base-audit-bot"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"
cp data/bot.db "$BACKUP_DIR/bot_$DATE.db"

# Keep last 30 days
find "$BACKUP_DIR" -name "bot_*.db" -mtime +30 -delete
Add to crontab:
0 2 * * * /opt/base-audit-bot/backup.sh
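One caveat with backup.sh: `cp` on a live SQLite file can capture a half-written transaction. If the sqlite3 CLI is available, its `.backup` command takes a consistent online snapshot instead; a minimal sketch:

```shell
# safe_backup: copy a live SQLite database consistently. Plain `cp` can
# capture a partially written page while the bot is mid-transaction;
# SQLite's .backup command takes a proper online snapshot instead.
# Requires the sqlite3 CLI.
safe_backup() {
  local db="$1" dest="$2"
  sqlite3 "$db" ".backup '$dest'"
}

# Drop-in replacement for the cp line in backup.sh:
#   safe_backup data/bot.db "$BACKUP_DIR/bot_$DATE.db"
```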
Scaling Considerations
The bot is designed to run as a single instance. For high-volume deployments:
Use multiple instances with different contract filters
Implement distributed locking for database writes
Consider Redis for shared state
Use message queue for audit requests
Updating the Bot
Rebuild (Docker)
docker-compose down
docker-compose build
docker-compose up -d
Update dependencies (Python)
pip install -r requirements.txt --upgrade
Verify
docker-compose logs -f # or check Python logs
curl http://localhost:5000/health
Troubleshooting Deployment
See the Troubleshooting Guide for common deployment issues and solutions.