Overview
The simplest way to deploy Vega AI is using Docker. The official Docker images are hosted on GitHub Container Registry and support both AMD64 and ARM64 architectures (including Apple Silicon).
Prerequisites
Quick Start
Create configuration file
Create a directory for Vega AI and a configuration file:
# Create a directory for Vega AI
mkdir vega-ai && cd vega-ai
# Create a config file with your Gemini API key
echo "GEMINI_API_KEY=your-gemini-api-key" > config
Run the container
Start Vega AI with a single command:
docker run --pull always -d \
--name vega-ai \
-p 8765:8765 \
-v vega-data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
Access the application
Visit http://localhost:8765
Log in with default credentials:
Username: admin
Password: VegaAdmin
Important: Change your password after first login via Settings → Account
Docker Image Details
Multi-Architecture Support
The Docker images support both AMD64 and ARM64 architectures:
# Works on both Intel/AMD and ARM processors (including Apple Silicon)
docker pull ghcr.io/benidevo/vega-ai:latest
Available tags:
latest - Latest stable release
v*.*.* - Specific version tags (e.g., v1.0.0)
main - Latest development build from the main branch
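To pin a deployment to a fixed release instead of tracking latest, pull a version tag (v1.0.0 below is only an illustrative tag; check the registry for actual releases):

```shell
# Pin to a specific version tag instead of the moving "latest"
docker pull ghcr.io/benidevo/vega-ai:v1.0.0
```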
Configuration
Minimal Configuration
Only the GEMINI_API_KEY is required:
GEMINI_API_KEY=your-gemini-api-key
Recommended Configuration
For production deployments, also set a custom JWT token secret:
GEMINI_API_KEY=your-gemini-api-key
TOKEN_SECRET=your-super-secret-jwt-key-here
Generate a secure token secret using: openssl rand -base64 32
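For example, the generated secret can be appended straight to the config file created in the Quick Start above:

```shell
# Append a freshly generated JWT secret to the existing config file
echo "TOKEN_SECRET=$(openssl rand -base64 32)" >> config
```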
Custom Admin Credentials
Override default admin credentials:
GEMINI_API_KEY=your-gemini-api-key
TOKEN_SECRET=your-super-secret-jwt-key-here
ADMIN_USERNAME=myadmin
ADMIN_PASSWORD=MySecurePassword123
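As a sketch, the whole config file can be written in one step, with the secret and password generated rather than hard-coded (the username is just a placeholder):

```shell
# Write a complete config file; secrets are generated, not typed by hand
cat > config <<EOF
GEMINI_API_KEY=your-gemini-api-key
TOKEN_SECRET=$(openssl rand -base64 32)
ADMIN_USERNAME=myadmin
ADMIN_PASSWORD=$(openssl rand -base64 18)
EOF
```

Remember the generated password: print it once with `grep ADMIN_PASSWORD config` and store it in a password manager.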
Docker Run Options
Basic Deployment
docker run -d \
--name vega-ai \
-p 8765:8765 \
-v vega-data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
Custom Port
docker run -d \
--name vega-ai \
-p 3000:8765 \
-v vega-data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
Access at http://localhost:3000
Custom Data Directory
Use a bind mount instead of a named volume:
docker run -d \
--name vega-ai \
-p 8765:8765 \
-v /path/to/data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
With Restart Policy
docker run -d \
--name vega-ai \
--restart unless-stopped \
-p 8765:8765 \
-v vega-data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
Container Management
View Logs
# Follow logs in real-time
docker logs -f vega-ai
# View last 100 lines
docker logs --tail 100 vega-ai
Stop Container
docker stop vega-ai
Start Container
docker start vega-ai
Restart Container
docker restart vega-ai
Remove Container
# Stop and remove
docker stop vega-ai
docker rm vega-ai
Update to Latest Version
# Pull latest image
docker pull ghcr.io/benidevo/vega-ai:latest
# Stop and remove old container
docker stop vega-ai
docker rm vega-ai
# Start new container with same configuration
docker run -d \
--name vega-ai \
--restart unless-stopped \
-p 8765:8765 \
-v vega-data:/app/data \
--env-file config \
ghcr.io/benidevo/vega-ai:latest
Your data persists in the vega-data volume, so updating the image won’t affect it.
Data Persistence
Vega AI stores all data in /app/data inside the container:
Database: /app/data/vega.db (SQLite with WAL mode)
Cache: /app/data/cache/ (temporary AI generation cache)
Always mount a volume to /app/data to persist your data across container restarts and updates.
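To confirm the volume exists and see where Docker stores it on the host, something like this works:

```shell
# Show the volume's host mountpoint and metadata
docker volume inspect vega-data

# List the files inside the volume via a throwaway container
docker run --rm -v vega-data:/data alpine ls -la /data
```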
Backup and Restore
Backup Data
# Create backup directory
mkdir -p backups
# Copy data from container
docker cp vega-ai:/app/data/vega.db backups/vega-$(date +%Y%m%d).db
Or with named volume:
# Create a temporary container to access the volume
docker run --rm \
-v vega-data:/data \
-v $(pwd)/backups:/backup \
alpine \
cp /data/vega.db /backup/vega-$(date +%Y%m%d).db
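Because the database runs in WAL mode, a plain file copy taken while the app is writing can miss recent transactions. A safer sketch uses SQLite's online backup instead of cp (the `apk add sqlite` step installs the `sqlite3` CLI in the throwaway Alpine container, which doesn't ship it by default):

```shell
# Consistent backup using SQLite's .backup command inside a helper container
docker run --rm \
  -v vega-data:/data \
  -v $(pwd)/backups:/backup \
  alpine \
  sh -c 'apk add --no-cache sqlite >/dev/null && sqlite3 /data/vega.db ".backup /backup/vega-$(date +%Y%m%d).db"'
```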
Restore Data
# Stop the container
docker stop vega-ai
# Copy backup to volume
docker run --rm \
-v vega-data:/data \
-v $(pwd)/backups:/backup \
alpine \
cp /backup/vega-20260305.db /data/vega.db
# Start the container
docker start vega-ai
Troubleshooting
Container Won’t Start
Check logs for errors:
docker logs vega-ai
Verify environment variables:
docker inspect vega-ai | grep -A 10 Env
Check if port is already in use:
netstat -tulpn | grep 8765
Permission Issues
The container runs as a non-root user (UID 1001). If using bind mounts, ensure proper permissions:
sudo chown -R 1001:1001 /path/to/data
Database Locked Errors
If you see “database is locked” errors:
Stop the container
Check for stale lock files:
docker run --rm -v vega-data:/data alpine ls -la /data/
Remove any .db-shm or .db-wal files if corrupted
Restart the container
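The recovery steps above might look like this (only remove the -shm/-wal files if you are confident they are stale, since they normally hold not-yet-checkpointed data):

```shell
# Stop the app, clear stale SQLite sidecar files, and restart
docker stop vega-ai
docker run --rm -v vega-data:/data alpine rm -f /data/vega.db-shm /data/vega.db-wal
docker start vega-ai
```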
Health Checks
Check if Vega AI is running properly:
# Check container status
docker ps | grep vega-ai
# Check application health (if health endpoint exists)
curl http://localhost:8765/
# Monitor resource usage
docker stats vega-ai
Next Steps
Docker Compose - Use Docker Compose for easier management
Environment Variables - Complete environment variable reference
Docker Swarm - Deploy in Docker Swarm mode for high availability