Overview
This guide covers deploying the e-commerce API to production using Docker containers. The application supports both standalone Docker deployment and orchestrated multi-service deployment with Docker Compose.
Docker Deployment
Dockerfile Architecture
The application uses Ubuntu 22.04 as the base image with Python 3.10 support.
Build Docker Image
Build the Docker image from the project root:
docker build -t ecommerce-api:latest .
The build process:
Installs Python 3.10 and system dependencies
Installs MySQL client libraries
Copies application files to /app
Installs Python dependencies from requirement.txt
Installs OpenTelemetry for observability
Configures gunicorn with 8 workers
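Because the build copies the entire project into /app, a .dockerignore keeps the build context small and prevents local artifacts (and secrets such as .env) from being baked into the image. A minimal sketch, assuming a typical repo layout:

```
# .dockerignore (sketch -- adjust to your repo layout)
.git
.env
__pycache__/
*.pyc
logs/
backups/
```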
Run Docker Container
Run the container with environment configuration:
docker run -d \
  --name ecommerce-api \
  -p 8001:8001 \
  --env-file .env \
  ecommerce-api:latest
Verify Container Health
Check container logs and health:
# View logs
docker logs ecommerce-api
# Check health endpoint
curl http://localhost:8001/api/v2/health
Dockerfile Configuration
The production Dockerfile includes:
FROM ubuntu:22.04
# Install Python 3.10
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.10 python3.10-dev python3.10-distutils python3-pip
# Install system dependencies
RUN apt-get install -y build-essential libffi-dev libssl-dev libxml2-dev \
libjpeg-dev libcurl4-openssl-dev g++ vim net-tools libmysqlclient-dev
# Configure timezone (Asia/Kolkata)
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y tzdata
RUN ln -fs /usr/share/zoneinfo/Asia/Kolkata /etc/localtime && \
dpkg-reconfigure --frontend noninteractive tzdata
# Create Python 3.10 symlinks
RUN ln -sf /usr/bin/python3.10 /usr/bin/python3
RUN ln -sf /usr/bin/python3.10 /usr/bin/python
# Set up application directory
RUN mkdir /app
COPY ./ /app/
WORKDIR /app
# Configure environment
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
# Install Python dependencies
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install setuptools wheel
RUN python3 -m pip install -r requirement.txt
# Install OpenTelemetry
RUN python3 -m pip install opentelemetry-distro opentelemetry-exporter-otlp && \
opentelemetry-bootstrap -a install && \
python3 -m pip install grpcio==1.78.0
# Start with OpenTelemetry instrumentation
CMD ["opentelemetry-instrument", "gunicorn", "main:app", "-b", "0.0.0.0:8001", "-w", "8", "-t", "8", "--timeout", "1200", "--log-level", "debug"]
EXPOSE 8001
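The image itself can also declare a health check, so plain `docker run` (without Compose) marks the container healthy or unhealthy. A hedged addition to the Dockerfile above, assuming the curl binary is present in the image (add `apt-get install -y curl` to the dependency layer if it is not):

```dockerfile
# Mirror the Compose healthcheck inside the image itself.
# Assumes curl is installed in the image.
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:8001/api/v2/health || exit 1
```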
Docker Compose Deployment
Full Stack Deployment
Deploy the complete application stack with all dependencies:
docker-compose -f docker-compose.python310.yml up -d
Services Architecture
The Docker Compose setup includes:
API Service
Celery Worker
Redis Cache
MongoDB
services:
  tss-api-python310:
    build:
      context: .
      dockerfile: Dockerfile_python310
    container_name: tss-api-python310
    ports:
      - "8001:8001"
    environment:
      - PYTHONPATH=/app
      - ENV=staging
    volumes:
      - .:/app
      - /app/__pycache__
    depends_on:
      - redis
      - mongodb
    networks:
      - tss-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/api/v2/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Neo4j Graph Database (Optional)
For advanced graph-based features, deploy Neo4j:
docker-compose up -d neo4j
services:
  neo4j:
    image: neo4j:5.21
    container_name: neo4j-cgc
    restart: unless-stopped
    ports:
      - "7474:7474"  # HTTP
      - "7687:7687"  # Bolt
    environment:
      - NEO4J_AUTH=neo4j/password
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
    volumes:
      - neo4j_data:/data
      - neo4j_logs:/logs
Access Neo4j Browser at http://localhost:7474
Flower Monitoring
Monitor Celery tasks with Flower:
services:
  flower:
    image: mher/flower:2.0
    container_name: tss-flower
    ports:
      - "5555:5555"
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - FLOWER_PORT=5555
    depends_on:
      - redis
      - tss-celery-python310
    networks:
      - tss-network
    restart: unless-stopped
Access Flower dashboard at http://localhost:5555
Environment Variables for Production
Never commit .env files with production credentials to version control. Use secret management services or environment variable injection.
Required Production Variables
# Application
PORT=8001
ENV=production
API_HOST=https://api.yourdomain.com
# Database - MySQL
HOST=your-mysql-host
DBPORT=3306
USER=your-db-user
PASSWORD=your-secure-password
DATABASE=your-database-name
POOL_SIZE=10
MAX_OVERFLOW=50
# Database - MongoDB
MONGO_HOST=your-mongodb-host
MONGO_PORT=27017
MONGO_DATABASE=tss
MONGO_USER=your-mongo-user
MONGO_PASSWORD=your-mongo-password
# Cache - Redis
REDIS_HOST=your-redis-host
REDIS_PORT=6379
BEAKER_REDIS_URL=your-redis-host:6379
# Security
SECRET_KEY=your-256-bit-secret-key
JWT_EXPIRY_DAY=30
# AWS Services
AWS_ACCESS_KEY=your-aws-access-key
AWS_SECRET_KEY=your-aws-secret-key
AWS_REGION=ap-south-1
See Configuration Reference for all variables.
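A deployment script can fail fast when one of these variables is missing, instead of letting the API crash on its first request. A minimal sketch in Python; the set of required names below is an illustrative subset of the list above, so adjust it to your actual configuration:

```python
import os

# Illustrative subset of the variables listed above.
REQUIRED_VARS = ["PORT", "ENV", "HOST", "USER", "PASSWORD", "DATABASE",
                 "REDIS_HOST", "SECRET_KEY"]

def missing_vars(environ=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]
```

Run it as a pre-start step and abort the deploy if `missing_vars()` returns anything.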
Production Server Configuration
Gunicorn Production Settings
Optimal production configuration:
gunicorn main:app \
--bind 0.0.0.0:8001 \
--workers 8 \
--threads 8 \
--timeout 1200 \
--log-level info \
--access-logfile /var/log/gunicorn/access.log \
--error-logfile /var/log/gunicorn/error.log \
--capture-output \
--enable-stdio-inheritance
Worker Configuration
Workers: 2 × CPU cores + 1 (e.g., 9 workers for 4 cores)
Threads: 8 per worker for I/O-bound operations
Timeout: 1200 seconds minimum for long-running tasks
Worker class: sync (default) for CPU-bound, gevent for I/O-bound
Calculate Optimal Workers
import multiprocessing
# For CPU-bound applications
workers = multiprocessing.cpu_count() * 2 + 1
# For I/O-bound applications (APIs, databases)
workers = multiprocessing.cpu_count() * 4 + 1
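The same formula can live in a gunicorn.conf.py so the worker count adapts to the host it runs on. A sketch mirroring the command-line flags above (the file is picked up with `gunicorn -c gunicorn.conf.py main:app`):

```python
# gunicorn.conf.py -- settings mirror the CLI flags used above.
import multiprocessing

bind = "0.0.0.0:8001"
workers = multiprocessing.cpu_count() * 2 + 1  # CPU-bound rule of thumb
threads = 8       # per-worker threads for I/O-bound requests
timeout = 1200    # long-running tasks
loglevel = "info"
```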
Celery Production Configuration
For production Celery workers:
celery -A tasks worker \
--loglevel=info \
--concurrency=8 \
--pool=eventlet \
--max-tasks-per-child=1000 \
--time-limit=3600 \
--soft-time-limit=3000
Queue Configuration
The application uses multiple queues:
celery - Default queue
callback - Customer callback requests
cancel-order - Order cancellation processing
fb-events-queue - Facebook events
kinesis-order - Order stream to AWS Kinesis
kinesis-product - Product stream to AWS Kinesis
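Tasks are typically directed to these queues via Celery's task_routes setting. A sketch, assuming hypothetical task module paths (the queue names come from the list above; the "tasks.*" names are illustrative, not the project's actual task names):

```python
# Maps task names to the queues listed above.
# The "tasks.*" paths are assumptions -- replace with your real task names.
task_routes = {
    "tasks.process_callback": {"queue": "callback"},
    "tasks.cancel_order":     {"queue": "cancel-order"},
    "tasks.push_fb_event":    {"queue": "fb-events-queue"},
    "tasks.stream_order":     {"queue": "kinesis-order"},
    "tasks.stream_product":   {"queue": "kinesis-product"},
    # Anything unmatched falls back to the default "celery" queue.
}
```

Apply it with `app.conf.task_routes = task_routes`, and point workers at specific queues with `celery -A tasks worker -Q callback,cancel-order`.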
OpenTelemetry Integration
The application includes OpenTelemetry instrumentation for observability:
# Automatic instrumentation
opentelemetry-instrument gunicorn main:app -b 0.0.0.0:8001 -w 8
Configure exporter endpoints via environment variables:
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-collector:4317
OTEL_SERVICE_NAME=ecommerce-api
OTEL_ENVIRONMENT=production
Health Check Endpoints
Primary Health Check
curl http://localhost:8001/api/v2/health
Response:
{
"status" : "healthy" ,
"timestamp" : "2026-03-08T12:00:00Z" ,
"services" : {
"database" : "up" ,
"redis" : "up" ,
"mongodb" : "up"
}
}
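A client or deploy script should treat the deployment as healthy only when the top-level status is "healthy" and every listed service reports "up". A minimal check in Python, using the response shape shown above:

```python
import json

def is_healthy(body: str) -> bool:
    """Return True only if status is 'healthy' and every service is 'up'."""
    data = json.loads(body)
    return (data.get("status") == "healthy"
            and all(v == "up" for v in data.get("services", {}).values()))
```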
Container Health Checks
Docker health checks are configured for all services:
API: curl -f http://localhost:8001/api/v2/health
Celery: celery -A tasks inspect ping
Redis: redis-cli ping
MongoDB: mongosh --eval "db.adminCommand('ping')"
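These checks are also useful in deploy scripts that wait for a service to come up before proceeding. A small retry loop, written with an injectable probe so it can wrap any of the checks above (in practice the probe would curl the health endpoint; the function names here are illustrative):

```python
import time

def wait_until_ready(probe, retries=10, delay=2.0, sleep=time.sleep):
    """Call probe() until it returns True or retries are exhausted."""
    for _ in range(retries):
        if probe():
            return True
        sleep(delay)
    return False
```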
Load Balancer Configuration
For load balancers (Nginx, HAProxy), configure health checks:
upstream api_backend {
    server api-server-1:8001 max_fails=3 fail_timeout=30s;
    server api-server-2:8001 max_fails=3 fail_timeout=30s;
    # Active health check (requires the nginx_upstream_check_module)
    check interval=10000 rise=2 fall=3 timeout=5000 type=http;
    check_http_send "GET /api/v2/health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
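The check directives above are not part of stock nginx; they come from the nginx_upstream_check_module (bundled with Tengine). HAProxy has active health checking built in, so an equivalent backend might look like this (backend and server names are illustrative):

```haproxy
backend api_backend
    option httpchk GET /api/v2/health
    http-check expect status 200
    server api-server-1 api-server-1:8001 check inter 10s fall 3 rise 2
    server api-server-2 api-server-2:8001 check inter 10s fall 3 rise 2
```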
Deployment Commands
Deploy Full Stack
# Pull latest images
docker-compose pull
# Build and start services
docker-compose -f docker-compose.python310.yml up -d --build
# View logs
docker-compose logs -f tss-api-python310
# Check service status
docker-compose ps
Update Deployment
# Pull latest code
git pull origin main
# Rebuild containers
docker-compose -f docker-compose.python310.yml build
# Rolling update (zero downtime)
docker-compose -f docker-compose.python310.yml up -d --no-deps --build tss-api-python310
# Restart Celery workers
docker-compose restart tss-celery-python310
Scale Services
# Scale API workers
docker-compose -f docker-compose.python310.yml up -d --scale tss-api-python310=3
# Scale Celery workers
docker-compose -f docker-compose.python310.yml up -d --scale tss-celery-python310=4
Note: to scale a service beyond one replica, remove its fixed container_name and static host-port mapping (e.g., publish a port range such as "8001-8003:8001"); Compose cannot start multiple containers with the same name or host port.
Monitoring and Logging
Container Logs
# View all logs
docker-compose logs -f
# View specific service
docker-compose logs -f tss-api-python310
# View last 100 lines
docker-compose logs --tail=100 tss-api-python310
Persistent Logging
Configure volume mounts for logs:
volumes :
- ./logs/gunicorn:/var/log/gunicorn
- ./logs/application:/tmp
Backup and Recovery
Database Backups
MongoDB Backup:
docker exec tss-mongodb mongodump --out /backup/$(date +%Y%m%d)
Redis Backup:
docker exec tss-redis redis-cli BGSAVE
docker cp tss-redis:/data/dump.rdb ./backups/redis-$(date +%Y%m%d).rdb
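Dated backups like these accumulate, so a small rotation script can prune anything older than a retention window. A sketch; the backups/ directory and the 14-day retention period are assumptions to adjust:

```python
import time
from pathlib import Path

def prune_backups(directory, keep_days=14, now=None):
    """Delete files in `directory` older than keep_days; return deleted names."""
    now = time.time() if now is None else now
    cutoff = now - keep_days * 86400
    deleted = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            deleted.append(path.name)
    return sorted(deleted)
```

Run it from cron after each backup, e.g. `prune_backups("./backups", keep_days=14)`.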
Volume Management
# List volumes
docker volume ls
# Backup volume
docker run --rm -v redis_data:/data -v $(pwd):/backup ubuntu tar czf /backup/redis-data-backup.tar.gz /data
# Restore volume
docker run --rm -v redis_data:/data -v $(pwd):/backup ubuntu tar xzf /backup/redis-data-backup.tar.gz -C /
Troubleshooting
Check logs for errors:
docker-compose logs tss-api-python310
Common issues:
Port 8001 already in use
Database connection failure
Missing environment variables
Debug the health endpoint:
docker exec -it tss-api-python310 curl http://localhost:8001/api/v2/health
Check that service dependencies are running:
docker-compose ps redis mongodb
Monitor resource usage, and adjust the worker count or add memory limits:
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
Next Steps
Configure Environment Variables for your environment
Set up monitoring and alerting
Configure SSL/TLS certificates
Implement CI/CD pipelines
Set up automated backups