
Overview

Interview Simulator is containerized using Docker and can be deployed to any environment that supports Docker containers. The application uses Gunicorn as the production WSGI server.

Docker Deployment

1. Prepare environment variables

Create a .env file based on .env.example with your production credentials:
cp .env.example .env
Edit the .env file and set required values:
  • SECRET_KEY: A strong random secret for Flask sessions
  • GEMINI_API_KEY: Your Google Gemini API key (optional if using OpenRouter)
  • OPENROUTER_API_KEY: Your OpenRouter API key (optional if using Gemini)
  • FLASK_ENV: Set to production
  • DATABASE_URL: Database connection string (defaults to SQLite)
See Configuration for complete environment variable documentation.
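Putting those variables together, a production .env might look like the following sketch (all values are illustrative placeholders, and the SQLite path is an assumption rather than the project's documented default):

```shell
# .env — illustrative values only; never commit this file
SECRET_KEY=replace-with-64-hex-chars
FLASK_ENV=production
GEMINI_API_KEY=your-gemini-key
OPENROUTER_API_KEY=your-openrouter-key
# Optional; omit to use the SQLite default
DATABASE_URL=sqlite:///instance/app.db
```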
2. Build the Docker image

The Dockerfile uses the Python 3.14 slim base image and installs dependencies from requirements.txt:
docker build -t interview-simulator .
The image:
  • Installs Python dependencies
  • Copies application code to /app
  • Exposes port 8000
  • Runs Gunicorn with 4 workers
3. Run with Docker Compose

Use the provided docker-compose.yml for easy orchestration:
docker-compose up -d
This configuration:
  • Builds the image from the current directory
  • Maps port 8000 to the host
  • Loads environment variables from .env
  • Mounts ./instance directory for SQLite database persistence
4. Verify deployment

Check that the application is running:
curl http://localhost:8000
docker-compose logs -f

Production Considerations

WSGI Server

The application uses Gunicorn as the production WSGI server, configured in the Dockerfile:
CMD ["gunicorn", "wsgi:app", "--bind", "0.0.0.0:8000", "--workers", "4"]
The wsgi.py file at the project root serves as the entry point:
wsgi.py
from app import create_app

app = create_app()

if __name__ == '__main__':
    app.run(debug=True)
The if __name__ == '__main__' block with debug=True is only used for local development. In production, Gunicorn imports the app object directly and debug mode is disabled.

Worker Configuration

The default configuration uses 4 Gunicorn workers. Adjust based on your server resources:
# Custom worker count
gunicorn wsgi:app --bind 0.0.0.0:8000 --workers 8
Recommended formula: (2 * CPU_cores) + 1

Database Persistence

By default, the application uses SQLite with the database stored in the instance/ directory. The docker-compose configuration mounts this directory as a volume:
volumes:
  - ./instance:/app/instance
This ensures database persistence across container restarts. For production at scale, consider using PostgreSQL:
DATABASE_URL=postgresql://user:password@host:5432/dbname
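A common pattern for the SQLite fallback described above looks like this (a sketch; the exact default path is an assumption, not taken from the project's config module):

```python
import os

# Use DATABASE_URL when set (e.g. the PostgreSQL URL above);
# otherwise fall back to a local SQLite file. The path is illustrative.
database_url = os.environ.get("DATABASE_URL", "sqlite:///instance/app.db")
```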

Reverse Proxy

For production deployments, run the application behind a reverse proxy like Nginx:
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Security Checklist

1. Generate a strong SECRET_KEY

Never use the default dev-secret-key-change-in-production in production:
python -c 'import secrets; print(secrets.token_hex(32))'
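The same generation can be done inside a Python script or shell:

```python
import secrets

# token_hex(32) draws 32 random bytes and renders them as 64 hex
# characters — ample entropy for Flask session signing.
secret_key = secrets.token_hex(32)
```

Store the generated value in .env or your secrets manager; never hard-code it.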
2. Set FLASK_ENV=production

Ensures debug mode is disabled and optimizations are enabled.
3. Secure API keys

Store API keys in .env file (which is gitignored) or use a secrets manager.
4. Configure file upload limits

The default MAX_CONTENT_LENGTH is 16MB. Adjust if needed for your use case.
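The 16MB default corresponds to a byte count like this (a sketch; the project's actual config module may express it differently):

```python
# Flask reads MAX_CONTENT_LENGTH in bytes; requests with larger bodies
# are rejected with HTTP 413 before reaching view code.
MAX_CONTENT_LENGTH = 16 * 1024 * 1024  # 16 MB

# Example of a raised cap, if your use case needs larger uploads:
LARGER_LIMIT = 32 * 1024 * 1024  # 32 MB
```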
5. Enable HTTPS

Always use HTTPS in production. Configure SSL/TLS at the reverse proxy level.

Monitoring and Logs

View application logs:
# Follow logs
docker-compose logs -f web

# View last 100 lines
docker-compose logs --tail=100 web
Gunicorn logs include:
  • HTTP request logs (access log)
  • Application errors (error log)
  • Worker process information

Scaling

To scale horizontally:
  1. Use a shared database (PostgreSQL) instead of SQLite
  2. Deploy multiple container instances behind a load balancer
  3. Configure shared storage for uploads (S3, NFS, etc.)
  4. Use a centralized session store (Redis) if needed
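For step 4, a Redis-backed session store might be wired up as follows (a configuration sketch assuming the third-party Flask-Session and redis packages, and a Redis host named redis — none of which are confirmed by the project):

```python
import redis
from flask import Flask
from flask_session import Session  # third-party: Flask-Session

app = Flask(__name__)
# Keep session data in Redis so any container instance can serve any user.
app.config["SESSION_TYPE"] = "redis"
app.config["SESSION_REDIS"] = redis.from_url("redis://redis:6379/0")
Session(app)
```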

Environment-Specific Builds

Create environment-specific Docker images:
# Production Dockerfile
FROM python:3.14-slim
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV FLASK_ENV=production
EXPOSE 8000

CMD ["gunicorn", "wsgi:app", "--bind", "0.0.0.0:8000", "--workers", "4", "--access-logfile", "-", "--error-logfile", "-"]
