Core Requirements
Python Environment
LangShazam requires Python 3.9 or higher. The official Docker image uses Python 3.9-slim.
From backend/deployment/docker/Dockerfile:1:
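Given the Python 3.9-slim base noted above, that first line presumably pins the slim image; a sketch:

```dockerfile
# Base image pinned to the slim Python 3.9 variant (as stated above)
FROM python:3.9-slim
```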
Python Dependencies
The following packages are required (from backend/requirements.txt):
- FastAPI: Modern web framework for building APIs
- Uvicorn: ASGI server for running the application
- OpenAI SDK: Integration with Whisper API for language detection
- WebSockets: Real-time communication support
- python-multipart: File upload handling
- psutil: System metrics monitoring
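An unpinned sketch of backend/requirements.txt based on the list above (the actual file likely pins exact versions, which are not reproduced here):

```text
fastapi
uvicorn[standard]
openai
websockets
python-multipart
psutil
```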
Platform-Specific Requirements
Docker Deployment
- Docker Engine: version 20.10 or higher
- Docker Compose: version 2.0 or higher (optional)
- CPU: 1 core (2+ recommended)
- RAM: 512 MB minimum, 1 GB recommended
- Disk: 2 GB for image and logs
- Network: Port 10000 available
Kubernetes Deployment
Prerequisites:
- Kubernetes cluster v1.24+
- kubectl CLI configured
- Docker registry access (ECR, Docker Hub, etc.)
- Ingress controller (nginx recommended)
Resource requirements (from kubernetes/manifests/deployment.yaml:31-36):
Each pod requests 500m CPU and 512 MB RAM, so:
- 2 replicas (default) × 500m CPU = 1 vCPU minimum
- 2 replicas × 512 MB = 1 GB RAM minimum
- Auto-scaling up to 10 replicas requires 5 vCPU and 5 GB RAM
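Given the per-replica figures above, the resources block at deployment.yaml:31-36 presumably resembles the following sketch (the limits values are an assumption):

```yaml
resources:
  requests:
    cpu: 500m        # 2 replicas × 500m = 1 vCPU minimum
    memory: 512Mi    # 2 replicas × 512Mi = 1 GiB minimum
  limits:
    cpu: 500m
    memory: 512Mi
```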
EC2 Deployment
AWS Account Requirements:
- EC2 permissions
- CloudFormation permissions
- VPC and Security Group creation permissions
- SSH key pair
Supported instance types (from ec2/ec2-config.yaml:14):
| Instance Type | vCPU | RAM | Cost/Month | Best For |
|---|---|---|---|---|
| t2.micro | 1 | 1 GB | ~$8.50 | Development |
| t2.small | 1 | 2 GB | ~$17 | Light production |
| t2.medium | 2 | 4 GB | ~$34 | Production |
| t3.micro | 2 | 1 GB | ~$7.50 | Burst workloads |
| t3.small | 2 | 2 GB | ~$15 | Production |
Software requirements on the instance:
- Amazon Linux 2
- Docker Engine
- Docker Compose v2.24.5+
- Git
- jq
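The software list above could be provisioned on Amazon Linux 2 with something like the following sketch (package names and the Compose plugin path are assumptions, not taken from the repo's scripts):

```shell
# Provisioning sketch for Amazon Linux 2 (run as root, e.g. via user data)
yum update -y
yum install -y docker git jq
systemctl enable --now docker
# Install Docker Compose v2.24.5 as a CLI plugin
mkdir -p /usr/local/lib/docker/cli-plugins
curl -SL "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-linux-x86_64" \
  -o /usr/local/lib/docker/cli-plugins/docker-compose
chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
```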
Render Deployment
Render manages all infrastructure automatically. No manual setup required.
- 512 MB RAM
- Shared CPU
- Automatic sleep after 15 minutes of inactivity
- Cold start time: 30-60 seconds
- Starting at $7/month
- Dedicated resources
- No automatic sleep
- Faster deployment and scaling
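If the project deploys via a Render Blueprint, a minimal render.yaml sketch might look like this (the service name, plan, and Dockerfile path are assumptions):

```yaml
services:
  - type: web
    name: langshazam          # assumed service name
    runtime: docker
    dockerfilePath: backend/deployment/docker/Dockerfile
    plan: starter             # paid tier; the free tier sleeps after 15 min
```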
API Keys and Secrets
OpenAI API Key (Required)
How to get an OpenAI API key:
- Sign up at platform.openai.com
- Navigate to API keys section
- Create a new secret key
- Ensure your account has credit/billing enabled
Whisper API pricing:
- $0.006 per minute of audio
- Example: 1000 detections × 5 seconds = ~$0.50
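The example figure works out as follows (a quick sanity check, not project code):

```shell
# 1000 detections × 5 s of audio = 5000 s ≈ 83 minutes
detections=1000
seconds_each=5
minutes=$(( detections * seconds_each / 60 ))
# 83 minutes × $0.006/min ≈ $0.50
awk -v m="$minutes" 'BEGIN { printf "$%.2f\n", m * 0.006 }'   # prints $0.50
```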
Base64 Encoding for Kubernetes
For Kubernetes deployments, encode your API key (see kubernetes/manifests/secrets.yaml:9):
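For example (the key shown is a placeholder; the -n flag matters, since a trailing newline would otherwise be encoded into the secret):

```shell
# Encode the key without a trailing newline
echo -n "sk-your-api-key" | base64
# → c2steW91ci1hcGkta2V5
```

The resulting string goes in the `data` field of the Secret manifest.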
Network Requirements
Ports
| Port | Protocol | Purpose | Required |
|---|---|---|---|
| 10000 | HTTP/WS | Application server | Yes |
| 80 | HTTP | Web traffic (with proxy) | Optional |
| 443 | HTTPS | Secure traffic (with SSL) | Optional |
| 22 | SSH | Server access (EC2 only) | EC2 only |
Outbound Connectivity
The application needs outbound HTTPS (port 443) access to reach the OpenAI API at api.openai.com.
Firewall Rules
For EC2 (from ec2/ec2-config.yaml:91-107):
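Those rules likely mirror the ports table above; a CloudFormation-style sketch (the open CIDRs are assumptions — tighten SSH to your own IP in production):

```yaml
SecurityGroupIngress:
  - IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 0.0.0.0/0   # SSH; restrict in production
  - IpProtocol: tcp
    FromPort: 80
    ToPort: 80
    CidrIp: 0.0.0.0/0   # HTTP
  - IpProtocol: tcp
    FromPort: 443
    ToPort: 443
    CidrIp: 0.0.0.0/0   # HTTPS
  - IpProtocol: tcp
    FromPort: 10000
    ToPort: 10000
    CidrIp: 0.0.0.0/0   # Application server
```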
Storage Requirements
Docker Volumes
From ec2/docker-compose.yml:40:
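A volume mount of this shape would account for the log figures below (host and container paths are assumptions):

```yaml
volumes:
  - ./logs:/app/logs   # persist application logs on the host
```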
- Logs: ~100 MB per day (varies with traffic)
- Docker image: ~200 MB
Kubernetes Persistent Storage
Not required for stateless deployments, but recommended for:
- Log persistence
- Metrics storage
- Application state (if adding future features)
Health Check Requirements
All deployments include health checks on the root endpoint. From backend/deployment/docker/Dockerfile:17-18:
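Those two lines presumably define a HEALTHCHECK against the root endpoint on port 10000; a sketch (the interval, timeout, and use of curl are assumptions):

```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:10000/ || exit 1
```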
Optional Requirements
SSL/TLS Certificates
For production deployments with custom domains:
- Let's Encrypt: Free, auto-renewable certificates
- AWS ACM: Free for AWS resources
- Custom certificates: Can be mounted in containers
Monitoring and Observability
While not required, consider:
- Metrics endpoint: Available at /metrics (built-in)
- Log aggregation: CloudWatch, ELK, Datadog
- APM: New Relic, Datadog, Prometheus
Next Steps
- Environment Variables: Configure required environment variables
- Choose Deployment: Select your deployment platform

