LangShazam can be deployed to various platforms depending on your infrastructure needs, scalability requirements, and operational expertise.

Deployment Options

LangShazam supports multiple deployment strategies:

  • Docker: Containerized deployment for any environment
  • Kubernetes: Production-grade orchestration with auto-scaling
  • AWS EC2: Traditional server deployment with CloudFormation
  • Render: One-click deployment to a managed platform

Comparison Matrix

| Feature        | Docker              | Kubernetes                | EC2                   | Render            |
|----------------|---------------------|---------------------------|-----------------------|-------------------|
| Complexity     | Low                 | High                      | Medium                | Very Low          |
| Auto-scaling   | Manual              | Automatic (HPA)           | Manual                | Automatic         |
| Cost           | Infrastructure only | Infrastructure + overhead | EC2 + EBS costs       | Pay-per-use       |
| Setup Time     | 5 minutes           | 30-60 minutes             | 15-30 minutes         | 2 minutes         |
| Best For       | Development, testing | Production at scale      | Custom infrastructure | Quick deployments |
| Load Balancing | Manual              | Built-in                  | Nginx included        | Automatic         |
| SSL/HTTPS      | Manual              | cert-manager              | Let’s Encrypt         | Automatic         |

Choosing the Right Option

Use Docker When:

  • Running locally for development
  • Deploying to any cloud provider or VPS
  • You need maximum portability
  • You want simple container management

Use Kubernetes When:

  • You need production-grade scalability
  • You require automatic failover and healing
  • You’re serving high-traffic applications
  • You have DevOps expertise

Use EC2 When:

  • You prefer AWS infrastructure
  • You need custom networking or security groups
  • You want full control over the server
  • You’re using other AWS services

Use Render When:

  • You want the fastest deployment
  • You prefer managed infrastructure
  • You’re prototyping or building an MVP
  • You don’t want to manage servers

Architecture Overview

All deployment options run the same core components:
┌─────────────────────────────────────────┐
│           Client Application            │
│      (Browser/Mobile WebSocket)         │
└──────────────────┬──────────────────────┘

                   │ WSS Connection

┌──────────────────▼──────────────────────┐
│         LangShazam Backend              │
│  ┌────────────────────────────────┐    │
│  │  FastAPI + Uvicorn Server      │    │
│  │  - WebSocket Handler           │    │
│  │  - Audio Processing            │    │
│  │  - CORS Middleware             │    │
│  └────────────┬───────────────────┘    │
│               │                         │
│  ┌────────────▼───────────────────┐    │
│  │     OpenAI Whisper API         │    │
│  │  (Language Detection Service)  │    │
│  └────────────────────────────────┘    │
└─────────────────────────────────────────┘

System Requirements

Before deploying, ensure you meet the system requirements.

Environment Configuration

All deployments require proper environment variables and CORS setup.
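Since every platform reads the same variables, a small start-up check (standard library only) can fail fast on missing configuration. The variable names `OPENAI_API_KEY` and `CORS_ORIGINS` are assumptions for illustration; the deployment guides list the exact names LangShazam expects.

```python
import os

REQUIRED = ["OPENAI_API_KEY"]  # assumed variable name

def load_config(env=os.environ):
    """Return a config dict, raising if a required variable is unset."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {
        "openai_api_key": env["OPENAI_API_KEY"],
        # Comma-separated list of allowed frontend origins for CORS.
        "cors_origins": [
            origin.strip()
            for origin in env.get("CORS_ORIGINS", "").split(",")
            if origin.strip()
        ],
    }
```

Calling this once at process start keeps a misconfigured container from accepting traffic before failing deep inside a request handler.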

Next Steps

1. Review Requirements: Check the system requirements for your chosen platform.
2. Configure Environment: Set up environment variables, including your OpenAI API key.
3. Deploy: Follow the deployment guide for your chosen platform.
4. Configure CORS: Set up CORS for your frontend domain.
