You can deploy FastAPI applications to virtually any cloud provider. This guide covers popular platforms and deployment strategies.

Cloud Deployment Options

Choose based on your needs:
  • Platform as a Service (PaaS) - Easiest, minimal configuration
  • Container Services - Flexible, Docker-based
  • Kubernetes - Maximum control, complex setup
  • Serverless - Pay per request, auto-scaling
  • Virtual Machines - Full control, more management
Start with PaaS for simplicity, then migrate to containers or Kubernetes as your needs grow.

Platform as a Service (PaaS)

PaaS platforms handle infrastructure, letting you focus on code.

Render

Render is a modern PaaS with excellent FastAPI support.
1. Create render.yaml

services:
  - type: web
    name: fastapi-app
    runtime: python
    buildCommand: pip install -r requirements.txt
    startCommand: uvicorn main:app --host 0.0.0.0 --port $PORT
    envVars:
      - key: PYTHON_VERSION
        value: 3.12.0
2. Connect Repository

Link your GitHub/GitLab repository in Render dashboard.
3. Deploy

Render automatically deploys on every git push.
Features:
  • Free tier available
  • Automatic HTTPS
  • Auto-deploy from git
  • Built-in monitoring
Render automatically provides HTTPS certificates and handles renewals.

Railway

Railway offers simple deployment with great developer experience.
1. Install Railway CLI

npm install -g @railway/cli
2. Login and Initialize

railway login
railway init
3. Deploy

railway up
railway.json (optional):
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "uvicorn main:app --host 0.0.0.0 --port $PORT",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 10
  }
}
Features:
  • Generous free tier
  • PostgreSQL/Redis included
  • Environment variables management
  • Automatic HTTPS

Fly.io

Fly.io runs your app in containers close to your users.
1. Install Flyctl

curl -L https://fly.io/install.sh | sh
2. Launch App

flyctl launch
This creates a fly.toml configuration file.
3. Deploy

flyctl deploy
fly.toml:
app = "my-fastapi-app"
primary_region = "iad"

[build]
  builder = "paketobuildpacks/builder:base"

[env]
  PORT = "8000"

[http_service]
  internal_port = 8000
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 256
Features:
  • Global edge deployment
  • Automatic HTTPS
  • Scale to zero
  • Pay-as-you-go pricing
Fly.io is excellent for global applications - it runs containers in multiple regions close to your users.

Heroku

Heroku is a mature PaaS with extensive documentation.
1. Create Procfile

web: uvicorn main:app --host 0.0.0.0 --port $PORT
2. Create runtime.txt

python-3.12.0
3. Deploy

heroku login
heroku create my-fastapi-app
git push heroku main
Features:
  • Add-ons marketplace (databases, monitoring, etc.)
  • Automatic HTTPS
  • Easy scaling
  • CI/CD integration
Heroku removed its free tier in 2022. Consider Render or Railway for free hosting.

Container Services

Deploy Docker containers without managing orchestration.

AWS App Runner

App Runner automatically builds and deploys containerized applications.
1. Create Dockerfile

FROM python:3.12
WORKDIR /code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY ./app ./app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
2. Deploy via AWS Console

  1. Open App Runner console
  2. Create service from source code or ECR
  3. Configure port (8000)
  4. Deploy
apprunner.yaml:
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt
run:
  command: uvicorn main:app --host 0.0.0.0 --port 8000
  network:
    port: 8000
Features:
  • Automatic scaling
  • Built-in load balancing
  • HTTPS included
  • Pay for what you use

Google Cloud Run

Cloud Run runs containers in a fully managed environment.
1. Create Dockerfile

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD exec uvicorn main:app --host 0.0.0.0 --port $PORT
2. Build and Deploy

gcloud run deploy fastapi-app \
  --source . \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
Features:
  • Scale to zero (no cost when idle)
  • Automatic HTTPS
  • Pay per request
  • Fast cold starts
Cloud Run injects the PORT environment variable (8080 by default); read it in your start command rather than hardcoding a port.

Azure Container Instances

ACI quickly deploys containers without managing servers.
# Build and push to Azure Container Registry
az acr build --registry myregistry --image fastapi-app:v1 .

# Deploy to Container Instances
az container create \
  --resource-group myResourceGroup \
  --name fastapi-app \
  --image myregistry.azurecr.io/fastapi-app:v1 \
  --dns-name-label my-fastapi-app \
  --ports 80
Features:
  • Simple container deployment
  • Per-second billing
  • Fast startup
  • Virtual network support

Kubernetes Platforms

Managed Kubernetes for production-grade deployments.

AWS EKS

Elastic Kubernetes Service runs managed Kubernetes on AWS.

deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
      - name: fastapi
        image: your-account.dkr.ecr.us-east-1.amazonaws.com/fastapi-app:latest
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  type: LoadBalancer
  selector:
    app: fastapi
  ports:
  - port: 80
    targetPort: 8000
Deploy:
kubectl apply -f deployment.yaml

Google Kubernetes Engine (GKE)

GKE offers managed Kubernetes with Autopilot mode.
# Create cluster
gcloud container clusters create-auto fastapi-cluster --region=us-central1

# Deploy
kubectl apply -f deployment.yaml

# Expose with LoadBalancer
kubectl expose deployment fastapi-app --type=LoadBalancer --port=80 --target-port=8000
Features:
  • Autopilot mode (fully managed)
  • Auto-scaling
  • Auto-repair
  • Integrated monitoring

Azure Kubernetes Service (AKS)

AKS provides managed Kubernetes on Azure.
# Create cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Deploy
kubectl apply -f deployment.yaml
Kubernetes has a steep learning curve. Start with simpler options unless you need advanced orchestration features.

Serverless Platforms

Run FastAPI without managing servers, paying only for actual usage.

AWS Lambda with Mangum

Mangum is an adapter for running ASGI apps on AWS Lambda. Installation:
pip install mangum
main.py:
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    return {"item_id": item_id}

# Lambda handler
handler = Mangum(app)
Deploy with AWS SAM: template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  FastAPIFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: main.handler
      Runtime: python3.12
      Timeout: 30
      Events:
        ApiEvent:
          Type: HttpApi
          Properties:
            Path: /{proxy+}
            Method: ANY
Deploy:
sam build
sam deploy --guided
Lambda has a cold start delay. Consider provisioned concurrency for latency-sensitive applications.

Vercel

Vercel supports Python serverless functions, and its Python runtime can serve an ASGI app such as FastAPI directly - expose the app object; no Lambda-style adapter is needed. api/index.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/api")
def read_root():
    return {"Hello": "World"}
vercel.json:
{
  "builds": [
    {
      "src": "api/index.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "api/index.py"
    }
  ]
}
Deploy:
vercel
Limitations:
  • 10 second timeout on hobby tier
  • Limited CPU/memory
  • Best for simple APIs

Virtual Machines

Full control with self-managed infrastructure.

DigitalOcean Droplet

1. Create Droplet

Create Ubuntu 22.04 droplet via DigitalOcean console.
2. SSH and Setup

ssh root@your-droplet-ip

# Update system
apt update && apt upgrade -y

# Install Python and dependencies
apt install python3.12 python3-pip nginx -y

# Create app user
useradd -m -s /bin/bash appuser
3. Deploy Application

# Clone your code
cd /home/appuser
git clone https://github.com/yourusername/your-app.git
cd your-app

# Install dependencies
pip install -r requirements.txt

# Install Gunicorn
pip install gunicorn
4. Setup Systemd Service

Create /etc/systemd/system/fastapi.service:
[Unit]
Description=FastAPI Application
After=network.target

[Service]
User=appuser
WorkingDirectory=/home/appuser/your-app
ExecStart=/usr/local/bin/gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:8000
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start:
systemctl enable fastapi
systemctl start fastapi
5. Configure Nginx

Create /etc/nginx/sites-available/fastapi:
server {
    listen 80;
    server_name your-domain.com;
    
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Enable:
ln -s /etc/nginx/sites-available/fastapi /etc/nginx/sites-enabled/
nginx -t
systemctl restart nginx
6. Setup HTTPS

apt install certbot python3-certbot-nginx -y
certbot --nginx -d your-domain.com
This setup gives you full control but requires manual security updates and maintenance.

Database Integration

Most cloud platforms offer managed database services.

PostgreSQL Options

  • AWS: RDS, Aurora
  • Google Cloud: Cloud SQL
  • Azure: Azure Database for PostgreSQL
  • Neon: Serverless Postgres
  • Supabase: Open-source Firebase alternative
  • PlanetScale: MySQL-compatible

Example Connection

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
import os

DATABASE_URL = os.getenv("DATABASE_URL")
engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

@app.on_event("startup")
async def startup():
    # Test the connection at startup; SQLAlchemy 2.0 requires text() for raw SQL
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
Environment Variables:
# Render, Railway, etc.
DATABASE_URL=postgresql://user:pass@host:5432/dbname
Use connection pooling and environment variables for database credentials. Never hardcode credentials.
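Connection pooling can be tuned when creating the engine. A sketch, assuming a platform-provided DATABASE_URL with an SQLite fallback for local development only (the pool numbers are illustrative defaults, not recommendations for your workload):

```python
import os

from sqlalchemy import create_engine, text

# DATABASE_URL comes from the platform; the SQLite fallback is for local runs only
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./local.db")

# Pool sizing applies to server databases such as PostgreSQL
pool_kwargs = {"pool_size": 5, "max_overflow": 10} if DATABASE_URL.startswith("postgresql") else {}

engine = create_engine(
    DATABASE_URL,
    pool_pre_ping=True,  # validate connections before handing them out
    **pool_kwargs,
)
```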

Environment Variables

Manage configuration across environments.

Using python-dotenv

pip install python-dotenv
.env:
DATABASE_URL=postgresql://localhost/dbname
SECRET_KEY=your-secret-key
DEBUG=False
main.py:
from dotenv import load_dotenv
import os

load_dotenv()

DATABASE_URL = os.getenv("DATABASE_URL")
SECRET_KEY = os.getenv("SECRET_KEY")
DEBUG = os.getenv("DEBUG", "False") == "True"

Platform-Specific

Render:
# Set in dashboard or render.yaml
envVars:
  - key: DATABASE_URL
    value: postgresql://...
Railway:
railway variables set DATABASE_URL="postgresql://..."
Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_URL: postgresql://...
  SECRET_KEY: your-secret-key
Never commit .env files or secrets to git. Add .env to .gitignore.

Monitoring and Logging

Application Performance Monitoring

Sentry for error tracking:
pip install sentry-sdk[fastapi]
import sentry_sdk
from sentry_sdk.integrations.fastapi import FastApiIntegration

sentry_sdk.init(
    dsn="your-sentry-dsn",
    integrations=[FastApiIntegration()],
    traces_sample_rate=1.0,  # capture every transaction; lower this in production
)

app = FastAPI()

Structured Logging

import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[logging.StreamHandler(sys.stdout)]
)

logger = logging.getLogger(__name__)

@app.get("/")
def read_root():
    logger.info("Root endpoint accessed")
    return {"Hello": "World"}
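Cloud log aggregators parse JSON far more reliably than free-form text. A minimal stdlib formatter sketch (the field names are our choice; adjust to what your platform indexes):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit one JSON object per log line for machine-readable logs
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
```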

Cloud Platform Monitoring

  • AWS: CloudWatch
  • Google Cloud: Cloud Logging, Cloud Monitoring
  • Azure: Application Insights
  • Render: Built-in metrics
  • Railway: Built-in logs

Recap

Choosing a cloud platform:
1. Start Simple

PaaS (Render, Railway, Fly.io) for quick deployment with minimal configuration.
2. Scale with Containers

Container Services (Cloud Run, App Runner) when you need more control but managed infrastructure.
3. Advanced Needs

Kubernetes (EKS, GKE, AKS) for complex applications requiring advanced orchestration.
4. Serverless for Spikes

Lambda/Serverless for infrequent usage or extreme cost optimization.
Key considerations:
  • Cost: PaaS vs. IaaS pricing models
  • Scaling: Auto-scaling capabilities
  • HTTPS: Automatic certificate management
  • Databases: Managed vs. self-hosted
  • Monitoring: Built-in vs. third-party
  • Complexity: Learning curve and maintenance
Most applications start well on PaaS platforms. Only move to complex solutions when you have specific needs that justify the additional complexity.
