Overview
Scribe Backend is designed for self-hosted deployment on resource-constrained hardware such as a Raspberry Pi, with traffic routed through a Cloudflare Tunnel for secure public access without port forwarding or a static IP. This guide covers the production deployment architecture used for Scribe, including systemd service configuration, Cloudflare Tunnel setup, and optimization for low-memory environments.
Deployment Architecture
- Cloudflare Tunnel: Secure outbound-only connection to Cloudflare’s edge network
- FastAPI Server: HTTP API exposed on port 8000 (internal)
- Celery Worker: Background task processor with `concurrency=1`
- Redis: Local message broker and result backend
- Supabase: Managed PostgreSQL database with transaction pooler
Production Hardware
Current Deployment Specs
Raspberry Pi 3B+:
- SoC: Broadcom BCM2837B0, quad-core Cortex-A53 (ARMv8) 64-bit @ 1.4GHz
- RAM: 1GB LPDDR2 SDRAM
- Networking: Gigabit Ethernet (max ~300Mb/s via USB 2.0), dual-band 802.11ac Wi-Fi
- Storage: Micro-SD card (64GB+ recommended)

Memory budget:
- OS + System: ~200-300MB
- FastAPI + Uvicorn: ~80MB
- Redis: ~30MB
- Celery Worker (idle): ~50MB
- Celery Worker (task running): ~400-500MB (Playwright browser active)
- Headroom: ~340-440MB
With `concurrency=1`, Scribe processes 3-4 emails per minute, which is sufficient for most academic outreach workflows. Upgrade to a Pi 4 (4GB) or Pi 5 (8GB) for higher throughput.

Cloudflare Tunnel Setup
Why Cloudflare Tunnel?
Benefits:
- No port forwarding or firewall configuration
- No static IP required
- Automatic SSL/TLS (HTTPS)
- DDoS protection via Cloudflare’s network
- Zero-trust security model (outbound-only connections)
Prerequisites
- Cloudflare Account: Free tier is sufficient
- Domain: Add your domain to Cloudflare (DNS managed by Cloudflare)
- Raspberry Pi: Running 64-bit Raspberry Pi OS
Step 1: Install Cloudflared
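On 64-bit Raspberry Pi OS, cloudflared can be installed from the official arm64 `.deb` release (verify the architecture matches your OS):

```bash
# Download the latest arm64 package from Cloudflare's GitHub releases
curl -L -o cloudflared.deb \
  https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared.deb
cloudflared --version   # confirm the install
```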
Authenticate with Cloudflare
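Log in to authorize cloudflared against your Cloudflare account, then create a named tunnel (the tunnel name `scribe` here is an example):

```bash
cloudflared tunnel login          # prints a browser URL to authorize your domain
cloudflared tunnel create scribe  # writes a credentials JSON under ~/.cloudflared/
```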
Configure tunnel routing
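Point a DNS record at the tunnel (the hostname is a placeholder; use your own subdomain):

```bash
cloudflared tunnel route dns scribe scribeapi.yourdomain.com
```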
Create a configuration file at `~/.cloudflared/config.yml`:
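A minimal `config.yml` sketch; the tunnel ID and paths are placeholders to replace with your own values:

```yaml
tunnel: scribe
credentials-file: /home/pi/.cloudflared/<TUNNEL-ID>.json
ingress:
  - hostname: scribeapi.yourdomain.com
    service: http://localhost:8000   # FastAPI on the internal port
  - service: http_status:404         # catch-all for unmatched hostnames
```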
Replace `scribeapi.yourdomain.com` with your desired subdomain.

Step 2: Run Cloudflare Tunnel
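Run the tunnel in the foreground first, then install it as a systemd service so it survives reboots:

```bash
cloudflared tunnel run scribe   # foreground run for initial testing
# Install as a service using the config file created above
sudo cloudflared --config ~/.cloudflared/config.yml service install
sudo systemctl enable --now cloudflared
```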
Test the tunnel by requesting your hostname, e.g. `curl https://scribeapi.yourdomain.com/health`.

Production Environment Variables
Create a `production.env` file with secure credentials:
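A sketch of the variables such a file might hold; the exact names depend on Scribe's settings module and are assumptions here, except `LOGFIRE_TOKEN`, which is referenced later in this guide:

```bash
# production.env — keep out of version control (chmod 600)
DATABASE_URL=postgresql://user:password@db.<project>.supabase.co:6543/postgres
REDIS_URL=redis://localhost:6379/0
ANTHROPIC_API_KEY=sk-...   # hypothetical LLM provider key
LOGFIRE_TOKEN=...          # optional: enables Logfire tracing
```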
Systemd Services
Create systemd service files to manage FastAPI and Celery as background services.

FastAPI Service
Create `/etc/systemd/system/scribe-api.service`:
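A sketch of the unit file; the user, paths, and app module (`app.main:app`) are assumptions to adapt to your checkout:

```ini
[Unit]
Description=Scribe FastAPI server
After=network.target redis-server.service

[Service]
User=pi
WorkingDirectory=/home/pi/scribe-backend
EnvironmentFile=/home/pi/scribe-backend/production.env
ExecStart=/home/pi/scribe-backend/.venv/bin/uvicorn app.main:app \
    --host 0.0.0.0 --port 8000 --workers 1 --timeout-keep-alive 180
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```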
- `--timeout-keep-alive 180`: Long timeout for polling clients
- `--workers 1`: Single worker process (sufficient for Raspberry Pi)
- `Restart=on-failure`: Auto-restart on crashes
Celery Worker Service
Create `/etc/systemd/system/scribe-celery.service`:
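A sketch with the same caveats about paths; the Celery app module (`app.worker`) is a placeholder:

```ini
[Unit]
Description=Scribe Celery worker
After=network.target redis-server.service

[Service]
User=pi
WorkingDirectory=/home/pi/scribe-backend
EnvironmentFile=/home/pi/scribe-backend/production.env
ExecStart=/home/pi/scribe-backend/.venv/bin/celery -A app.worker worker \
    --loglevel=INFO --concurrency=1 --pool=solo --max-tasks-per-child=100
MemoryMax=768M
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```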
- `--concurrency=1`: Single task at a time (memory constraint)
- `--pool=solo`: Single-threaded execution (most memory-efficient)
- `--max-tasks-per-child=100`: Restart worker after 100 tasks (prevents memory leaks)
- `MemoryMax=768M`: Hard memory limit (systemd kills the process if exceeded)
Redis Service
Redis is typically installed via the package manager (e.g. `sudo apt install redis-server`) and runs as a system service.

Enable and Start Services
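Reload systemd and enable both units so they start on boot:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now scribe-api.service scribe-celery.service
systemctl status scribe-api scribe-celery   # verify both are active
```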
Build Script
Create a `build.sh` script for deployment automation:
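A sketch of what such a script might contain, assuming a pip-style checkout at `~/scribe-backend` (paths and file names are assumptions):

```bash
#!/usr/bin/env bash
# build.sh — pull latest code, install dependencies, restart services
set -euo pipefail

cd ~/scribe-backend
git pull origin main
.venv/bin/pip install -r requirements.txt   # dependency file name is an assumption
.venv/bin/playwright install chromium       # browser used by the Celery tasks
sudo systemctl restart scribe-api scribe-celery
echo "Deploy complete."
```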
Monitoring and Logs
View Service Logs
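Typical `journalctl` invocations for the services defined above:

```bash
journalctl -u scribe-api -f                      # follow API logs live
journalctl -u scribe-celery --since "1 hour ago"
journalctl -u cloudflared -n 100                 # recent tunnel logs
```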
Health Monitoring
Set up a cron job to monitor the `/health` endpoint:
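One possible crontab entry (add via `sudo crontab -e`, since restarting a service needs root); the URL is your public hostname:

```bash
# Every 5 minutes: restart the API if /health stops responding
*/5 * * * * curl -fsS --max-time 10 https://scribeapi.yourdomain.com/health > /dev/null || systemctl restart scribe-api
```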
Logfire Observability
If `LOGFIRE_TOKEN` is configured, production traces are viewable in the Logfire dashboard.
Metrics tracked:
- Request latency and throughput
- Celery task execution time
- LLM API call costs and tokens
- Database query performance
- Error rates and stack traces
Performance Optimization
Raspberry Pi Tuning
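Swap on Raspberry Pi OS is managed by `dphys-swapfile`; one way to raise it to 1GB (note that heavy swapping wears SD cards):

```bash
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1024/' /etc/dphys-swapfile
sudo systemctl restart dphys-swapfile
free -h   # confirm the new swap size
```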
Increase swap space for extra memory headroom during Playwright-heavy tasks.

Database Connection Pooling
Scribe uses NullPool with Supabase's transaction pooler (port 6543). The transaction pooler handles connection pooling server-side, so NullPool avoids stale client-side connections.
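A sketch of the engine setup under that scheme; the connection URL is a placeholder:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Port 6543 is Supabase's transaction pooler; pooling happens server-side,
# so the client opens a fresh connection per checkout (NullPool).
DATABASE_URL = "postgresql://user:password@db.<project>.supabase.co:6543/postgres"
engine = create_engine(DATABASE_URL, poolclass=NullPool)
```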
Scaling Recommendations
| Hardware | Concurrency | Throughput | Use Case |
|---|---|---|---|
| Pi 3B+ (1GB) | 1 | ~3-4 emails/min | Development, low-volume production |
| Pi 4 (2GB) | 1 | ~6 emails/min | Small-scale production |
| Pi 4 (4GB) | 2 | ~12 emails/min | Medium-scale production |
| Pi 5 (8GB) | 4 | ~24 emails/min | High-volume production |
| Cloud (2vCPU, 4GB) | 4 | ~30 emails/min | Enterprise scale |
- Increase Celery concurrency: raise `--concurrency` in the Celery unit once RAM allows
- Add more workers: run additional Celery worker services against the same Redis broker
- Upgrade hardware:
  - Raspberry Pi 5 (8GB RAM)
  - VPS with 2-4 vCPUs and 4-8GB RAM
Backup and Recovery
Automated Database Backups
Supabase provides automatic daily backups. Manual dumps can be taken with `pg_dump` against the database.

Application State Backup
Redis task-queue state can be snapshotted with `redis-cli BGSAVE`, which writes `dump.rdb` to the Redis data directory.

Security Best Practices
Firewall Configuration
Only expose the Cloudflare Tunnel; open no inbound ports. Cloudflare Tunnel uses outbound-only connections (ports 80/443).
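A `ufw` sketch that keeps SSH reachable while denying all other inbound traffic:

```bash
sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh   # keep remote administration working
sudo ufw enable
```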
Environment Variable Security
Protect the `.env` file (`chmod 600 .env`) and never commit it to version control.

Service User Isolation
Run services as non-root user:
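A sketch using a dedicated system account (`scribe` is a hypothetical name; update `User=` in both unit files to match):

```bash
sudo useradd --system --create-home --shell /usr/sbin/nologin scribe
sudo chown -R scribe:scribe /home/scribe/scribe-backend
```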
HTTPS Enforcement
Cloudflare Tunnel enforces HTTPS by default. Verify in Cloudflare dashboard:
- SSL/TLS → Overview → Full (strict)
- Always Use HTTPS → On
Troubleshooting Production Issues
Service won't start
Check the service status with `systemctl status scribe-api` and recent logs with `journalctl -u scribe-api -n 50`. Common causes:
- Missing `.env` file or invalid credentials
- Port 8000 already in use
- Database connection failure
Out of memory crashes
Symptoms: worker killed with no error message.

Solutions:
- Verify `concurrency=1` in the systemd service
- Add swap space (see Performance Optimization)
- Set memory limits in systemd: `MemoryMax=768M`
- Monitor with `htop` or `free -h`
Cloudflare Tunnel disconnects
Check the tunnel status, restart the tunnel if needed, and ensure the credentials file exists:
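The checks above, as commands:

```bash
systemctl status cloudflared         # check tunnel status
sudo systemctl restart cloudflared   # restart the tunnel
ls ~/.cloudflared/*.json             # credentials file should exist
journalctl -u cloudflared -n 50      # recent tunnel errors
```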
High latency or slow responses
Diagnose:
- Check CPU usage: `htop`
- Check network: `ping 8.8.8.8`
- Check database latency: Supabase dashboard
- Review Logfire traces for bottlenecks

Optimize:
- Increase `timeout-keep-alive` in the uvicorn command
- Reduce LLM output length for faster responses
- Use a faster LLM model (Haiku instead of Sonnet)
Deployment Checklist
Prepare environment
- Fresh Raspberry Pi OS installation
- Python 3.13+ installed
- Git repository cloned
- Virtual environment created
Configure services
- `.env` file with production credentials
- Database migrations applied
- Redis installed and running
- Playwright browsers installed
Set up Cloudflare Tunnel
- Domain added to Cloudflare
- Tunnel created and configured
- DNS record created
- Tunnel service running
Configure systemd services
- `scribe-api.service` created
- `scribe-celery.service` created
- Services enabled and started
- Logs verified
Verify deployment
- Health check: `curl https://scribeapi.yourdomain.com/health`
- API docs: `https://scribeapi.yourdomain.com/docs`
- Generate a test email successfully
- Monitor logs for errors
Production URL
The official Scribe Backend production deployment:

API Base URL: `https://scribeapi.manitmishra.com`
Endpoints:
- Health: `https://scribeapi.manitmishra.com/health`
- API Docs: `https://scribeapi.manitmishra.com/docs`
