
Overview

Scribe Backend consists of three core services that work together, plus an optional monitoring UI:
  1. FastAPI Server (uvicorn) - Handles HTTP requests and API endpoints
  2. Celery Worker - Processes background email generation tasks
  3. Redis - Message broker and result backend for Celery
  4. Flower (optional) - Web-based monitoring UI for Celery tasks
The first three services must be running for the application to function properly. This guide shows both quick-start and manual startup methods.

Prerequisites

Before starting, ensure you’ve completed the initial setup: virtual environment created, dependencies installed, and .env configured.

Quick Start (Recommended)

Start all services with a single command:
# Activate virtual environment
source venv/bin/activate

# Start Redis (background)
make redis-start

# Start FastAPI + Celery worker together
make serve
What make serve does:
  • Starts uvicorn server on port 8000
  • Starts Celery worker with concurrency=1
  • Displays logs from both services in the terminal
Use Ctrl+C to stop both services. Redis continues running in the background.

Manual Start (Detailed Control)

For debugging or customization, start services individually:

Step 1: Start Redis

Redis must be running before starting Celery:
brew services start redis
# Or: redis-server --daemonize yes
Verify Redis is running:
redis-cli ping
# Expected: PONG
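The same check can be scripted without redis-cli. Below is a minimal sketch (not part of the project) that speaks Redis's RESP protocol over a raw socket; the helper names and the host/port defaults are assumptions:

```python
import socket

def parse_resp_simple(reply: bytes) -> str:
    """Parse a RESP simple-string reply ('+PONG' terminated by CRLF) into 'PONG'."""
    if not reply.startswith(b"+"):
        raise ValueError(f"unexpected Redis reply: {reply!r}")
    return reply[1:].split(b"\r\n", 1)[0].decode()

def redis_ping(host: str = "localhost", port: int = 6379) -> str:
    """Send PING over a raw socket and return the parsed reply."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(b"PING\r\n")
        return parse_resp_simple(sock.recv(64))

# With Redis running locally: redis_ping() returns "PONG"
```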

Step 2: Start FastAPI Server

Open a terminal and start the development server with hot reload:
source venv/bin/activate
uvicorn main:app --reload --host 0.0.0.0 --port 8000
Expected output:
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [12345] using StatReload
INFO:     Started server process [12346]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
The --reload flag enables automatic reloading when code changes are detected. Remove this in production.

Step 3: Start Celery Worker

Open a new terminal and start the Celery worker:
source venv/bin/activate
celery -A celery_config.celery_app worker --loglevel=info --queues=email_default --concurrency=1
Expected output:
celery@hostname v5.3.0 (emerald-rush)

[config]
.> app:         scribe:0x...
.> transport:   redis://localhost:6379/0
.> results:     redis://localhost:6379/1
.> concurrency: 1 (solo)
.> task events: ON

[queues]
.> email_default    exchange=email_default(direct) key=email_default

[tasks]
  . tasks.email_tasks.generate_email_task

[2025-01-24 10:30:00,000: INFO/MainProcess] Connected to redis://localhost:6379/0
[2025-01-24 10:30:00,100: INFO/MainProcess] celery@hostname ready.
Concurrency=1 is critical for memory-constrained environments (Raspberry Pi). Each task uses ~400MB with Playwright browsers.

Step 4: Start Flower Monitoring (Optional)

Open a third terminal for the Flower UI:
source venv/bin/activate
celery -A celery_config.celery_app flower
Access Flower at http://localhost:5555 (Flower's default port).

Service Configuration

FastAPI Server (uvicorn)

Development:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
Production:
uvicorn main:app --host 0.0.0.0 --port 8000 --timeout-keep-alive 120 --workers 1
Key Options:
  • --reload: Enable hot reload (development only)
  • --host 0.0.0.0: Listen on all network interfaces
  • --port 8000: HTTP port
  • --timeout-keep-alive 120: Keep-alive timeout for long-polling clients
  • --workers 1: Number of worker processes (keep at 1 on memory-constrained devices; increase on multi-core production hosts)

Celery Worker

Standard Configuration:
celery -A celery_config.celery_app worker \
  --loglevel=info \
  --queues=email_default \
  --concurrency=1 \
  --pool=solo
Options Explained:
| Option | Value | Purpose |
| --- | --- | --- |
| -A | celery_config.celery_app | Celery app instance |
| --loglevel | info | Logging verbosity (debug, info, warning, error) |
| --queues | email_default | Queues to consume from |
| --concurrency | 1 | Number of parallel tasks (1 for Raspberry Pi) |
| --pool | solo | Execution pool (solo = single-threaded, memory efficient) |
Memory Optimization: Using --pool=solo and --concurrency=1 keeps memory usage under 512MB, ideal for Raspberry Pi deployments.

Redis Configuration

Default Configuration:
# .env file
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0  # Celery broker
REDIS_PASSWORD=
Redis DB Usage:
  • DB 0: Celery broker (task queue)
  • DB 1: Celery result backend (task state + results)
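In code, the broker and result-backend URLs follow directly from these settings. Here is a sketch of how they might be assembled; the `redis_url` helper is hypothetical, and only the environment variable names come from the .env above:

```python
import os

def redis_url(db: int) -> str:
    """Build a redis:// URL for one logical database from .env settings."""
    host = os.getenv("REDIS_HOST", "localhost")
    port = os.getenv("REDIS_PORT", "6379")
    password = os.getenv("REDIS_PASSWORD", "")
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"

broker_url = redis_url(0)      # DB 0: task queue
result_backend = redis_url(1)  # DB 1: task state + results
```

With the defaults above this yields redis://localhost:6379/0 and redis://localhost:6379/1, matching the transport and results lines in the worker's [config] banner.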
Check Redis Status:
redis-cli info

Health Checks

Verify all services are running correctly:

FastAPI Health Check

curl http://localhost:8000/health
Expected Response:
{
  "status": "healthy",
  "service": "scribe-api",
  "version": "1.0.0",
  "database": "connected",
  "environment": "development"
}
Response when degraded:
{
  "status": "degraded",
  "database": "disconnected"
}
Possible causes of a degraded response:
  • Database connection failed → Check .env database credentials
  • Supabase service unreachable → Verify SUPABASE_URL
  • Missing environment variables → Review .env file
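A deploy script or readiness probe only needs to distinguish these two response shapes. A hypothetical helper (the field names match the responses shown above):

```python
import json
from urllib.request import urlopen

def is_healthy(health: dict) -> bool:
    """True only when the /health payload reports full health."""
    return health.get("status") == "healthy" and health.get("database") == "connected"

def check_api(url: str = "http://localhost:8000/health") -> bool:
    """Fetch /health and interpret it; requires the server to be running."""
    with urlopen(url, timeout=5) as resp:
        return is_healthy(json.load(resp))
```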

API Documentation

FastAPI auto-generates interactive API docs at http://localhost:8000/docs (Swagger UI) and http://localhost:8000/redoc (ReDoc).
The interactive docs allow you to test API endpoints directly from the browser. JWT authentication is supported via the “Authorize” button.

Celery Worker Health

Check worker status using Celery’s inspect commands:
# List active workers
celery -A celery_config.celery_app inspect active

# Check registered tasks
celery -A celery_config.celery_app inspect registered

# View worker stats
celery -A celery_config.celery_app inspect stats
Expected Output:
{
  "celery@hostname": {
    "active": [],
    "registered": [
      "tasks.email_tasks.generate_email_task",
      "health_check"
    ],
    "stats": {
      "total": {"tasks.email_tasks.generate_email_task": 10},
      "pool": {"max-concurrency": 1}
    }
  }
}

Flower Monitoring Dashboard

If Flower is running, access the monitoring dashboard at http://localhost:5555. Features:
  • Real-time task progress
  • Worker resource usage (CPU, memory)
  • Task success/failure rates
  • Task execution history
  • Broker connection status
Flower is optional but highly recommended for debugging task issues and monitoring queue depth.

Testing the Pipeline

Generate a Test Email

Send a test request to the email generation endpoint:
curl -X POST http://localhost:8000/api/email/generate \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "email_template": "Hi {{name}}, I love your work on {{research}}!",
    "recipient_name": "Dr. Yann LeCun",
    "recipient_interest": "deep learning"
  }'
Response:
{
  "task_id": "abc-123-def-456"
}
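The {{name}} and {{research}} placeholders use Mustache-style syntax. The actual substitution happens inside the pipeline's template_parser step; this is a purely illustrative stand-in renderer, not the project's implementation:

```python
import re

def render_template(template: str, context: dict) -> str:
    """Substitute {{key}} placeholders; unknown keys are left intact."""
    def sub(match: re.Match) -> str:
        key = match.group(1).strip()
        return str(context.get(key, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", sub, template)

print(render_template(
    "Hi {{name}}, I love your work on {{research}}!",
    {"name": "Dr. Yann LeCun", "research": "deep learning"},
))
# Hi Dr. Yann LeCun, I love your work on deep learning!
```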

Poll Task Status

curl http://localhost:8000/api/email/status/abc-123-def-456 \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
Status Progression:
{"status": "PENDING"}   → Task queued
{"status": "STARTED", "current_step": "web_scraper"}  → Processing
{"status": "SUCCESS", "result": {"email_id": "email-uuid"}}  → Complete
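A client can poll that endpoint until the task reaches a terminal Celery state. Below is a sketch using only the standard library; the endpoint and states come from the progression above, and the token is whatever JWT you pass to the API:

```python
import json
import time
from urllib.request import Request, urlopen

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # terminal Celery states

def is_terminal(status: str) -> bool:
    """True once the task can no longer change state."""
    return status in TERMINAL_STATES

def poll_task(task_id: str, token: str, interval: float = 2.0) -> dict:
    """Poll /api/email/status/<task_id> until the task finishes."""
    url = f"http://localhost:8000/api/email/status/{task_id}"
    while True:
        req = Request(url, headers={"Authorization": f"Bearer {token}"})
        with urlopen(req, timeout=10) as resp:
            payload = json.load(resp)
        if is_terminal(payload["status"]):
            return payload
        time.sleep(interval)
```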

Retrieve Generated Email

curl http://localhost:8000/api/email/email-uuid \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

Development Workflows

Hot Reload

When running with --reload, FastAPI automatically restarts when you edit:
  • Route handlers (api/routes/*.py)
  • Models (models/*.py)
  • Services (services/*.py)
Celery worker does NOT auto-reload. You must manually restart the worker after changing task code:
# Press Ctrl+C in the Celery terminal, then:
celery -A celery_config.celery_app worker --loglevel=info --queues=email_default --concurrency=1

Viewing Logs

FastAPI Logs:
  • Printed to the terminal where uvicorn is running
  • Structured logging via Logfire (if configured)
Celery Logs:
  • Printed to the terminal where celery worker is running
  • Includes task start, success, failure, and retry events
Example Celery Log:
[2025-01-24 10:30:00,000: INFO/MainProcess] Task tasks.email_tasks.generate_email_task[abc-123] received
[2025-01-24 10:30:00,100: INFO/ForkPoolWorker-1] Pipeline step: template_parser
[2025-01-24 10:30:02,500: INFO/ForkPoolWorker-1] Pipeline step: web_scraper
[2025-01-24 10:30:08,200: INFO/ForkPoolWorker-1] Pipeline step: email_composer
[2025-01-24 10:30:12,000: INFO/ForkPoolWorker-1] Task tasks.email_tasks.generate_email_task[abc-123] succeeded in 12.0s

Debugging

Step 1: Enable debug logging

Set log level to DEBUG in your .env file:
LOG_LEVEL=DEBUG
Restart the FastAPI server and Celery worker.
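Internally, that variable most likely maps onto Python's standard logging levels. A minimal version of the wiring (assumed, not the project's actual code):

```python
import logging
import os

def configure_logging() -> int:
    """Map LOG_LEVEL from the environment onto a logging level."""
    name = os.getenv("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, name, logging.INFO)
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    return level

os.environ["LOG_LEVEL"] = "DEBUG"  # simulate the .env setting
print(logging.getLevelName(configure_logging()))
# DEBUG
```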

Step 2: Use Logfire for tracing

If LOGFIRE_TOKEN is configured, view distributed traces in the Logfire dashboard. Filter by service_name=scribe-api or service_name=scribe-celery-worker.

Step 3: Interactive debugging with pdb

Add breakpoints in your code:
import pdb; pdb.set_trace()
Run uvicorn without --reload to prevent automatic restarts.

Stopping Services

Stop All Services (Quick)

make stop-all
This kills all running uvicorn, celery, and flower processes.

Stop Services Manually

FastAPI Server:
  • Press Ctrl+C in the uvicorn terminal
Celery Worker:
  • Press Ctrl+C in the Celery terminal
  • Or send SIGTERM: pkill -f "celery.*worker"
Redis:
redis-cli shutdown
# Or: brew services stop redis (macOS)
# Or: sudo systemctl stop redis (Linux)
Flower:
  • Press Ctrl+C in the Flower terminal
Always stop Celery workers gracefully with Ctrl+C to allow them to finish in-progress tasks.

Production Setup

For production deployments, use systemd services or process managers:

Using systemd (Linux)

Create service files for each component.

FastAPI Service (/etc/systemd/system/scribe-api.service):
[Unit]
Description=Scribe FastAPI Server
After=network.target

[Service]
Type=simple
User=scribe
WorkingDirectory=/home/scribe/pythonserver
Environment="PATH=/home/scribe/pythonserver/venv/bin"
ExecStart=/home/scribe/pythonserver/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000 --timeout-keep-alive 180
Restart=on-failure

[Install]
WantedBy=multi-user.target
Celery Worker Service (/etc/systemd/system/scribe-celery.service):
[Unit]
Description=Scribe Celery Worker
After=network.target redis.service

[Service]
Type=simple
User=scribe
WorkingDirectory=/home/scribe/pythonserver
Environment="PATH=/home/scribe/pythonserver/venv/bin"
ExecStart=/home/scribe/pythonserver/venv/bin/celery -A celery_config.celery_app worker --loglevel=info --queues=email_default --concurrency=1 --pool=solo
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start services:
sudo systemctl enable scribe-api scribe-celery
sudo systemctl start scribe-api scribe-celery

Troubleshooting

Error: redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379
Solution:
  1. Start Redis: redis-server --daemonize yes
  2. Verify: redis-cli ping → Should respond “PONG”
  3. Check port: Ensure Redis is listening on 6379 (or update .env)
Issue: Tasks stay in PENDING status
Solutions:
  1. Verify worker is running: celery -A celery_config.celery_app inspect active
  2. Check queue name matches: --queues=email_default
  3. Restart worker: Ctrl+C and rerun the worker command
  4. Clear Redis: redis-cli FLUSHALL (development only)
Error: OSError: [Errno 48] Address already in use
Solution:
  1. Find process: lsof -i :8000
  2. Kill it: kill -9 <PID>
  3. Or use a different port: uvicorn main:app --port 8001
Issue: Worker crashes with a “Killed” message (usually the kernel's out-of-memory killer)
Solution:
  1. Verify concurrency=1: --concurrency=1
  2. Use solo pool: --pool=solo
  3. Add swap space (Raspberry Pi):
    sudo dphys-swapfile swapoff
    sudo nano /etc/dphys-swapfile  # Set CONF_SWAPSIZE=1024
    sudo dphys-swapfile setup
    sudo dphys-swapfile swapon
    
Issue: Code changes not reflected
Solution:
  • FastAPI: Ensure --reload flag is present
  • Celery: Manual restart required (no auto-reload)
  • Check file permissions: Virtual environment must be writable

Useful Commands Reference

# Quick start
make serve                  # Start FastAPI + Celery
make redis-start            # Start Redis
make stop-all               # Stop all services

# Manual control
uvicorn main:app --reload   # Start API server
celery -A celery_config.celery_app worker --loglevel=info --queues=email_default --concurrency=1  # Start worker
celery -A celery_config.celery_app flower  # Start monitoring

# Health checks
curl http://localhost:8000/health     # API health
redis-cli ping                        # Redis health
celery -A celery_config.celery_app inspect active  # Worker health

# Debugging
curl http://localhost:8000/docs       # Interactive API docs
open http://localhost:5555            # Flower dashboard
