Overview
The SpecKit Ticketing Platform is designed for containerized deployment with Docker and Docker Compose. This guide covers local development deployment, production Docker deployment, and Kubernetes orchestration.
Prerequisites
- Docker: Docker 24.0+ and Docker Compose v2.0+
- Hardware: Minimum 4GB RAM, 2 CPU cores, 10GB disk
- Network: Ports 3000, 5003-5005, 50000-50002, 5432, 6379, 9092 available
- Operating System: Linux, macOS, or Windows with WSL2
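As a quick pre-flight check, the port requirement above can be verified with a short script. A minimal sketch; the port list mirrors the prerequisites in this guide and may need adjusting to your compose file:

```python
# Pre-flight check: report which of the required host ports are already in use.
import socket

REQUIRED_PORTS = [3000, 5003, 5004, 5005, 5432, 6379, 9092,
                  50000, 50001, 50002]

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    busy = [p for p in REQUIRED_PORTS if not port_is_free(p)]
    if busy:
        print(f"Ports already in use: {busy}")
    else:
        print("All required ports are free.")
```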
Local Development Deployment
Clone the repository
```bash
git clone https://github.com/JostinAlvaradoS/ticketing_project_week0.git
cd ticketing_project_week0
```
Start infrastructure and services
All services are defined in the docker-compose.yml file:

```bash
cd infra
docker compose up -d
```
This starts:
- PostgreSQL (port 5432)
- Redis (port 6379)
- Zookeeper (port 2181)
- Kafka (port 9092)
- Identity Service (port 50000)
- Catalog Service (port 50001)
- Inventory Service (port 50002)
- Ordering Service (port 5003)
- Payment Service (port 5004)
- Fulfillment Service (port 50004)
- Notification Service (port 50005)
- Frontend (port 3000)
Verify services are healthy
Wait approximately 30 seconds for initialization, then check health:

```bash
# Check all services
docker compose ps

# Verify health endpoints
curl http://localhost:50001/health  # Catalog
curl http://localhost:50002/health  # Inventory
curl http://localhost:5003/health   # Ordering
```
Expected response:

```json
{
  "status": "Healthy",
  "service": "Catalog"
}
```
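If you script the verification step, the response shape above can be checked programmatically. A minimal sketch; the endpoint list is taken from the port table in this guide:

```python
# Poll the health endpoints and report which services are up.
import json
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINTS = {
    "Catalog":   "http://localhost:50001/health",
    "Inventory": "http://localhost:50002/health",
    "Ordering":  "http://localhost:5003/health",
}

def is_healthy(payload: dict) -> bool:
    """True if a health payload matches the expected shape shown above."""
    return payload.get("status") == "Healthy"

def check_all(endpoints: dict) -> dict:
    """Map each service name to True/False depending on its /health response."""
    results = {}
    for name, url in endpoints.items():
        try:
            with urlopen(url, timeout=5) as resp:
                results[name] = is_healthy(json.load(resp))
        except (URLError, ValueError):
            results[name] = False
    return results
```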
No .env files required: All configuration is pre-loaded in docker-compose.yml and appsettings.json for easy peer review and immediate startup.
Docker Compose Configuration
The platform uses a single docker-compose.yml file located in the infra/ directory. Here’s the structure:
PostgreSQL Configuration
```yaml
postgres:
  image: postgres:17
  container_name: speckit-postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_DB: ticketing
  ports:
    - "5432:5432"
  volumes:
    - postgres-data:/var/lib/postgresql/data
    - ./db/init-schemas.sql:/docker-entrypoint-initdb.d/01-init-schemas.sql:ro
```
Key Points:
- Single PostgreSQL instance with multiple schemas (bc_catalog, bc_inventory, bc_ordering, etc.)
- Automatic schema initialization via init-schemas.sql
- Persistent volume for data
- Health check with pg_isready
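The schema-per-service pattern in the first key point can be made concrete: every service shares the one database but pins its own schema via SearchPath in the connection string. A sketch; the service-to-schema mapping is assumed from the schema names above:

```python
# Build per-service connection strings for the single shared database.
# Credentials match the development compose file above; do not reuse in production.
BASE = ("Host=postgres;Port=5432;Database=ticketing;"
        "Username=postgres;Password=postgres")

SERVICE_SCHEMAS = {
    "catalog":   "bc_catalog",
    "inventory": "bc_inventory",
    "ordering":  "bc_ordering",
}

def connection_string(service: str) -> str:
    """Shared database, per-service schema selected via SearchPath."""
    return f"{BASE};SearchPath={SERVICE_SCHEMAS[service]}"
```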
Redis Configuration
```yaml
redis:
  image: redis:7
  container_name: speckit-redis
  ports:
    - "6379:6379"
```
Used for:
- Distributed locking for seat reservations
- 15-minute TTL for reservation keys
- Single instance (not clustered)
Kafka Configuration
```yaml
zookeeper:
  image: confluentinc/cp-zookeeper:7.5.0
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181

kafka:
  image: confluentinc/cp-kafka:7.5.0
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENERS: 'PLAINTEXT://0.0.0.0:9092'
    KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092'
```
Topics created automatically:
- reservation-created
- reservation-expired
- payment-succeeded
- payment-failed
- ticket-issued
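For illustration, producing to these topics usually goes through a small mapping from domain event type to topic name. The topic names below come from this list; the event type names and envelope fields (event_id, occurred_at) are assumptions for the example:

```python
# Map domain events to the Kafka topics listed above and wrap them
# in a serialized envelope ready for a producer.
import json
import uuid
from datetime import datetime, timezone

TOPICS = {
    "ReservationCreated": "reservation-created",
    "ReservationExpired": "reservation-expired",
    "PaymentSucceeded":   "payment-succeeded",
    "PaymentFailed":      "payment-failed",
    "TicketIssued":       "ticket-issued",
}

def envelope(event_type: str, payload: dict) -> tuple[str, bytes]:
    """Return (topic, serialized message) for a domain event."""
    topic = TOPICS[event_type]
    message = {
        "event_id": str(uuid.uuid4()),
        "type": event_type,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return topic, json.dumps(message).encode("utf-8")
```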
Service Configuration
Each microservice follows this pattern:
```yaml
catalog:
  build:
    context: ..
    dockerfile: services/catalog/src/Api/Dockerfile
  ports:
    - "50001:5001"
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ConnectionStrings__Default=Host=postgres;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_catalog
    - Kafka__BootstrapServers=kafka:9092
  depends_on:
    postgres:
      condition: service_healthy
    kafka:
      condition: service_healthy
```
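Note the double underscore in `ConnectionStrings__Default`: ASP.NET Core maps `__` in environment variable names to the `:` configuration section separator, so this variable overrides `ConnectionStrings:Default` from appsettings.json. The translation is simply:

```python
# ASP.NET Core convention: "__" in an environment variable name stands
# for the ":" hierarchy separator in the configuration system.
def env_to_config_key(env_name: str) -> str:
    """Translate an environment variable name to its configuration key."""
    return env_name.replace("__", ":")
```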
Production Deployment
Environment Variables
For production, override these environment variables:
```bash
# Database
CONNECTION_STRING=Host=prod-postgres;Port=5432;Database=ticketing;Username=app_user;Password=<secure_password>

# Redis
REDIS_CONNECTION=prod-redis:6379,password=<redis_password>

# Kafka
KAFKA_BOOTSTRAP_SERVERS=prod-kafka-1:9092,prod-kafka-2:9092,prod-kafka-3:9092

# ASP.NET Core
ASPNETCORE_ENVIRONMENT=Production
ASPNETCORE_URLS=http://+:5001
```
Docker Compose Override
Create a docker-compose.prod.yml file:
```yaml
version: '3.8'

services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /mnt/data/postgres:/var/lib/postgresql/data

  redis:
    command: redis-server --requirepass ${REDIS_PASSWORD}

  catalog:
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
      - ConnectionStrings__Default=${CONNECTION_STRING}
    restart: always
```
Deploy with:
```bash
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
Kubernetes Deployment
Namespace Setup
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: speckit-ticketing
```
PostgreSQL StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: speckit-ticketing
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17
          env:
            - name: POSTGRES_DB
              value: "ticketing"
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
Microservice Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service
  namespace: speckit-ticketing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: ghcr.io/jostinalvarados/ticketing-catalog:latest
          env:
            - name: ConnectionStrings__Default
              valueFrom:
                secretKeyRef:
                  name: catalog-secret
                  key: connection-string
            - name: Kafka__BootstrapServers
              value: "kafka-service:9092"
          ports:
            - containerPort: 5001
          livenessProbe:
            httpGet:
              path: /health
              port: 5001
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 5001
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: catalog-service
  namespace: speckit-ticketing
spec:
  selector:
    app: catalog
  ports:
    - port: 80
      targetPort: 5001
  type: ClusterIP
```
Kafka with Strimzi Operator
For production Kafka, use the Strimzi operator:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: speckit-kafka
  namespace: speckit-ticketing
spec:
  kafka:
    version: 3.5.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: persistent-claim
      size: 10Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
```
Cloud Provider Deployments
AWS ECS Fargate
1. Create ECR repositories for each service
2. Build and push images:

```bash
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker build -t catalog -f services/catalog/src/Api/Dockerfile .
docker tag catalog:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/catalog:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/catalog:latest
```

3. Use RDS for PostgreSQL, ElastiCache for Redis, and MSK for Kafka
4. Create ECS task definitions for each service
5. Deploy with ECS Fargate behind an Application Load Balancer
Azure Container Apps
1. Create an Azure Container Registry
2. Deploy Azure Database for PostgreSQL
3. Deploy Azure Cache for Redis
4. Deploy Azure Event Hubs (Kafka-compatible)
5. Create Container Apps:

```bash
az containerapp create \
  --name catalog-service \
  --resource-group speckit-rg \
  --environment speckit-env \
  --image <acr-name>.azurecr.io/catalog:latest \
  --target-port 5001 \
  --ingress external
```
Google Cloud Run
1. Push images to GCR:

```bash
docker tag catalog:latest gcr.io/<project-id>/catalog:latest
docker push gcr.io/<project-id>/catalog:latest
```

2. Deploy to Cloud Run:

```bash
gcloud run deploy catalog-service \
  --image gcr.io/<project-id>/catalog:latest \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

3. Use Cloud SQL, Memorystore, and Pub/Sub
Monitoring and Observability
Health Checks
All services expose /health endpoints:
```bash
curl http://localhost:50001/health
```
Logging
View logs with Docker Compose:
```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f catalog

# Last 100 lines
docker compose logs --tail=100 inventory
```
Metrics
Add Prometheus and Grafana:
```yaml
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml

grafana:
  image: grafana/grafana:latest
  ports:
    - "3001:3000"
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=admin
```
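The compose snippet above mounts `./monitoring/prometheus.yml`. A minimal sketch of that file, assuming the services expose metrics at `/metrics` on their internal port 5001 (neither the path nor the internal ports of all services are confirmed by this guide):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'speckit-services'
    metrics_path: /metrics        # assumed; adjust if services use a different path
    static_configs:
      - targets:
          - 'catalog:5001'
          - 'inventory:5001'
          - 'ordering:5001'
```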
Scaling
Horizontal Scaling
Scale services with Docker Compose:
```bash
docker compose up -d --scale catalog=3 --scale inventory=3
```
Or in Kubernetes:
```bash
kubectl scale deployment catalog-service --replicas=5 -n speckit-ticketing
```
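Instead of scaling manually, a HorizontalPodAutoscaler can target the catalog Deployment. A sketch; the CPU threshold and replica bounds are arbitrary choices for the example:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: catalog-hpa
  namespace: speckit-ticketing
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalog-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```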
Database Scaling
- Read replicas for PostgreSQL
- Redis Cluster for distributed caching
- Kafka partitions for parallel processing
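The last point relies on keyed messages: all events with the same key (say, an event or order ID) land on one partition, preserving their relative order while different keys are processed in parallel. Kafka's default partitioner hashes the key with murmur2; the sketch below uses a plain hash purely for illustration:

```python
# Illustration of keyed partitioning: same key -> same partition,
# so per-key ordering is preserved across parallel consumers.
# Kafka's real default partitioner uses murmur2, not MD5.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a message key to a partition index."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```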
Backup and Recovery
PostgreSQL Backup
```bash
# Backup
docker exec speckit-postgres pg_dump -U postgres ticketing > backup.sql

# Restore
docker exec -i speckit-postgres psql -U postgres ticketing < backup.sql
```
Automated Backups
Add a backup service to docker-compose.yml:
```yaml
backup:
  image: prodrigestivill/postgres-backup-local
  environment:
    - POSTGRES_HOST=postgres
    - POSTGRES_DB=ticketing
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - SCHEDULE=@daily
  volumes:
    - ./backups:/backups
```
Troubleshooting
Service Won’t Start
```bash
# Check logs
docker compose logs catalog

# Check container status
docker compose ps

# Restart service
docker compose restart catalog
```
Database Connection Issues
```bash
# Test PostgreSQL connection
docker exec -it speckit-postgres psql -U postgres -d ticketing
```

Then, inside the psql session:

```sql
-- Check schemas
\dn

-- Check tables in the catalog schema
SET search_path TO bc_catalog;
\dt
```
Kafka Issues
# List topics
docker exec speckit-kafka kafka-topics --list --bootstrap-server localhost:9092
# Check consumer groups
docker exec speckit-kafka kafka-consumer-groups --list --bootstrap-server localhost:9092
# View messages
docker exec speckit-kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic reservation-created --from-beginning
Security Considerations
The default configuration is for development only. Do NOT use default passwords in production.
Production Security Checklist
- Replace all default credentials (postgres/postgres, the Grafana admin password) with values injected from a secrets manager
- Do not publish PostgreSQL, Redis, Kafka, or Zookeeper ports outside the internal network
- Terminate TLS at the load balancer or ingress for all external traffic
- Run services with ASPNETCORE_ENVIRONMENT=Production so detailed error pages are disabled
- Keep base images up to date and scan them for known vulnerabilities
Next Steps
- Testing Strategy: Learn how to test the platform with unit, integration, and smoke tests
- Frontend Integration: Integrate your frontend with the REST APIs and handle reservation flows