Overview
This guide provides comprehensive installation instructions for the SpecKit Ticketing Platform. You’ll learn how to set up the development environment, configure services, and troubleshoot common issues.
Simplified Configuration: This project does NOT use .env files or external secrets. All configuration is pre-loaded in docker-compose.yml and appsettings.json for immediate deployment without manual file management.
Prerequisites
Before installing SpecKit, ensure your system meets these requirements.
System Requirements
Minimum Specifications:
RAM: 8 GB (16 GB recommended for optimal performance)
CPU: 4 cores (8 cores recommended)
Disk Space: 10 GB free
Operating System:
macOS 12+ (Intel or Apple Silicon)
Windows 10/11 with WSL2
Linux (Ubuntu 20.04+, Debian 11+, Fedora 35+)
Required Software
Docker Desktop
Docker Desktop 4.0+ with Docker Compose v2
Git
Git 2.30+ for version control
(Optional) .NET SDK
.NET 9 SDK - Only required for local backend development without Docker
(Optional) Node.js
Node.js 20+ and npm - Only required for local frontend development
Installing Docker Desktop
Installation differs by platform: macOS (Homebrew or direct download), Windows (winget), Ubuntu/Debian, and Fedora. On macOS with Homebrew:
brew install --cask docker
# Start Docker Desktop from Applications
For the other platforms, follow the official Docker installation instructions for your operating system.
Verify Docker Installation
Confirm Docker is installed and running:
# Check Docker version
docker --version
# Expected: Docker version 24.0.0 or higher
# Check Docker Compose version
docker compose version
# Expected: Docker Compose version v2.20.0 or higher
# Test Docker
docker run hello-world
# Should download and run a test container
If you see permission errors on Linux, add your user to the docker group with sudo usermod -aG docker $USER, then log out and back in (or run newgrp docker).
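If you want to script the minimum-version check, a small helper can compare a Docker version string against the 24.0.0 minimum. This is an illustrative sketch (the `version_ge` helper and the hardcoded `ver` value are hypothetical; in practice read the version from `docker version --format '{{.Server.Version}}'`):

```shell
# version_ge A B: succeeds if version A >= version B (relies on sort -V)
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

ver="24.0.7"  # illustrative; in practice: ver="$(docker version --format '{{.Server.Version}}')"
if version_ge "$ver" "24.0.0"; then
  echo "Docker version OK"
else
  echo "Docker too old - need 24.0.0 or higher"
fi
```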
Installation Methods
Choose the installation method that fits your use case.
Method 1: Docker Compose (Recommended)
This is the recommended approach for most users. It runs all services in containers with zero manual configuration.
Pros:
No local .NET or Node.js installation required
Consistent environment across all platforms
One command to start everything
Automatic service dependency management
Cons:
Higher resource usage (all services in containers)
Slower code reload during development
Method 2: Hybrid (Docker Infrastructure + Local Services)
Run infrastructure (PostgreSQL, Redis, Kafka) in Docker but run microservices locally for faster development iteration.
Pros:
Fast code reload with dotnet watch
Lower resource usage (fewer containers)
Direct debugging in IDE
Cons:
Requires .NET 9 SDK and Node.js installed
More complex setup
Manual service startup
Method 3: Full Local (Advanced)
Run everything locally without Docker. Only recommended for advanced users or systems where Docker isn’t available.
Pros:
Maximum control
Lowest latency
Cons:
Complex setup (PostgreSQL, Redis, Kafka installation)
Platform-specific issues
Manual dependency management
This guide focuses on Method 1 (Docker Compose). For hybrid or full local setup, see the Development Guide.
Installation Steps
Step 1: Clone the Repository
Clone the SpecKit repository:
Step 2: Understand the Project Structure
speckit-ticketing/
├── infra/ # Infrastructure configuration
│ ├── docker-compose.yml # Service orchestration
│ ├── db/
│ │ └── init-schemas.sql # Database schema initialization
│ └── kafka-init.sh # Kafka topic creation
├── services/ # Microservices
│ ├── catalog/
│ ├── inventory/
│ ├── ordering/
│ ├── payment/
│ ├── fulfillment/
│ ├── notification/
│ └── identity/
├── frontend/ # Next.js application
├── README.md
└── FRONTEND_API_GUIDE.md
Step 3: Navigate to Infrastructure Directory
All Docker Compose commands should be run from the infra/ directory.
cd speckit-ticketing/infra
Step 4: Review Docker Compose Configuration
Before starting services, let’s understand what docker-compose.yml configures:
Infrastructure Services
PostgreSQL
postgres:
  image: postgres:17
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_DB: ticketing
  volumes:
    - postgres-data:/var/lib/postgresql/data
    - ./db/init-schemas.sql:/docker-entrypoint-initdb.d/01-init-schemas.sql
Port : 5432 (standard PostgreSQL port)
Database : ticketing
Schemas : bc_identity, bc_catalog, bc_inventory, bc_ordering, bc_payment, bc_fulfillment, bc_notification
Persistence : Data stored in Docker volume postgres-data
Redis
redis:
  image: redis:7
  ports:
    - "6379:6379"
Port : 6379 (standard Redis port)
Usage : Distributed locks, reservation TTL management
Kafka + Zookeeper
zookeeper:
  image: confluentinc/cp-zookeeper:7.5.0
  ports:
    - "2181:2181"

kafka:
  image: confluentinc/cp-kafka:7.5.0
  ports:
    - "9092:9092"
  depends_on:
    - zookeeper
Kafka Port : 9092
Zookeeper Port : 2181
Topics : reservation-created, reservation-expired, payment-succeeded, payment-failed, ticket-issued
Application Services
Each microservice follows this pattern:
service-name:
  build:
    context: ..
    dockerfile: services/service-name/src/Api/Dockerfile
  ports:
    - "PORT:PORT"
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ConnectionStrings__Default=Host=postgres;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_service;Include Error Detail=true
    - Kafka__BootstrapServers=kafka:9092
  depends_on:
    postgres:
      condition: service_healthy
    redis:
      condition: service_healthy
    kafka:
      condition: service_healthy
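The service_healthy conditions above only work if the infrastructure services define healthchecks. As an illustration, a healthcheck for PostgreSQL might look like the following (a sketch; the exact check in this project's docker-compose.yml may differ):

```yaml
postgres:
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 3s
    retries: 10
```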
Service Ports:
Step 5: Start All Services
Start the entire platform:
docker compose up -d
What Happens:
Image Download/Build
Docker downloads base images (PostgreSQL, Redis, Kafka) and builds service images from Dockerfiles. First run takes 5-10 minutes.
Network Creation
Docker creates network speckit-net for inter-service communication.
Volume Creation
Docker creates volumes for PostgreSQL data and fulfillment storage.
Infrastructure Startup
PostgreSQL, Redis, and Kafka start and run health checks.
Schema Initialization
PostgreSQL runs init-schemas.sql to create all service schemas.
Kafka Topics Creation
kafka-init container creates required topics and exits.
Service Startup
Microservices start in dependency order (wait for healthy infrastructure).
Frontend Startup
Next.js app builds and starts on port 3000.
Step 6: Monitor Startup
Watch logs to confirm successful startup:
# Follow all logs
docker compose logs -f
# Follow specific service
docker compose logs -f catalog
# Follow multiple services
docker compose logs -f postgres kafka inventory
Key Messages to Watch For:
✅ PostgreSQL:
LOG: database system is ready to accept connections
✅ Redis:
Ready to accept connections tcp
✅ Kafka:
[KafkaServer id=1] started
✅ Kafka Topics:
Topics created successfully!
✅ Service (example - Catalog):
Now listening on: http://[::]:5001
Press Ctrl+C to stop following logs. Services continue running in the background.
Step 7: Verify Health
Check all containers are running and healthy:
docker compose ps
Expected output:
NAME STATUS
speckit-catalog Up (healthy)
speckit-fulfillment Up (healthy)
speckit-identity Up (healthy)
speckit-inventory Up (healthy)
speckit-kafka Up (healthy)
speckit-kafka-init Exited (0)
speckit-notification Up (healthy)
speckit-ordering Up (healthy)
speckit-payment Up (healthy)
speckit-postgres Up (healthy)
speckit-redis Up (healthy)
speckit-zookeeper Up (healthy)
speckit-frontend Up
kafka-init shows Exited (0) - this is normal. It’s a one-time initialization container.
Test health endpoints:
# Test all services
for port in 50000 50001 50002 5003 5004 50004 50005; do
  echo "Testing port $port:"
  curl -s http://localhost:$port/health | jq .
done
Expected response from each:
{
  "status": "Healthy",
  "service": "ServiceName"
}
Frontend:
Open http://localhost:3000 in your browser.
API Services:
All services expose REST APIs:
Database:
Connect with any PostgreSQL client:
Host: localhost
Port: 5432
Database: ticketing
Username: postgres
Password: postgres
Recommended Tools:
Environment Configuration
Connection Strings
All connection strings are pre-configured in docker-compose.yml. You don’t need to create .env files.
PostgreSQL (from host):
Host=localhost;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_<service>
PostgreSQL (from container):
Host=postgres;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_<service>
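These connection strings use Npgsql's semicolon-separated key=value format. As an illustration, here is a small hypothetical shell helper (`get_field` is not part of the project) that pulls a single field out of such a string:

```shell
# get_field KEY CONNSTRING: print the value for KEY in a key=value;key=value string
get_field() {
  printf '%s' "$2" | tr ';' '\n' | awk -F= -v k="$1" '$1 == k { print $2 }'
}

conn="Host=localhost;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_catalog"
get_field SearchPath "$conn"  # prints: bc_catalog
```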
Redis (from host):
localhost:6379
Redis (from container):
redis:6379
Kafka (from host):
localhost:9092
Kafka (from container):
kafka:9092
Database Schemas
Each service has its own PostgreSQL schema:
Schema           Service        Ownership
bc_identity      Identity       User accounts, roles, tokens
bc_catalog       Catalog        Events, venues, seats (read model)
bc_inventory     Inventory      Reservations, seat availability
bc_ordering      Ordering       Orders, carts, order items
bc_payment       Payment        Payment records, transactions
bc_fulfillment   Fulfillment    Tickets, QR codes, PDFs
bc_notification  Notification   Email logs, notification queue
Schema Initialization:
Schemas are created automatically on first startup via infra/db/init-schemas.sql:
CREATE SCHEMA IF NOT EXISTS bc_identity;
CREATE SCHEMA IF NOT EXISTS bc_catalog;
CREATE SCHEMA IF NOT EXISTS bc_inventory;
CREATE SCHEMA IF NOT EXISTS bc_ordering;
CREATE SCHEMA IF NOT EXISTS bc_payment;
CREATE SCHEMA IF NOT EXISTS bc_fulfillment;
CREATE SCHEMA IF NOT EXISTS bc_notification;
Migrations:
Each service runs EF Core migrations scoped to its schema:
# Example: Run migrations for Catalog service
cd services/catalog/src/Api
dotnet ef database update
Never manually edit init-schemas.sql in production. Use EF Core migrations to manage schema changes.
Kafka Topics
Topics are created automatically by the kafka-init container:
Topic                Producer     Consumers                         Purpose
reservation-created  Inventory    Ordering                          New seat reservation
reservation-expired  Inventory    Ordering, Notification            Reservation TTL elapsed
payment-succeeded    Payment      Ordering, Inventory, Fulfillment  Payment completed
payment-failed       Payment      Ordering, Notification            Payment failed
ticket-issued        Fulfillment  Notification                      Ticket generated
Topic Configuration:
# List topics
docker exec speckit-kafka kafka-topics --list --bootstrap-server localhost:9092
# Describe a topic
docker exec speckit-kafka kafka-topics --describe --topic reservation-created --bootstrap-server localhost:9092
# Consume messages (debug)
docker exec speckit-kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic reservation-created --from-beginning
Service Configuration
Services are configured via environment variables in docker-compose.yml:
Common Environment Variables:
environment:
  # ASP.NET Core
  - ASPNETCORE_ENVIRONMENT=Development
  - ASPNETCORE_URLS=http://+:5000
  # Database
  - ConnectionStrings__Default=Host=postgres;Port=5432;Database=ticketing;Username=postgres;Password=postgres;SearchPath=bc_service
  # Redis (Inventory only)
  - ConnectionStrings__Redis=redis:6379
  # Kafka
  - Kafka__BootstrapServers=kafka:9092
Service-Specific Variables:
Notification Service:
- SmtpEmailOptions__UseDevMode=true
- SmtpEmailOptions__SmtpServer=localhost
- SmtpEmailOptions__SmtpPort=587
- [email protected]
In development, emails are logged to console instead of sent.
Troubleshooting
Common Issues
Issue: Port Already in Use
Error:
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:5432 -> 0.0.0.0:0: listen tcp 0.0.0.0:5432: bind: address already in use
Cause: Another service (e.g., local PostgreSQL) is using the port.
Solution 1 - Stop conflicting service:
macOS/Linux:
# Find the process using the port
lsof -i :5432
# Kill the process (replace <PID>)
kill -9 <PID>
Windows (PowerShell):
# Find the process using the port
netstat -ano | findstr :5432
# Stop the process (replace <PID>)
Stop-Process -Id <PID> -Force
Solution 2 - Change port mapping:
Edit docker-compose.yml:
postgres:
  ports:
    - "15432:5432"  # Changed from 5432:5432
Update any host-side connection strings (e.g., your PostgreSQL client) to use localhost:15432. Containers still reach PostgreSQL at postgres:5432 on the internal network, so service connection strings in docker-compose.yml are unaffected.
Issue: Services Fail Health Checks
Error:
WARNING: Service "catalog" is unhealthy
Diagnosis:
# Check service logs
docker compose logs catalog
# Check container status
docker inspect speckit-catalog
# Test health endpoint manually
curl http://localhost:50001/health
Common Causes:
Database migration failed
Npgsql.PostgresException: 42P01: relation "catalog.events" does not exist
Fix: Ensure init-schemas.sql ran successfully:
docker compose logs postgres | grep "init-schemas.sql"
Kafka not ready
Connection to Kafka failed: kafka:9092
Fix: Wait for Kafka to fully start:
docker compose logs kafka | grep "started"
Build failure
ERROR [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:9.0
Fix: Rebuild images:
docker compose build --no-cache
docker compose up -d
Issue: Database Connection Refused
Error:
Npgsql.NpgsqlException: Failed to connect to [::1]:5432
Cause: PostgreSQL hasn’t finished initializing.
Fix:
# Wait for PostgreSQL to be ready
docker compose logs postgres -f
# Look for:
# "database system is ready to accept connections"
# Check health
docker compose ps postgres
# Should show: Up (healthy)
Issue: Kafka Topics Not Created
Error:
Kafka topic 'reservation-created' does not exist
Diagnosis:
# Check kafka-init logs
docker compose logs kafka-init
# List topics
docker exec speckit-kafka kafka-topics --list --bootstrap-server localhost:9092
Fix:
# Manually create topics
docker exec speckit-kafka kafka-topics --create \
--if-not-exists \
--bootstrap-server localhost:9092 \
--replication-factor 1 \
--partitions 1 \
--topic reservation-created
# Repeat for other topics:
# reservation-expired, payment-succeeded, payment-failed, ticket-issued
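Rather than repeating the command per topic, the five topics can be created in one loop (a sketch; it assumes the speckit-kafka container is running and tolerates per-topic failures):

```shell
TOPICS="reservation-created reservation-expired payment-succeeded payment-failed ticket-issued"
for topic in $TOPICS; do
  docker exec speckit-kafka kafka-topics --create --if-not-exists \
    --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 \
    --topic "$topic" || echo "could not create $topic (is Kafka up?)"
done
```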
Issue: Frontend Build Fails
Error:
ERROR: failed to solve: process "/bin/sh -c npm run build" did not complete successfully
Cause: Missing environment variables or dependency issues.
Fix:
# Check frontend build logs
docker compose logs frontend
# Rebuild with no cache
docker compose build --no-cache frontend
docker compose up -d frontend
Issue: High Memory Usage
Symptom: System becomes slow, Docker uses excessive RAM.
Diagnosis:
# Check container resource usage
docker stats
Fix:
Adjust Docker Desktop resource limits:
macOS/Windows:
Open Docker Desktop
Settings → Resources
Increase Memory limit to 8 GB minimum
Click “Apply & Restart”
Linux:
Docker uses available system memory. Close other applications.
Issue: “Reservation not found” in Ordering
Error:
{
"error" : "Reservation not found"
}
Cause: Kafka event hasn’t propagated from Inventory to Ordering.
Fix:
Wait 2-3 seconds after creating reservation before adding to cart:
# Reserve seat
curl -X POST http://localhost:50002/reservations -d '{...}'
# WAIT
sleep 3
# Add to cart
curl -X POST http://localhost:5003/cart/add -d '{...}'
Verify event flow:
# Consume reservation-created events
docker exec speckit-kafka kafka-console-consumer \
--bootstrap-server localhost:9092 \
--topic reservation-created \
--from-beginning
Reset Everything
For a complete fresh start:
# Stop all containers
docker compose down
# Remove volumes (DELETES ALL DATA)
docker compose down -v
# Remove images (forces rebuild)
docker compose down --rmi all
# Remove everything including orphaned resources
docker system prune -a --volumes
# Start fresh
docker compose up -d
This deletes ALL data including database contents, reservations, orders, and tickets. Only use when you need a completely clean slate.
Production Considerations
The default configuration is NOT production-ready . The following sections outline required changes for production deployment.
Security
1. Change Default Credentials
Never use default credentials in production:
# DO NOT USE IN PRODUCTION:
environment:
  POSTGRES_PASSWORD: postgres  # ❌
Use secrets management:
environment:
  POSTGRES_PASSWORD_FILE: /run/secrets/db_password
secrets:
  db_password:
    external: true
2. Enable TLS
Enable SSL for PostgreSQL, Redis, and Kafka:
environment:
  - ConnectionStrings__Default=Host=postgres;Port=5432;Database=ticketing;Username=postgres;Password=${DB_PASSWORD};SSL Mode=Require;Trust Server Certificate=true
3. Network Isolation
Don’t expose infrastructure ports publicly:
postgres:
  ports:
    - "127.0.0.1:5432:5432"  # Only localhost
# Better: expose no ports at all and rely on the internal Docker network
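One way to remove host exposure entirely is an internal network. A sketch (the network and service names here are illustrative):

```yaml
networks:
  backend:
    internal: true  # containers on this network can reach each other, but not the host or internet

services:
  postgres:
    image: postgres:17
    networks:
      - backend  # no ports: section - unreachable from the host
  catalog:
    networks:
      - backend  # services join the same network to reach postgres
```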
Scalability
1. Separate Databases
Use separate PostgreSQL instances per service for true independence:
postgres-catalog:
  image: postgres:17
  environment:
    POSTGRES_DB: catalog

postgres-inventory:
  image: postgres:17
  environment:
    POSTGRES_DB: inventory
2. Redis Cluster
Use Redis Cluster or Sentinel for high availability:
redis:
  image: redis:7-alpine
  command: redis-server --appendonly yes --cluster-enabled yes
3. Kafka Cluster
Deploy multi-broker Kafka cluster:
kafka-1:
  environment:
    KAFKA_BROKER_ID: 1
kafka-2:
  environment:
    KAFKA_BROKER_ID: 2
kafka-3:
  environment:
    KAFKA_BROKER_ID: 3
4. Horizontal Scaling
Scale services with multiple replicas:
# Scale Catalog service to 3 instances
docker compose up -d --scale catalog=3
Requires load balancer (nginx, Traefik, or cloud LB).
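With recent Docker Compose versions the replica count can also be declared in the file instead of on the command line (a sketch; behavior varies by Compose version and is most reliable under Swarm mode):

```yaml
catalog:
  deploy:
    replicas: 3  # ask Compose to run three catalog containers
```

Note that replicas cannot all bind the same host port; drop the ports: mapping on the scaled service and route traffic through the load balancer instead.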
Monitoring
1. Centralized Logging
Use ELK stack or cloud logging:
catalog:
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"
2. Metrics & Tracing
Enable OpenTelemetry exporters:
environment:
  - OpenTelemetry__Endpoint=http://jaeger:4317
  - OpenTelemetry__ServiceName=catalog-service
3. Health Monitoring
Use external health check services (Datadog, New Relic, etc.).
Backup
PostgreSQL Backup:
# Automated daily backups
docker exec speckit-postgres pg_dumpall -U postgres > backup_$(date +%Y%m%d).sql
# Restore
cat backup_20260304.sql | docker exec -i speckit-postgres psql -U postgres
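To automate this from cron, a sketch of a daily backup with simple 7-day retention (the retention logic is illustrative; in a real setup, verify pg_dumpall succeeded before pruning anything):

```shell
# Daily backup: one .sql file per day, keeping only the 7 most recent
backup_file="backup_$(date +%Y%m%d).sql"
docker exec speckit-postgres pg_dumpall -U postgres > "$backup_file" 2>/dev/null \
  || echo "backup failed (is speckit-postgres running?)"

# Prune everything older than the newest 7 backups
ls -1t backup_*.sql | tail -n +8 | while read -r old; do
  rm -f -- "$old"
done
```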
Redis Persistence:
Enable AOF (Append-Only File):
redis:
  command: redis-server --appendonly yes
  volumes:
    - redis-data:/data
Next Steps
API Reference Explore all available endpoints
Architecture Deep dive into system design
Development Set up local development
Deployment Production deployment guide
Additional Resources
Join our community on Discord for help with installation issues and to share your SpecKit deployment stories!