
Microservices Design

SpecKit’s microservices architecture divides the ticketing platform into seven independent services, each responsible for a specific business capability. This design enables independent deployment, scaling, and development.

Service Catalog

Catalog Service

Port: 5001
Responsibility: Event browsing and seat information
Capabilities:
  • List all events
  • Get event details
  • Retrieve seat maps with real-time availability
Technology:
  • PostgreSQL (schema: bc_catalog)
  • Kafka consumer (updates seat status from inventory)

Inventory Service

Port: 5002
Responsibility: Seat reservations and availability management
Capabilities:
  • Create seat reservations with distributed locking
  • Manage reservation expiry (15-minute TTL)
  • Publish reservation lifecycle events
Technology:
  • PostgreSQL (schema: bc_inventory)
  • Redis (distributed locks)
  • Kafka producer/consumer
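The distributed locking mentioned above can be sketched with StackExchange.Redis. This is a minimal, hypothetical illustration, not the actual implementation: the key format, TTL, and `SeatLockService` name are assumptions.

```csharp
using StackExchange.Redis;

// Hypothetical sketch: acquire a per-seat lock before creating a reservation,
// so two customers cannot reserve the same seat concurrently.
public class SeatLockService
{
    private readonly IDatabase _redis;
    private static readonly TimeSpan LockTtl = TimeSpan.FromSeconds(10);

    public SeatLockService(IConnectionMultiplexer mux) => _redis = mux.GetDatabase();

    public async Task<bool> TryReserveAsync(Guid seatId, Func<Task> reserve)
    {
        var key = $"lock:seat:{seatId:N}";            // illustrative key format
        var token = Guid.NewGuid().ToString("N");     // owner token for safe release

        // LockTakeAsync is atomic: only one caller holds the key until TTL expiry.
        if (!await _redis.LockTakeAsync(key, token, LockTtl))
            return false; // another request is currently reserving this seat

        try
        {
            await reserve(); // insert the reservation row, publish the event, etc.
            return true;
        }
        finally
        {
            // Releases only if the token still matches (we still own the lock).
            await _redis.LockReleaseAsync(key, token);
        }
    }
}
```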

Ordering Service

Port: 5003
Responsibility: Shopping cart and order management
Capabilities:
  • Maintain draft orders (cart)
  • Checkout and order finalization
  • Validate reservations before purchase
Technology:
  • PostgreSQL (schema: bc_ordering)
  • Kafka consumer (reservation events)
  • In-memory ReservationStore

Payment Service

Port: 5004
Responsibility: Payment processing and validation
Capabilities:
  • Process payments (simulated gateway)
  • Validate order and reservation states
  • Publish payment success/failure events
Technology:
  • PostgreSQL (schema: bc_payment)
  • Kafka producer/consumer
  • Event-based validation

Fulfillment Service

Port: 5005
Responsibility: Ticket generation and delivery
Capabilities:
  • Generate digital tickets
  • Link tickets to completed payments
  • Publish ticket-issued events
Technology:
  • PostgreSQL (schema: bc_fulfillment)
  • Kafka consumer

Identity Service

Port: 5000
Responsibility: User authentication and authorization
Capabilities:
  • User registration and login
  • JWT token generation
  • Guest token management
Technology:
  • PostgreSQL (schema: bc_identity)
  • BCrypt password hashing
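JWT generation could follow the standard `System.IdentityModel.Tokens.Jwt` pattern. The sketch below is an assumption about how the Identity service might issue tokens; the issuer, audience, claim names, and lifetime are illustrative.

```csharp
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

// Hypothetical token issuer; values below are illustrative assumptions.
public static class TokenIssuer
{
    public static string IssueToken(string userId, string signingKey, bool isGuest = false)
    {
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "identity-service",
            audience: "ticketing",
            claims: new[]
            {
                new Claim(JwtRegisteredClaimNames.Sub, userId),
                // Guest tokens carry a flag so downstream services can restrict them.
                new Claim("guest", isGuest.ToString().ToLowerInvariant())
            },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```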

Notification Service

Port: 5006
Responsibility: Customer notifications
Capabilities:
  • Send ticket delivery emails
  • Track notification history
  • SMTP integration
Technology:
  • PostgreSQL (schema: bc_notification)
  • Kafka consumer
  • SMTP client

Service Boundaries

Boundary Principles

Rule: Each service exclusively owns its data schema
// services/inventory/src/Infrastructure/Persistence/DbInitializer.cs
public async Task InitializeAsync()
{
    var connection = _db.Database.GetDbConnection();
    await connection.OpenAsync();
    
    // Create dedicated schema
    using var createCommand = connection.CreateCommand();
    createCommand.CommandText = @"
        CREATE SCHEMA IF NOT EXISTS bc_inventory;
        ALTER SCHEMA bc_inventory OWNER TO postgres;
    ";
    await createCommand.ExecuteNonQueryAsync();
    
    // Apply migrations only for this service
    await _db.Database.MigrateAsync();
}
Benefits:
  • No cross-schema foreign keys
  • Independent schema evolution
  • Clear data boundaries

Communication Patterns

1. Synchronous REST Communication

Use Case: Frontend queries Catalog Service for events
// services/catalog/src/Api/Controllers/EventsController.cs
[ApiController]
[Route("api/[controller]")]
public class EventsController : ControllerBase
{
    private readonly IMediator _mediator;

    public EventsController(IMediator mediator) => _mediator = mediator;

    [HttpGet]
    public async Task<IActionResult> GetAllEvents(
        CancellationToken cancellationToken)
    {
        var query = new GetAllEventsQuery();
        var response = await _mediator.Send(query, cancellationToken);
        return Ok(response);
    }

    [HttpGet("{eventId}/seatmap")]
    public async Task<IActionResult> GetEventSeatmap(
        Guid eventId,
        CancellationToken cancellationToken)
    {
        var query = new GetEventSeatmapQuery(eventId);
        var response = await _mediator.Send(query, cancellationToken);
        return Ok(response);
    }
}
Handler Implementation:
// services/catalog/src/Application/UseCases/GetAllEvents/GetAllEventsHandler.cs
public class GetAllEventsHandler 
    : IRequestHandler<GetAllEventsQuery, GetAllEventsResponse>
{
    private readonly ICatalogRepository _repository;

    public GetAllEventsHandler(ICatalogRepository repository) => _repository = repository;

    public async Task<GetAllEventsResponse> Handle(
        GetAllEventsQuery request,
        CancellationToken cancellationToken)
    {
        var events = await _repository.GetAllEventsAsync(cancellationToken);
        
        return new GetAllEventsResponse(
            events.Select(e => new EventDto
            {
                Id = e.Id,
                Name = e.Name,
                Date = e.Date,
                Location = e.Location
            })
        );
    }
}
When to Use:
  • User-initiated queries requiring immediate response
  • Operations within a single bounded context
  • Simple CRUD operations

2. Asynchronous Event-Driven Communication

Use Case: Reservation lifecycle across multiple services

Step 1: Inventory publishes reservation-created event
// services/inventory/src/Application/UseCases/CreateReservation/CreateReservationCommandHandler.cs
private async Task PublishReservationCreatedEvent(
    Reservation reservation, 
    Seat seat, 
    CancellationToken cancellationToken)
{
    var @event = new ReservationCreatedEvent(
        EventId: Guid.NewGuid().ToString("D"),
        ReservationId: reservation.Id.ToString("D"),
        CustomerId: reservation.CustomerId,
        SeatId: reservation.SeatId.ToString("D"),
        SeatNumber: $"{seat.Section}-{seat.Row}-{seat.Number}",
        Section: seat.Section,
        BasePrice: 0m,
        CreatedAt: reservation.CreatedAt,
        ExpiresAt: reservation.ExpiresAt,
        Status: reservation.Status
    );

    var json = JsonSerializer.Serialize(@event, _jsonOptions);
    await _kafkaProducer.ProduceAsync(
        "reservation-created", 
        json, 
        reservation.SeatId.ToString("N")
    );
}
Step 2: Ordering consumes event and updates local state
// services/ordering/src/Infrastructure/Events/ReservationEventConsumer.cs
public class ReservationEventConsumer : BackgroundService
{
    protected override async Task ExecuteAsync(
        CancellationToken stoppingToken)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = _kafkaOptions.BootstrapServers,
            GroupId = _kafkaOptions.ConsumerGroupId,
            AutoOffsetReset = AutoOffsetReset.Earliest,
            EnableAutoCommit = true
        };

        using var consumer = new ConsumerBuilder<string, string>(config)
            .Build();

        consumer.Subscribe(new[] { 
            "reservation-created", 
            "reservation-expired",
            "payment-succeeded" 
        });

        while (!stoppingToken.IsCancellationRequested)
        {
            var consumeResult = consumer.Consume(stoppingToken);
            await ProcessMessage(
                consumeResult.Topic, 
                consumeResult.Message.Value, 
                stoppingToken
            );
        }
    }

    private async Task ProcessMessage(
        string topic, 
        string messageValue, 
        CancellationToken cancellationToken)
    {
        using var scope = _serviceProvider.CreateScope();
        var reservationStore = scope.ServiceProvider
            .GetRequiredService<ReservationStore>();

        switch (topic)
        {
            case "reservation-created":
                var createdEvent = JsonSerializer
                    .Deserialize<ReservationCreatedEvent>(messageValue);
                reservationStore.AddReservation(createdEvent);
                break;

            case "reservation-expired":
                var expiredEvent = JsonSerializer
                    .Deserialize<ReservationExpiredEvent>(messageValue);
                reservationStore.RemoveReservation(expiredEvent);
                break;
        }
    }
}
When to Use:
  • Cross-service workflows (checkout → payment → fulfillment)
  • Long-running processes
  • Workflows where services must remain decoupled
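The `_kafkaProducer` used in Step 1 is an abstraction whose implementation is not shown. A minimal wrapper over Confluent.Kafka's `ProducerBuilder` might look like the following; the interface name and method shape are assumptions inferred from the call site above.

```csharp
using Confluent.Kafka;

// Hypothetical wrapper matching the ProduceAsync(topic, value, key) call
// seen in the inventory service's publish code.
public interface IKafkaProducer
{
    Task ProduceAsync(string topic, string value, string key);
}

public class KafkaProducer : IKafkaProducer, IDisposable
{
    private readonly IProducer<string, string> _producer;

    public KafkaProducer(string bootstrapServers)
    {
        var config = new ProducerConfig { BootstrapServers = bootstrapServers };
        _producer = new ProducerBuilder<string, string>(config).Build();
    }

    public async Task ProduceAsync(string topic, string value, string key)
    {
        // Keying by seat ID keeps all events for one seat on the same partition,
        // preserving their relative ordering for downstream consumers.
        await _producer.ProduceAsync(topic,
            new Message<string, string> { Key = key, Value = value });
    }

    public void Dispose() => _producer.Dispose();
}
```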

Service Independence Strategies

Strategy: Each service maintains its own projection of shared data

Example: Ordering service needs reservation data
// services/ordering/src/Infrastructure/Events/ReservationStore.cs
public class ReservationStore
{
    // In-memory cache of active reservations
    private readonly ConcurrentDictionary<string, ReservationCreatedEvent> 
        _activeReservations = new();

    public void AddReservation(ReservationCreatedEvent @event)
    {
        _activeReservations[@event.ReservationId] = @event;
    }

    public void RemoveReservation(ReservationExpiredEvent @event)
    {
        _activeReservations.TryRemove(@event.ReservationId, out _);
    }

    public ReservationCreatedEvent? GetReservation(string seatId)
    {
        return _activeReservations.Values
            .FirstOrDefault(r => r.SeatId == seatId 
                && r.Status == "active");
    }
}
Benefits:
  • No direct database queries to Inventory service
  • Eventually consistent read model
  • Ordering service remains operational even if Inventory is down
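To illustrate the benefit, checkout validation can run entirely against the local projection rather than calling the Inventory service. This is a hypothetical sketch; the `CheckoutValidator` class and its checks are assumptions built on the `ReservationStore` shown above.

```csharp
// Hypothetical: validate a checkout using only the local event-sourced
// projection, with no network call to the Inventory service.
public class CheckoutValidator
{
    private readonly ReservationStore _reservationStore;

    public CheckoutValidator(ReservationStore store) => _reservationStore = store;

    public bool CanCheckout(string seatId, string customerId, out string? reason)
    {
        var reservation = _reservationStore.GetReservation(seatId);
        if (reservation is null)
        {
            reason = "No active reservation for this seat.";
            return false;
        }
        if (reservation.CustomerId != customerId)
        {
            reason = "Seat is reserved by another customer.";
            return false;
        }

        reason = null;
        return true; // safe to proceed to payment
    }
}
```

Because expired reservations are removed by the `reservation-expired` consumer, a stale entry can linger only for the propagation delay of that event, which is the eventual-consistency trade-off described above.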

Service Discovery & Configuration

# infra/docker-compose.yml (simplified)
version: '3.8'

services:
  # Infrastructure
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ticketing
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  kafka:
    image: confluentinc/cp-kafka:7.5.0
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
    ports:
      - "9092:9092"

  # Microservices
  inventory-service:
    build: ../services/inventory
    environment:
      - ASPNETCORE_URLS=http://+:5002
      - ConnectionStrings__Default=Host=postgres;Database=ticketing;...
      - ConnectionStrings__Redis=redis:6379
      - ConnectionStrings__Kafka=kafka:9092
    ports:
      - "5002:5002"
    depends_on:
      - postgres
      - redis
      - kafka

  ordering-service:
    build: ../services/ordering
    environment:
      - ASPNETCORE_URLS=http://+:5003
      - ConnectionStrings__Default=Host=postgres;Database=ticketing;...
      - Kafka__BootstrapServers=kafka:9092
    ports:
      - "5003:5003"
Service Discovery:
  • Docker Compose provides internal DNS (e.g., kafka, postgres, redis)
  • Services reference each other by container name
  • For Kubernetes deployment, consider using service mesh (Istio, Linkerd)

Deployment Strategies

Independent Deployment

Each service has its own:
  • Dockerfile
  • Build pipeline
  • Versioning
  • Rollback capability
Services can be deployed without coordinating releases.
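A per-service Dockerfile would typically follow the standard multi-stage .NET build pattern. The sketch below is illustrative; the project paths and .NET version are assumptions.

```dockerfile
# Hypothetical multi-stage build for one service; paths are illustrative.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish src/Api/Api.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Api.dll"]
```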

Database Migrations

Each service manages its own migrations:
// Auto-apply migrations on startup
using (var scope = app.Services.CreateScope())
{
    var dbInit = scope.ServiceProvider
        .GetRequiredService<IDbInitializer>();
    await dbInit.InitializeAsync();
}
Uses EF Core schema-specific migrations:
  • __EFMigrationsHistory in bc_inventory
  • __EFMigrationsHistory in bc_ordering
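Keeping each service's migration history in its own schema can be configured through the Npgsql EF Core provider's `MigrationsHistoryTable` option. The registration below is a sketch; the context and configuration names are assumptions.

```csharp
// Hypothetical startup registration for the inventory service:
// scope both the model and the migrations history table to bc_inventory.
services.AddDbContext<InventoryDbContext>(options =>
    options.UseNpgsql(
        configuration.GetConnectionString("Default"),
        npgsql => npgsql.MigrationsHistoryTable(
            "__EFMigrationsHistory", "bc_inventory")));
```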

Anti-Patterns to Avoid

Direct Database Access: Never query another service's database directly. Use events or APIs.
// ❌ BAD: Ordering service queries Inventory database
var seat = await inventoryDbContext.Seats.FindAsync(seatId);

// ✅ GOOD: Ordering service uses event-sourced data
var reservation = _reservationStore.GetReservation(seatId);
Shared Domain Models: Don't share domain entities between services. Use DTOs/events.
// ❌ BAD: Shared.dll with common entities
public class Reservation { ... } // Used by both Inventory and Ordering

// ✅ GOOD: Each service has its own model
// Inventory service
public class Reservation { ... }

// Ordering service
public class ReservationCreatedEvent { ... } // Event contract only
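Such an event contract can be a plain record that each consuming service owns independently. The sketch below mirrors the fields the inventory service publishes in Step 1 earlier; the exact property types are assumptions.

```csharp
// Hypothetical event contract owned by the Ordering service; field names
// mirror the publisher code in the Inventory service shown earlier.
public record ReservationCreatedEvent(
    string EventId,
    string ReservationId,
    string CustomerId,
    string SeatId,
    string SeatNumber,
    string Section,
    decimal BasePrice,
    DateTime CreatedAt,
    DateTime ExpiresAt,
    string Status);
```

Duplicating this small record in each consumer keeps the services compile-time independent: a change to Inventory's internal `Reservation` entity cannot force a redeploy of Ordering.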

Event-Driven Architecture

Learn how Kafka enables service choreography

Hexagonal Architecture

Understand the internal structure of each service

CQRS Pattern

See how commands and queries are separated

System Architecture

View the complete architecture overview
