Overview

The Product Distribution Dashboard uses PostgreSQL 16 as its database with Flyway for schema migrations. The application supports different database configurations for development and production environments.

PostgreSQL Setup

Development Environment

For local development, PostgreSQL runs in Docker via Docker Compose:
postgres:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: product_distribution_db
    POSTGRES_USER: product_distribution
    POSTGRES_PASSWORD: product_distribution
  ports:
    - "5432:5432"
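As written, the snippet stores data inside the container, so it is lost when the container is removed. A named volume (the name pg_data below is an assumption, not taken from the project) can be added to persist data across restarts:

```yaml
postgres:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: product_distribution_db
    POSTGRES_USER: product_distribution
    POSTGRES_PASSWORD: product_distribution
  ports:
    - "5432:5432"
  volumes:
    # Persist database files outside the container lifecycle
    - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:
```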

Connection Settings

Development Profile

spring.datasource.url=${SPRING_DATASOURCE_URL}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
spring.datasource.driverClassName=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
Environment Variables:
SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/product_distribution_db
SPRING_DATASOURCE_USERNAME=product_distribution
SPRING_DATASOURCE_PASSWORD=product_distribution

Production Profile

spring.datasource.url=jdbc:postgresql://${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME}?sslmode=require&reWriteBatchedInserts=true
spring.datasource.username=${DATABASE_USERNAME}
spring.datasource.password=${DATABASE_PASSWORD}
Environment Variables:
DATABASE_HOST=prod-postgres.example.com
DATABASE_PORT=5432
DATABASE_NAME=product_distribution_db
DATABASE_USERNAME=prod_user
DATABASE_PASSWORD=secure_password
Production Features:
  • SSL encryption enabled (sslmode=require)
  • Batch insert optimization (reWriteBatchedInserts=true)

HikariCP Connection Pooling

The production profile uses HikariCP for connection pooling with the following configuration:
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000

Connection Pool Settings

Setting              Value       Description
maximum-pool-size    10          Maximum number of connections in the pool
minimum-idle         5           Minimum number of idle connections maintained
connection-timeout   30000 ms    Maximum time to wait for a connection (30 s)
idle-timeout         600000 ms   Maximum idle time before a connection is retired (10 min)
max-lifetime         1800000 ms  Maximum lifetime of a connection (30 min)

Performance Optimization

The production profile includes Hibernate batch processing configuration:
spring.jpa.properties.hibernate.jdbc.batch_size=100
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.connection.provider_disables_autocommit=false
  • Batch Size: 100 operations per batch
  • Ordered Operations: Inserts and updates are ordered for better batching efficiency
  • Autocommit: provider_disables_autocommit is left false, so Hibernate itself disables autocommit when it begins a transaction, preserving transactional consistency
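As an illustration of what reWriteBatchedInserts does: the PostgreSQL JDBC driver rewrites a batch of identical single-row inserts into multi-row statements, reducing round trips. Roughly:

```sql
-- Instead of sending 100 separate statements of the form:
INSERT INTO product_items (id, product_id, size, quantity) VALUES (?, ?, ?, ?);
-- the driver sends multi-row statements such as:
INSERT INTO product_items (id, product_id, size, quantity)
VALUES (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?) /* ... */;
```

Combined with hibernate.jdbc.batch_size and ordered inserts, this is what makes bulk persistence of inventory rows efficient.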

Flyway Migrations

The application uses Flyway for database schema versioning and migrations. JPA is configured to validate the schema rather than auto-generate it:
spring.jpa.hibernate.ddl-auto=validate
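Spring Boot auto-configures Flyway when it is on the classpath; no custom spring.flyway properties are assumed here. The defaults below are implicit and shown only for reference:

```properties
# Spring Boot defaults for Flyway (implicit; listed for reference only)
spring.flyway.enabled=true
spring.flyway.locations=classpath:db/migration
spring.flyway.baseline-on-migrate=false
```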

Migration Files

Migrations are located in src/main/resources/db/migration/:
  1. V1__Create_products_table.sql - Products and sizes
  2. V2__Create_stores_table.sql - Store locations and capacity
  3. V3__Create_warehouses_table.sql - Warehouse locations
  4. V4__Create_product_items_table.sql - Inventory items
  5. V5__Create_stock_assignments_table.sql - Distribution assignments
  6. V6__Create_unfulfilled_demands_table.sql - Unfulfilled demand tracking
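New migrations follow the same V<version>__<description>.sql naming convention. A hypothetical V7 adding a column might look like:

```sql
-- src/main/resources/db/migration/V7__Add_products_name.sql (hypothetical example)
ALTER TABLE products ADD COLUMN name VARCHAR(255);
```

Existing migration files must never be edited once applied: Flyway records a checksum for each applied migration and fails validation on startup if a file changes.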

Database Schema

Products

CREATE TABLE products (
    id VARCHAR(255) PRIMARY KEY,
    brand_id VARCHAR(255) NOT NULL
);

CREATE TABLE product_sizes (
    product_id VARCHAR(255) NOT NULL,
    sizes VARCHAR(255) NOT NULL,
    PRIMARY KEY (product_id, sizes),
    FOREIGN KEY (product_id) REFERENCES products(id) ON DELETE CASCADE
);
Stores product information and available sizes.

Stores

CREATE TABLE stores (
    id VARCHAR(255) PRIMARY KEY,
    latitude DOUBLE PRECISION,
    longitude DOUBLE PRECISION,
    country VARCHAR(255),
    max_stock_capacity INTEGER,
    expected_return_rate DOUBLE PRECISION,
    remaining_capacity INTEGER
);
Stores retail location data, capacity limits, and return rate expectations.

Warehouses

CREATE TABLE warehouses (
    id VARCHAR(255) PRIMARY KEY,
    latitude DOUBLE PRECISION,
    longitude DOUBLE PRECISION,
    country VARCHAR(255)
);
Stores warehouse location information for distribution calculations.

Product Items

CREATE TABLE product_items (
    id UUID PRIMARY KEY,
    product_id VARCHAR(255) NOT NULL,
    size VARCHAR(255) NOT NULL,
    quantity INTEGER NOT NULL,
    store_id VARCHAR(255),
    warehouse_id VARCHAR(255),
    FOREIGN KEY (store_id) REFERENCES stores(id) ON DELETE CASCADE,
    FOREIGN KEY (warehouse_id) REFERENCES warehouses(id) ON DELETE CASCADE
);

CREATE INDEX idx_product_items_store_id ON product_items(store_id);
CREATE INDEX idx_product_items_warehouse_id ON product_items(warehouse_id);
CREATE INDEX idx_product_items_product_size ON product_items(product_id, size);
Tracks individual inventory items at stores and warehouses.
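The composite index on (product_id, size) supports aggregate stock lookups such as the following (the IDs are hypothetical):

```sql
-- Total quantity on hand for one product and size, across stores and warehouses
SELECT product_id, size, SUM(quantity) AS total_quantity
FROM product_items
WHERE product_id = 'product-1' AND size = 'M'
GROUP BY product_id, size;
```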

Stock Assignments

CREATE TABLE stock_assignments (
    id UUID PRIMARY KEY,
    store_id VARCHAR(255) NOT NULL,
    warehouse_id VARCHAR(255) NOT NULL,
    product_id VARCHAR(255) NOT NULL,
    size VARCHAR(255) NOT NULL,
    quantity INTEGER NOT NULL,
    distance_km DOUBLE PRECISION,
    FOREIGN KEY (store_id) REFERENCES stores(id) ON DELETE CASCADE,
    FOREIGN KEY (warehouse_id) REFERENCES warehouses(id) ON DELETE CASCADE
);

CREATE INDEX idx_stock_assignments_store_id ON stock_assignments(store_id);
CREATE INDEX idx_stock_assignments_warehouse_id ON stock_assignments(warehouse_id);
CREATE INDEX idx_stock_assignments_product_id ON stock_assignments(product_id);
CREATE INDEX idx_stock_assignments_warehouse_store_product ON stock_assignments(warehouse_id, store_id, product_id);
CREATE INDEX idx_stock_assignments_warehouse_store_distance ON stock_assignments(warehouse_id, store_id, distance_km);
Records distribution assignments from warehouses to stores with distance tracking.
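A typical distance-based query (hypothetical IDs) lists a warehouse's assignments nearest-first:

```sql
-- Assignments from one warehouse, ordered by distance to the receiving store
SELECT store_id, product_id, size, quantity, distance_km
FROM stock_assignments
WHERE warehouse_id = 'warehouse-1'
ORDER BY distance_km
LIMIT 20;
```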

Unfulfilled Demands

CREATE TABLE unfulfilled_demands (
    id UUID PRIMARY KEY,
    store_id VARCHAR(255) NOT NULL,
    product_id VARCHAR(255) NOT NULL,
    size VARCHAR(255) NOT NULL,
    quantity_missing INTEGER NOT NULL,
    reason VARCHAR(50) NOT NULL,
    FOREIGN KEY (store_id) REFERENCES stores(id) ON DELETE CASCADE
);

CREATE INDEX idx_unfulfilled_demands_store_id ON unfulfilled_demands(store_id);
CREATE INDEX idx_unfulfilled_demands_product_id ON unfulfilled_demands(product_id);
CREATE INDEX idx_unfulfilled_demands_store_product ON unfulfilled_demands(store_id, product_id);
Tracks demand that could not be fulfilled due to insufficient stock or capacity constraints.

Database Indexing Strategy

The schema includes strategic indexes to optimize common queries:
  • Single-column indexes: Fast lookups by store, warehouse, or product ID
  • Composite indexes: Optimize multi-column WHERE clauses and JOIN operations
  • Distance indexes: Enable efficient distance-based sorting and filtering

PgAdmin

The Docker Compose setup includes PgAdmin for database administration:
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: admin@example.com
    PGADMIN_DEFAULT_PASSWORD: admin
  ports:
    - "5050:80"
Access PgAdmin at http://localhost:5050. When registering the database server inside PgAdmin, use the host name postgres (the Compose service name) rather than localhost, since PgAdmin connects from within the Compose network.

Backup and Maintenance

Backup Recommendations

  1. Automated Backups: Schedule regular PostgreSQL dumps
    pg_dump -h localhost -U product_distribution -d product_distribution_db > backup.sql
    
  2. Point-in-Time Recovery: Enable WAL archiving for production
  3. Backup Retention: Keep at least 7 days of daily backups
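The dump above could be scheduled with cron; the backup path below is an assumption:

```cron
# Daily dump at 02:00 with a date-stamped filename (% must be escaped in crontab)
0 2 * * * pg_dump -h localhost -U product_distribution -d product_distribution_db > /var/backups/pdd/backup_$(date +\%F).sql
```

Pair this with a cleanup job (or logrotate) that enforces the 7-day retention policy.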

Maintenance Tasks

  1. Vacuum: Run VACUUM ANALYZE periodically to optimize query performance
    VACUUM ANALYZE;
    
  2. Index Maintenance: Monitor index usage and rebuild fragmented indexes
    REINDEX TABLE stock_assignments;
    
  3. Connection Monitoring: Monitor active connections and pool utilization
  4. Log Analysis: Review PostgreSQL logs for slow queries and errors
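Index usage can be checked from PostgreSQL's statistics views; indexes with idx_scan = 0 over a representative period are candidates for review:

```sql
-- Scan counts per index; persistently unscanned indexes may be unnecessary
SELECT relname AS table_name, indexrelname AS index_name, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
```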

Health Checks

The Docker Compose configuration includes a health check for PostgreSQL:
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U product_distribution -d product_distribution_db"]
  interval: 10s
  timeout: 5s
  retries: 5
The backend service waits for the database to be healthy before starting.
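In Docker Compose, this startup ordering is expressed with a depends_on condition; the service name backend below is an assumption:

```yaml
backend:
  depends_on:
    postgres:
      # Start the backend only once the health check above passes
      condition: service_healthy
```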

Troubleshooting

Connection Refused

Problem: The application cannot connect to the database.
Solutions:
  • Verify PostgreSQL is running: docker ps
  • Check environment variables are set correctly
  • Ensure database host and port are reachable
  • Check firewall rules for port 5432

Migration Failures

Problem: A Flyway migration fails on startup.
Solutions:
  • Check Flyway migration history: SELECT * FROM flyway_schema_history;
  • Verify migration file checksums match
  • Ensure database user has DDL permissions
  • Review application logs for specific error messages

Connection Pool Exhaustion

Problem: "Connection pool exhausted" errors.
Solutions:
  • Increase maximum-pool-size in production configuration
  • Check for connection leaks (unclosed connections)
  • Monitor slow queries that hold connections
  • Review application logs for long-running transactions
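Active connections and long-running transactions can be inspected on the server side:

```sql
-- Connections to the application database, longest-running transactions first
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE datname = 'product_distribution_db'
ORDER BY xact_start NULLS LAST;
```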

Performance Issues

Problem: Slow query performance.
Solutions:
  • Run EXPLAIN ANALYZE on slow queries
  • Verify indexes are being used
  • Check for missing indexes on frequently queried columns
  • Consider increasing batch_size for bulk operations
  • Monitor connection pool metrics
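For example, to confirm that the composite index on stock_assignments is being used (the IDs are hypothetical):

```sql
EXPLAIN ANALYZE
SELECT quantity
FROM stock_assignments
WHERE warehouse_id = 'warehouse-1'
  AND store_id = 'store-1'
  AND product_id = 'product-1';
-- Look for an Index Scan on idx_stock_assignments_warehouse_store_product
-- rather than a Seq Scan in the plan output.
```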
