Overview

Blnk uses PostgreSQL 16 as its primary data store. This guide covers production database setup, migrations, connection pooling, and backup strategies.

Requirements

  • PostgreSQL 16 or later
  • Minimum 4GB RAM (8GB+ recommended for production)
  • SSD storage for optimal performance
  • Regular backup storage (separate from database storage)

Installation

Docker Compose

postgres:
  image: postgres:16
  environment:
    POSTGRES_USER: blnk
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: blnk
    TZ: Etc/UTC
  volumes:
    - pg_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "blnk"]
    interval: 10s
    timeout: 5s
    retries: 5

The service above references a named volume; declare it at the top level of the compose file or the container will fail to start:

volumes:
  pg_data:

Ubuntu/Debian

# Add PostgreSQL repository
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

# Install PostgreSQL 16
sudo apt-get update
sudo apt-get install -y postgresql-16 postgresql-client-16

macOS

brew install postgresql@16
brew services start postgresql@16

Database Schema

Blnk uses a dedicated blnk schema with the following core tables:

Core Tables

ledgers - Financial ledgers
CREATE TABLE blnk.ledgers (
    id         SERIAL PRIMARY KEY,
    name       TEXT,
    ledger_id  TEXT NOT NULL UNIQUE,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    meta_data  JSONB
);
balances - Account balances
CREATE TABLE blnk.balances (
    id                  SERIAL PRIMARY KEY,
    balance_id          TEXT NOT NULL UNIQUE,
    indicator           TEXT,
    balance             BIGINT NOT NULL,
    credit_balance      BIGINT NOT NULL,
    debit_balance       BIGINT NOT NULL,
    currency            TEXT NOT NULL,
    currency_multiplier BIGINT NOT NULL,
    ledger_id           TEXT NOT NULL REFERENCES blnk.ledgers(ledger_id),
    identity_id         TEXT REFERENCES blnk.identity(identity_id),
    created_at          TIMESTAMP NOT NULL DEFAULT NOW(),
    modification_ref    TEXT,
    meta_data           JSONB
);
transactions - Financial transactions
CREATE TABLE blnk.transactions (
    id             SERIAL PRIMARY KEY,
    transaction_id TEXT NOT NULL UNIQUE,
    source         TEXT,
    destination    TEXT,
    description    TEXT,
    reference      TEXT,
    amount         BIGINT,
    currency       TEXT,
    status         TEXT,
    hash           TEXT,
    created_at     TIMESTAMP NOT NULL DEFAULT NOW(),
    scheduled_for  TIMESTAMP,
    meta_data      JSONB,
    CONSTRAINT fk_source_balance FOREIGN KEY (source) REFERENCES blnk.balances(balance_id),
    CONSTRAINT fk_destination_balance FOREIGN KEY (destination) REFERENCES blnk.balances(balance_id)
);
identity - User/entity information
CREATE TABLE blnk.identity (
    id                SERIAL PRIMARY KEY,
    identity_id       TEXT NOT NULL UNIQUE,
    first_name        TEXT NOT NULL,
    last_name         TEXT NOT NULL,
    other_names       TEXT,
    gender            TEXT,
    dob               DATE,
    email_address     TEXT,
    phone_number      TEXT,
    nationality       TEXT,
    street            TEXT,
    country           TEXT,
    state             TEXT,
    organization_name TEXT,
    category          TEXT,
    identity_type     TEXT,
    post_code         TEXT,
    city              TEXT,
    created_at        TIMESTAMP NOT NULL DEFAULT NOW(),
    meta_data         JSONB
);
accounts - External account mappings
CREATE TABLE blnk.accounts (
    id          SERIAL PRIMARY KEY,
    account_id  TEXT NOT NULL UNIQUE,
    name        TEXT NOT NULL,
    number      TEXT NOT NULL UNIQUE,
    bank_name   TEXT NOT NULL,
    currency    TEXT NOT NULL,
    created_at  TIMESTAMP NOT NULL DEFAULT NOW(),
    ledger_id   TEXT NOT NULL REFERENCES blnk.ledgers(ledger_id),
    identity_id TEXT NOT NULL REFERENCES blnk.identity(identity_id),
    balance_id  TEXT NOT NULL REFERENCES blnk.balances(balance_id),
    meta_data   JSONB
);
balance_monitors - Balance threshold alerts
CREATE TABLE blnk.balance_monitors (
    id            SERIAL PRIMARY KEY,
    monitor_id    TEXT NOT NULL UNIQUE,
    balance_id    TEXT NOT NULL REFERENCES blnk.balances(balance_id),
    field         TEXT NOT NULL CHECK (field IN ('debit_balance', 'credit_balance', 'balance')),
    operator      TEXT NOT NULL CHECK (operator IN ('>', '<', '>=', '<=', '=')),
    value         BIGINT NOT NULL,
    description   TEXT,
    call_back_url TEXT,
    created_at    TIMESTAMP NOT NULL DEFAULT NOW()
);

Indexes

Critical indexes for performance:
CREATE INDEX idx_transactions_reference ON blnk.transactions(reference);
CREATE INDEX idx_balances_indicator ON blnk.balances(indicator);
CREATE UNIQUE INDEX idx_unique_indicator_on_non_nulls ON blnk.balances(indicator) WHERE indicator IS NOT NULL;

Database Migrations

Blnk includes built-in migration support using SQL files in the /sql directory.

Running Migrations

Using Blnk CLI:
blnk migrate up
Using Docker:
docker-compose exec server blnk migrate up
Separate Migration Container:
migration:
  image: jerryenebeli/blnk:0.13.2
  entrypoint: ["blnk", "migrate", "up"]
  restart: on-failure
  depends_on:
    - postgres
  volumes:
    - ./blnk.json:/blnk.json

Migration Files

Migrations are timestamp-based SQL files:
sql/
├── 1708676327.sql  # Initial schema
├── 1711640389.sql  # Add indexes
├── 1712395973.sql  # Add balance monitors
└── ...
Each migration file uses +migrate Up and +migrate Down directives:
-- +migrate Up
CREATE TABLE blnk.new_table (
    id SERIAL PRIMARY KEY
);

-- +migrate Down
DROP TABLE IF EXISTS blnk.new_table;

Connection Configuration

Connection String Format

postgres://username:password@host:port/database?sslmode=require

Configuration in blnk.json

{
  "data_source": {
    "dns": "postgres://blnk:password@localhost:5432/blnk?sslmode=require",
    "max_open_conns": 25,
    "max_idle_conns": 10,
    "conn_max_lifetime": "30m",
    "conn_max_idle_time": "5m"
  }
}

Environment Variables

Alternatively, use environment variables:
export BLNK_DATA_SOURCE_DNS="postgres://blnk:password@localhost:5432/blnk?sslmode=require"
export BLNK_DATABASE_MAX_OPEN_CONNS=25
export BLNK_DATABASE_MAX_IDLE_CONNS=10
export BLNK_DATABASE_CONN_MAX_LIFETIME=30m
export BLNK_DATABASE_CONN_MAX_IDLE_TIME=5m
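
The DSN can be assembled from individual variables in a startup script; a sketch with placeholder values (the variable names are illustrative, not required by Blnk):

```shell
#!/bin/sh
# Placeholder credentials -- substitute real values or source them from a secrets manager
DB_USER=blnk
DB_PASS=password
DB_HOST=localhost
DB_PORT=5432
DB_NAME=blnk

# Build the DSN Blnk expects and export it
BLNK_DATA_SOURCE_DNS="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=require"
export BLNK_DATA_SOURCE_DNS
echo "$BLNK_DATA_SOURCE_DNS"
```

If the password contains special characters such as @ or /, it must be percent-encoded before being embedded in the URL.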

Connection Pool Tuning

Default Settings

Blnk’s default connection pool settings (from config/config.go:72-77):
defaultDatabase = DataSourceConfig{
    MaxOpenConns:    25,
    MaxIdleConns:    10,
    ConnMaxLifetime: 30 * time.Minute,
    ConnMaxIdleTime: 5 * time.Minute,
}

Production Recommendations

High-traffic deployment:
{
  "data_source": {
    "max_open_conns": 100,
    "max_idle_conns": 25,
    "conn_max_lifetime": "1h",
    "conn_max_idle_time": "10m"
  }
}
Low-resource environment:
{
  "data_source": {
    "max_open_conns": 10,
    "max_idle_conns": 5,
    "conn_max_lifetime": "15m",
    "conn_max_idle_time": "3m"
  }
}

Connection Pool Formula

max_connections = (core_count * 2) + effective_spindle_count
For a 4-core server with an SSD (counted as one effective spindle):
max_connections = (4 * 2) + 1 = 9, rounded up to 10-25 in practice
Note that max_open_conns applies per Blnk instance; the total across all instances (plus any other clients) must stay below PostgreSQL's max_connections.
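
The arithmetic above can be scripted to pick a starting pool size for a given host (the result is a starting point for tuning, not a hard rule):

```shell
core_count=4            # physical cores on the database host
effective_spindles=1    # treat an SSD as a single effective spindle
pool=$(( core_count * 2 + effective_spindles ))
echo "suggested pool size: $pool"
```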

PostgreSQL Configuration

postgresql.conf Settings

For production workloads, tune PostgreSQL:
# Memory settings
shared_buffers = 2GB                  # 25% of RAM
effective_cache_size = 6GB            # 75% of RAM
work_mem = 64MB                       # RAM / max_connections
maintenance_work_mem = 512MB

# Checkpoint settings
wal_buffers = 16MB
checkpoint_completion_target = 0.9
max_wal_size = 2GB
min_wal_size = 1GB

# Connection settings
max_connections = 100

# Query planner
random_page_cost = 1.1                # For SSD
effective_io_concurrency = 200        # For SSD

# Logging
log_min_duration_statement = 1000     # Log queries > 1s
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
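
The 25%/75% rules of thumb from the comments above can be computed directly; a sketch for an 8 GB host:

```shell
ram_gb=8   # total RAM on the database host
echo "shared_buffers = $(( ram_gb / 4 ))GB"            # ~25% of RAM
echo "effective_cache_size = $(( ram_gb * 3 / 4 ))GB"  # ~75% of RAM
```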

Apply Configuration

sudo systemctl restart postgresql

SSL/TLS Configuration

Enable SSL in PostgreSQL

  1. Generate SSL certificates:
openssl req -new -x509 -days 365 -nodes -text -out server.crt \
  -keyout server.key -subj "/CN=blnk.example.com"
chmod 600 server.key
  2. Update postgresql.conf:
ssl = on
ssl_cert_file = '/path/to/server.crt'
ssl_key_file = '/path/to/server.key'
  3. Update connection string:
postgres://blnk:password@localhost:5432/blnk?sslmode=require

SSL Modes

  • disable - No SSL (not recommended for production)
  • require - SSL required, no certificate verification
  • verify-ca - SSL required, verify CA
  • verify-full - SSL required, verify CA and hostname
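
With verify-ca or verify-full, the client must also trust the server's CA via libpq's sslrootcert parameter; a sketch of a verify-full connection string (host and certificate path are placeholders):

```text
postgres://blnk:password@db.example.com:5432/blnk?sslmode=verify-full&sslrootcert=/etc/ssl/certs/ca.crt
```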

Backup Strategies

pg_dump Backup

Full backup:
pg_dump -h localhost -U blnk -d blnk -F c -f blnk_backup_$(date +%Y%m%d).dump
Schema only:
pg_dump -h localhost -U blnk -d blnk --schema-only > schema.sql
Data only:
pg_dump -h localhost -U blnk -d blnk --data-only > data.sql

Automated Backup Script

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backups/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup (assumes credentials via ~/.pgpass or PGPASSWORD)
mkdir -p "$BACKUP_DIR"
pg_dump -h localhost -U blnk -d blnk -F c -f "$BACKUP_DIR/blnk_$DATE.dump"

# Compress
gzip "$BACKUP_DIR/blnk_$DATE.dump"

# Remove old backups
find "$BACKUP_DIR" -name "blnk_*.dump.gz" -mtime +$RETENTION_DAYS -delete

# Upload to S3 (optional)
# aws s3 cp "$BACKUP_DIR/blnk_$DATE.dump.gz" s3://your-bucket/backups/
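
To run the script nightly, schedule it with cron (the script path and the 02:00 run time are examples):

```text
# m h dom mon dow  command
0 2 * * * /usr/local/bin/blnk_backup.sh >> /var/log/blnk_backup.log 2>&1
```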

Continuous Archiving (WAL)

For point-in-time recovery, enable WAL archiving:
# postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
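
To recover from the WAL archive, the server being restored needs a matching restore_command and, optionally, a recovery target; a sketch assuming the archive path above (the timestamp is an example):

```text
# postgresql.conf on the server being recovered
restore_command = 'cp /mnt/archive/%f %p'
recovery_target_time = '2026-03-04 12:00:00'   # optional point-in-time target
```

On PostgreSQL 12+, create an empty recovery.signal file in the data directory to start recovery.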

Restore from Backup

# Drop and recreate database
psql -U postgres -c "DROP DATABASE blnk;"
psql -U postgres -c "CREATE DATABASE blnk;"

# Restore from dump
pg_restore -h localhost -U blnk -d blnk blnk_backup_20260304.dump

Monitoring

Health Checks

Blnk’s health endpoint checks database connectivity (cmd/server.go:158-180):
curl http://localhost:5001/health
Response:
{"status": "UP"}
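
In a monitoring script, the response body can be checked for the UP status; a sketch (the response is simulated here so the snippet is self-contained; in production capture it with curl):

```shell
# Simulated response; in production: response=$(curl -s http://localhost:5001/health)
response='{"status": "UP"}'

if echo "$response" | grep -q '"status": "UP"'; then
  echo "database reachable"
else
  echo "database DOWN" >&2
fi
```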

PostgreSQL Statistics

-- Connection count
SELECT count(*) FROM pg_stat_activity;

-- Active queries
SELECT pid, usename, application_name, state, query 
FROM pg_stat_activity 
WHERE state != 'idle';

-- Database size
SELECT pg_size_pretty(pg_database_size('blnk'));

-- Table sizes
SELECT schemaname, tablename, 
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'blnk'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

Slow Query Log

Enable slow query logging:
log_min_duration_statement = 1000  # Log queries > 1s
View slow queries:
grep "duration:" /var/log/postgresql/postgresql-16-main.log | sort -t: -k2 -n
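
Durations can be pulled out of individual log lines for quick analysis; a sketch against a hypothetical log line matching the log_line_prefix configured above:

```shell
# Hypothetical log line in the configured format
line='2026-03-04 10:00:00 UTC [1234]: [1-1] user=blnk,db=blnk,app=blnk,client=10.0.0.5 LOG:  duration: 1523.004 ms  statement: SELECT 1'

# Extract just the duration value in milliseconds
echo "$line" | grep -o 'duration: [0-9.]* ms' | awk '{print $2}'
```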

Performance Optimization

Vacuum and Analyze

-- Vacuum and analyze all tables
VACUUM ANALYZE;

-- Specific table
VACUUM ANALYZE blnk.transactions;

Autovacuum Configuration

autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05

Query Optimization

Use EXPLAIN ANALYZE:
EXPLAIN ANALYZE
SELECT * FROM blnk.transactions WHERE reference = 'TXN123';

Replication Setup

Primary Configuration

# postgresql.conf on primary
wal_level = replica
max_wal_senders = 10
wal_keep_size = 1GB

Standby Configuration

# Create base backup
pg_basebackup -h primary-host -D /var/lib/postgresql/16/main -U replication -P

# Create standby.signal
touch /var/lib/postgresql/16/main/standby.signal
# postgresql.conf on standby
primary_conninfo = 'host=primary-host port=5432 user=replication password=secret'

Troubleshooting

Connection Refused

Check if PostgreSQL is running:
sudo systemctl status postgresql
Check pg_hba.conf:
host    blnk    blnk    0.0.0.0/0    md5
In production, restrict the CIDR to your application network and prefer scram-sha-256 over md5.

Too Many Connections

Increase max_connections or reduce connection pool:
max_connections = 200

Slow Queries

Add missing indexes:
CREATE INDEX idx_transactions_created_at ON blnk.transactions(created_at);
On a busy system, use CREATE INDEX CONCURRENTLY to avoid blocking writes while the index builds.
