
Overview

captaind uses PostgreSQL as its primary data store for:
  • VTXOs: User virtual UTXOs and their state
  • Rounds: Payment round history and transactions
  • Virtual Transactions: Off-chain transaction data
  • Lightning: Invoice and payment tracking
  • Wallets: BDK wallet state and changesets
  • Integrations: Third-party API access control
The database schema is managed via migrations using the Refinery framework.

Schema Overview

Core Tables

VTXOs:
  • vtxo: Active VTXOs and their metadata
  • vtxo_history: Audit trail of VTXO changes
  • vtxo_pool: Pre-issued liquidity VTXOs
  • vtxo_mailbox: Mailbox delivery for VTXOs
  • arkoor_mailbox: Off-chain transfer mailbox
Rounds:
  • round: Completed payment rounds
  • round_participation: User participation records
  • round_part_input: Input VTXOs per participation
  • round_part_output: Output VTXO requests per participation
Transactions:
  • virtual_transaction: Off-chain transaction data
  • virtual_transaction_history: Transaction change history
  • bitcoin_transaction: On-chain transactions
  • offboards: Pending offboard (exit) transactions
Lightning:
  • lightning_node: Connected CLN nodes
  • lightning_invoice: Generated invoices
  • lightning_payment_attempt: Payment attempts
  • lightning_htlc_subscription: HTLC tracking for receives
  • History tables: *_history for audit trails
Wallets:
  • wallet_changeset: BDK wallet state changes
System:
  • captaind_block: Block height tracking
  • watchmand_block: Watchman block tracking
  • checkpoint_state: Monotonic ID generation
  • ephemeral_tweak: Temporary cosign keys
Integrations:
  • integration: Third-party integrations
  • integration_api_key: API keys for integrations
  • integration_token: Single-use tokens
  • integration_token_config: Token type configuration
  • History tables: *_history for audit trails

Key Relationships

Schema Details

VTXO Table

CREATE TABLE vtxo (
    id bigint PRIMARY KEY,
    vtxo_id text NOT NULL UNIQUE,
    vtxo bytea NOT NULL,  -- Serialized VTXO data
    expiry integer NOT NULL,  -- Block height
    oor_spent_txid text,  -- Off-chain spend
    spent_in_round bigint,  -- Round that spent this
    lightning_htlc_subscription_id bigint,  -- If locked for LN
    offboarded_in text,  -- Offboard tx ID
    vtxo_txid text,  -- Virtual transaction ID
    created_at timestamp with time zone NOT NULL,
    updated_at timestamp with time zone NOT NULL
);

-- Key indexes
CREATE UNIQUE INDEX vtxo_vtxo_id_uix ON vtxo(vtxo_id);
CREATE INDEX vtxo_spendable_ix ON vtxo(
    (oor_spent_txid IS NULL),
    (spent_in_round IS NULL),
    (offboarded_in IS NULL),
    vtxo_id
);
Spendable Condition: A VTXO is spendable when:
  • oor_spent_txid IS NULL (not spent off-chain)
  • spent_in_round IS NULL (not spent in a round)
  • offboarded_in IS NULL (not offboarded)
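The three NULL checks map onto the expression columns of vtxo_spendable_ix, so a query of the following shape (a sketch, not necessarily the exact query captaind issues) can be served by that index:

```sql
-- Sketch: list currently spendable VTXOs. The boolean expressions
-- match the columns of vtxo_spendable_ix.
SELECT vtxo_id, expiry
FROM vtxo
WHERE oor_spent_txid IS NULL
  AND spent_in_round IS NULL
  AND offboarded_in IS NULL
ORDER BY vtxo_id;
```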

Round Table

CREATE TABLE round (
    id bigint PRIMARY KEY,
    seq bigint NOT NULL UNIQUE,  -- Round sequence number
    funding_txid text NOT NULL UNIQUE,
    funding_tx bytea NOT NULL,  -- Serialized transaction
    signed_tree bytea NOT NULL,  -- Serialized VTXO tree
    expiry integer NOT NULL,  -- Block height
    swept_at timestamp with time zone,  -- When swept by watchman
    created_at timestamp with time zone NOT NULL
);

CREATE INDEX round_expiry_ix ON round(
    expiry,
    (swept_at IS NULL),
    funding_txid
);
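round_expiry_ix exists to make expiry sweeping cheap; a sweep-candidate query might look like the following sketch (the actual watchman query may differ):

```sql
-- Sketch: rounds whose VTXO trees have expired but that the
-- watchman has not yet swept. $1 is the current block height.
SELECT funding_txid
FROM round
WHERE expiry <= $1
  AND swept_at IS NULL;
```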

Virtual Transaction Table

CREATE TABLE virtual_transaction (
    txid text PRIMARY KEY,
    signed_tx bytea,  -- NULL if not signed yet
    is_funding boolean NOT NULL,  -- Is this a round funding tx?
    server_may_own_descendant_since timestamp with time zone,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL
);
server_may_own_descendant_since: Timestamp after which the server may own outputs of descendant transactions (used for Lightning HTLCs).

Lightning Invoice Table

CREATE TABLE lightning_invoice (
    id bigint PRIMARY KEY,
    invoice text NOT NULL UNIQUE,
    payment_hash bytea NOT NULL UNIQUE,
    final_amount_msat bigint,  -- NULL if pending
    preimage bytea UNIQUE,  -- NULL until settled
    created_at timestamp with time zone NOT NULL,
    updated_at timestamp with time zone NOT NULL
);

Migrations

Migration System

Migrations are stored in server/src/database/migrations/ and applied automatically on startup:
migrations/
├── V0__init.sql                    # Initial schema
├── V1__board_swept.sql
├── V10__round_id_column_in_vtxo.sql
├── V11__round_seq.sql
├── V12__tokens.sql
├── V13__ephemeral_keys.sql
├── V14__drop_views.sql
├── V15__add_index_uniqueness.sql
├── V16__offchain_ln_receive.sql
├── V17__board_sweeps.sql
├── V18__ln_receive_htlc_cltv.sql
├── V19__hardcoded_captaind_api_keys.sql
├── V20__unreleased.sql
├── V21__blinded_mailbox.sql
├── V22__hark.sql
├── V23__best_block.sql
└── V24__offboard_swaps.sql
Naming Convention: V{version}__{description}.sql
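A migration file is plain SQL. As an illustration, a hypothetical next migration might look like this (the version number, filename, and column are invented):

```sql
-- server/src/database/migrations/V25__example.sql (hypothetical)
ALTER TABLE vtxo ADD COLUMN example_note text;
```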

Applying Migrations

Migrations are applied automatically when captaind starts. To manually check migration status:
# View migration history
psql bark-server-db -c "SELECT * FROM refinery_schema_history ORDER BY version;"

Rolling Back Migrations

No Automatic Rollback: Refinery does not support automatic rollbacks; rolling back requires manual intervention.
To rollback:
  1. Create a new forward migration that undoes the changes
  2. Or restore from a database backup
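For example, if a hypothetical migration V25 had added a column, the rollback would itself be a new forward migration:

```sql
-- V26__revert_example.sql (hypothetical)
ALTER TABLE vtxo DROP COLUMN IF EXISTS example_note;
```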

Database Maintenance

Vacuum and Analyze

Regular maintenance for performance:
# Full vacuum (reclaims space)
psql bark-server-db -c "VACUUM FULL;"

# Analyze (updates statistics)
psql bark-server-db -c "ANALYZE;"

# Vacuum analyze (both)
psql bark-server-db -c "VACUUM ANALYZE;"
Recommended Schedule:
  • VACUUM ANALYZE: Daily
  • VACUUM FULL: Monthly (requires table lock)
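This schedule can be automated with cron; the entries below are a sketch and the times are arbitrary:

```
# Daily VACUUM ANALYZE at 03:00
0 3 * * * psql bark-server-db -c "VACUUM ANALYZE;"
# Monthly VACUUM FULL on the 1st at 04:00 (takes exclusive table locks)
0 4 1 * * psql bark-server-db -c "VACUUM FULL;"
```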

Reindex

Rebuilds all indexes:
psql bark-server-db -c "REINDEX DATABASE \"bark-server-db\";"
When to Reindex:
  • After major data changes
  • If query performance degrades
  • After VACUUM FULL
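REINDEX can also be scoped more narrowly than the whole database, which limits locking:

```sql
-- Rebuild one table's indexes
REINDEX TABLE vtxo;
-- Rebuild a single index
REINDEX INDEX vtxo_spendable_ix;
-- PostgreSQL 12+: rebuild without blocking writes
REINDEX TABLE CONCURRENTLY vtxo;
```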

Table Statistics

Check table sizes:
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size,
    pg_total_relation_size(schemaname||'.'||tablename) AS size_bytes
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY size_bytes DESC;
Example output (sizes vary by deployment; the size_bytes column is omitted here):
 schemaname |          tablename          |  size   
------------+-----------------------------+---------
 public     | vtxo                        | 150 MB
 public     | virtual_transaction         | 80 MB
 public     | vtxo_history                | 60 MB
 public     | round                       | 40 MB

Backup and Restore

Backup Database

Full dump:
pg_dump -U postgres -d bark-server-db -F c -f backup.dump
SQL format (human-readable):
pg_dump -U postgres -d bark-server-db -f backup.sql
Compressed:
pg_dump -U postgres -d bark-server-db | gzip > backup.sql.gz

Restore Database

From custom format:
pg_restore -U postgres -d bark-server-db-new backup.dump
From SQL:
psql -U postgres -d bark-server-db-new -f backup.sql
From compressed:
gunzip -c backup.sql.gz | psql -U postgres -d bark-server-db-new
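A backup is only useful if it is intact. For custom-format dumps, pg_restore --list prints the dump's table of contents without touching any database; for compressed SQL dumps, gzip can verify the archive:

```shell
# Verify the gzip stream; prints a warning if the file is corrupt or missing
gzip -t backup.sql.gz || echo "backup.sql.gz is corrupt"
```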

Continuous Archiving (WAL)

For point-in-time recovery, enable WAL archiving in postgresql.conf:
wal_level = replica
archive_mode = on
archive_command = 'cp %p /path/to/wal_archive/%f'
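For recovery, restore_command reads files back from the same archive; a minimal sketch:

```
restore_command = 'cp /path/to/wal_archive/%f %p'
```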
See PostgreSQL documentation for details.

Monitoring

Connection Pool Metrics

Metrics exposed via OpenTelemetry:
  • bark_postgres_connections: Total connections
  • bark_postgres_idle_connections: Idle in pool
  • bark_postgres_connections_created: Created count
  • bark_postgres_connections_closed_*: Close reasons

Query Performance

Enable query logging in postgresql.conf:
log_statement = 'all'
log_duration = on
log_min_duration_statement = 1000  # Log queries >1s
View slow queries:
tail -f /var/log/postgresql/postgresql-13-main.log | grep duration

Active Connections

SELECT 
    pid,
    usename,
    application_name,
    state,
    query,
    query_start
FROM pg_stat_activity
WHERE datname = 'bark-server-db';

Lock Monitoring

SELECT 
    locktype,
    relation::regclass,
    mode,
    granted,
    pid
FROM pg_locks
WHERE NOT granted;
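When a waiting lock appears, pg_blocking_pids() shows which sessions hold it:

```sql
-- For each waiting backend, list the PIDs blocking it and its query
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```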

Troubleshooting

Connection Pool Exhausted

Symptom: Failed to get database connection
Cause: All pool connections are in use
Solution:
# Increase pool size in captaind.toml
[postgres]
max_connections = 20  # Was 10
Also check PostgreSQL max connections:
SHOW max_connections;

Slow Queries

Symptom: Database operations taking >1 second
Diagnosis:
-- Enable query stats extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- View slowest queries
SELECT 
    query,
    mean_exec_time,
    calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
Solutions:
  • Add missing indexes
  • Run VACUUM ANALYZE
  • Optimize query logic
  • Upgrade hardware (SSD)
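Before adding an index, confirm the plan with EXPLAIN; the query below is only an illustrative target:

```sql
-- ANALYZE executes the query and reports actual row counts and timing
EXPLAIN (ANALYZE, BUFFERS)
SELECT vtxo_id
FROM vtxo
WHERE oor_spent_txid IS NULL
  AND spent_in_round IS NULL
  AND offboarded_in IS NULL;
```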

Disk Space Issues

Symptom: no space left on device
Check Usage:
du -sh /var/lib/postgresql/13/main/
Solutions:
  • Run VACUUM FULL to reclaim space
  • Archive old data (not yet implemented)
  • Increase disk size
  • Delete old WAL files (if archiving enabled)

Migration Failures

Symptom: Server fails to start with a migration error
Diagnosis:
SELECT * FROM refinery_schema_history ORDER BY version DESC LIMIT 5;
Solution:
  1. Review failed migration in migrations/ directory
  2. Fix issue manually or restore from backup
  3. Update refinery_schema_history version
  4. Restart captaind
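If a failed migration's changes were applied (or reverted) by hand, its history row may need to be removed so Refinery can re-run it; the version number below is hypothetical:

```sql
-- Hypothetical: forget the failed migration so it can be re-applied.
-- Only do this after manually reconciling its partial effects.
DELETE FROM refinery_schema_history WHERE version = 24;
```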

Best Practices

Schedule automated backups:
# Cron job for daily backups
0 2 * * * pg_dump -U postgres bark-server-db | gzip > /backups/bark-$(date +\%Y\%m\%d).sql.gz
Retention policy:
  • Daily backups: 7 days
  • Weekly backups: 4 weeks
  • Monthly backups: 12 months
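The retention policy can be enforced with a second cron job; the directory and filename pattern follow the backup cron example and should be adjusted to your setup:

```
# Prune daily backups older than 7 days (runs after the 02:00 dump)
30 2 * * * find /backups -name 'bark-*.sql.gz' -mtime +7 -delete
```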
Alert on:
  • Connection pool usage >80%
  • Table size growth rate
  • Slow query count
  • Replication lag (if applicable)
Index maintenance:
  • Monitor index usage:
    SELECT * FROM pg_stat_user_indexes WHERE idx_scan = 0;
  • Drop unused indexes
  • Add indexes for frequent queries
Vacuum tuning:
  • Enable autovacuum (on by default)
  • Tune autovacuum settings for the workload
  • Run manual VACUUM ANALYZE during low-traffic periods
Security:
  • Use strong PostgreSQL passwords
  • Limit network access (firewall)
  • Enable SSL for remote connections
  • Apply PostgreSQL version updates regularly

Schema Export

To export the complete schema:
pg_dump -U postgres -d bark-server-db --schema-only -f schema.sql
The full schema is also available in the source repository: server/schema.sql
