Overview
PostgreSQL is the primary database for Umami, providing reliable, ACID-compliant storage for analytics data. This guide covers installation, configuration, and optimization.
Umami requires PostgreSQL version 12.14 or higher. PostgreSQL 15 is recommended for best performance.
Database Schema
Umami’s database schema is managed by Prisma and includes the following core tables:
User Management

- `user` : User accounts and authentication
- `team` : Team organization
- `team_user` : Team membership relationships

Analytics Data

- `website` : Website configurations
- `session` : Visitor sessions with device/location data
- `website_event` : Page views and custom events
- `event_data` : Custom event properties
- `session_data` : Custom session properties

Additional Features

- `report` : Saved reports
- `segment` : User segments
- `revenue` : Revenue tracking data
- `link` : Short links
- `pixel` : Tracking pixels
Installation
Using Docker (Recommended)
The easiest way to run PostgreSQL is with Docker:
```yaml
services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: your-secure-password
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres-data:
```
Ubuntu/Debian
```shell
# Install PostgreSQL
sudo apt update
sudo apt install postgresql postgresql-contrib

# Start PostgreSQL service
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Create database and user
sudo -u postgres psql -c "CREATE DATABASE umami;"
sudo -u postgres psql -c "CREATE USER umami WITH PASSWORD 'your-password';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE umami TO umami;"

# On PostgreSQL 15+, CREATE on the public schema is no longer granted to
# all users, so make umami the database owner as well
sudo -u postgres psql -c "ALTER DATABASE umami OWNER TO umami;"
```
macOS (Homebrew)
```shell
# Install PostgreSQL
brew install postgresql@15
brew services start postgresql@15

# Create database
createdb umami
```
Connection Configuration
The DATABASE_URL environment variable uses the following format:
```shell
DATABASE_URL=postgresql://username:password@hostname:port/database?schema=public
```
Local Connection

```shell
DATABASE_URL=postgresql://umami:password@localhost:5432/umami
```

Docker Connection

```shell
# From host to Docker container
DATABASE_URL=postgresql://umami:password@localhost:5432/umami

# Between Docker containers
DATABASE_URL=postgresql://umami:password@db:5432/umami
```

Remote Connection

```shell
DATABASE_URL=postgresql://umami:password@host:5432/umami
```

SSL Connection

```shell
DATABASE_URL=postgresql://umami:password@host:5432/umami?sslmode=require
```
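Rather than editing the URL by hand, the connection string can be assembled from separate variables, which keeps each piece in one place. A minimal shell sketch (the credential values here are placeholders):

```shell
# Placeholder values -- substitute your own credentials
DB_USER=umami
DB_PASS=your-secure-password
DB_HOST=localhost
DB_PORT=5432
DB_NAME=umami

# Assemble and export the connection string Umami reads
export DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

Note that passwords containing characters with special meaning in URLs (`@`, `/`, `:`, `%`) must be percent-encoded before being placed in the connection string.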
SSL/TLS Configuration
For secure connections, add SSL parameters:
```shell
# Require SSL
DATABASE_URL=postgresql://user:pass@host:5432/umami?sslmode=require

# Verify CA certificate
DATABASE_URL=postgresql://user:pass@host:5432/umami?sslmode=verify-ca&sslrootcert=/path/to/ca.pem

# Full verification
DATABASE_URL=postgresql://user:pass@host:5432/umami?sslmode=verify-full&sslrootcert=/path/to/ca.pem
```
Schema Management with Prisma
Umami uses Prisma for database schema management and migrations.
Initial Setup
When you first build Umami, migrations run automatically:
```shell
# Build process includes migrations
pnpm build
```
Manual Migration
Run migrations manually when needed:
```shell
# Deploy migrations
pnpm prisma migrate deploy

# Generate Prisma client
pnpm prisma generate
```
View Migration Status
```shell
# Check migration status
pnpm prisma migrate status
```
Migration Files
Migration files are located in `/prisma/migrations/`:

- `01_init/migration.sql` - Initial schema
- `02_report_schema_session_data/migration.sql` - Reports and session data
- `03_metric_performance_index/migration.sql` - Performance indexes
- `04_team_redesign/migration.sql` - Team features
- `05_add_visit_id/migration.sql` - Visit tracking
- `06_session_data/migration.sql` - Session data enhancements
- `07_add_tag/migration.sql` - Event tags
- `08_add_utm_clid/migration.sql` - UTM parameters and click IDs
- `09_update_hostname_region/migration.sql` - Hostname and region tracking
- `10_add_distinct_id/migration.sql` - User identification
Database Configuration
Optimize PostgreSQL settings for analytics workloads:
```ini
# Memory settings
shared_buffers = 256MB          # 25% of RAM for small instances
effective_cache_size = 1GB      # 50-75% of RAM
work_mem = 16MB                 # For sorting and joins
maintenance_work_mem = 128MB    # For VACUUM and CREATE INDEX

# Write-ahead log
wal_buffers = 16MB
max_wal_size = 2GB
min_wal_size = 1GB

# Query planner
random_page_cost = 1.1          # For SSD storage
effective_io_concurrency = 200  # For SSD storage

# Autovacuum settings
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 10s
```

Restart PostgreSQL after modifying `postgresql.conf`: `sudo systemctl restart postgresql`
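The percentages above can be turned into concrete values with a little arithmetic. A sketch, assuming total RAM is known in megabytes (4096 MB is used here as an example; on Linux you could obtain it with `free -m`):

```shell
# Example: derive memory settings from total RAM
# (assumed 4096 MB here; e.g. TOTAL_MB=$(free -m | awk '/^Mem:/{print $2}'))
TOTAL_MB=4096

SHARED_BUFFERS_MB=$(( TOTAL_MB / 4 ))       # ~25% of RAM
EFFECTIVE_CACHE_MB=$(( TOTAL_MB * 3 / 4 ))  # ~75% of RAM

echo "shared_buffers = ${SHARED_BUFFERS_MB}MB"
echo "effective_cache_size = ${EFFECTIVE_CACHE_MB}MB"
```

For a 4 GB instance this yields `shared_buffers = 1024MB` and `effective_cache_size = 3072MB`; treat these as starting points to tune against your actual workload.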
Indexes
Umami creates indexes automatically during migration. Key indexes include:
```sql
-- Session indexes
CREATE INDEX idx_session_created_at ON session(created_at);
CREATE INDEX idx_session_website_id ON session(website_id);
CREATE INDEX idx_session_website_created ON session(website_id, created_at);

-- Event indexes
CREATE INDEX idx_website_event_created_at ON website_event(created_at);
CREATE INDEX idx_website_event_website_id ON website_event(website_id);
CREATE INDEX idx_website_event_session_id ON website_event(session_id);
CREATE INDEX idx_website_event_url_path ON website_event(website_id, created_at, url_path);

-- Event data indexes
CREATE INDEX idx_event_data_website_id ON event_data(website_id);
CREATE INDEX idx_event_data_created_key ON event_data(website_id, created_at, data_key);
```
Partitioning (Advanced)
For high-volume sites, consider table partitioning:
```sql
-- Example: Partition website_event by month
CREATE TABLE website_event_2026_03 PARTITION OF website_event
  FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');

-- Create future partitions
CREATE TABLE website_event_2026_04 PARTITION OF website_event
  FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
```
Partitioning requires modifying the schema and is recommended only for very high-traffic sites.
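Writing each month's partition DDL by hand gets tedious, so the statements are usually generated. A sketch using GNU `date` (the start month and three-month horizon are illustrative):

```shell
# Generate monthly partition DDL for three consecutive months
# (GNU date syntax; start month chosen for illustration)
for i in 0 1 2; do
  start=$(date -d "2026-03-01 +${i} month" +%Y-%m-01)
  end=$(date -d "2026-03-01 +$((i + 1)) month" +%Y-%m-01)
  suffix=$(date -d "$start" +%Y_%m)
  echo "CREATE TABLE website_event_${suffix} PARTITION OF website_event"
  echo "  FOR VALUES FROM ('${start}') TO ('${end}');"
done
```

The generated SQL can be reviewed and then piped to `psql`; a cron job running this monthly keeps future partitions available ahead of time.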
Connection Pooling
Using Prisma’s Built-in Pooling
Prisma handles connection pooling automatically. Configure pool size:
```shell
DATABASE_URL=postgresql://user:pass@host:5432/umami?connection_limit=10
```
Using PgBouncer
For multiple Umami instances, use PgBouncer:
```ini
[databases]
umami = host=localhost port=5432 dbname=umami

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
```
Update DATABASE_URL to use PgBouncer:
```shell
DATABASE_URL=postgresql://umami:password@localhost:6432/umami?pgbouncer=true
```
Backup and Restore
Database Backup
pg_dump

```shell
# Create backup
pg_dump -U umami -h localhost umami > umami_backup.sql

# Compressed backup
pg_dump -U umami -h localhost umami | gzip > umami_backup.sql.gz

# Custom format (recommended)
pg_dump -U umami -h localhost -Fc umami > umami_backup.dump
```

Docker

```shell
# Backup from Docker container
docker compose exec db pg_dump -U umami umami > backup.sql

# Compressed backup
docker compose exec db pg_dump -U umami umami | gzip > backup.sql.gz
```

Automated Backup

```shell
#!/bin/bash
BACKUP_DIR=/backups
DATE=$(date +%Y%m%d-%H%M%S)
pg_dump -U umami -h localhost -Fc umami > \
  "$BACKUP_DIR/umami-$DATE.dump"

# Keep only last 7 days
find "$BACKUP_DIR" -name "umami-*.dump" -mtime +7 -delete
```

Schedule with cron: `0 2 * * * /path/to/backup.sh`
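Before trusting `find ... -delete` with real backups, the retention rule can be dry-run with `-print`. A sketch against a throwaway directory (GNU `touch -d` is assumed; the filenames are invented examples):

```shell
# Dry-run the 7-day retention rule in a temporary directory
BACKUP_DIR=$(mktemp -d)
touch -d "10 days ago" "$BACKUP_DIR/umami-old.dump"   # should match
touch "$BACKUP_DIR/umami-new.dump"                    # should survive

# -print instead of -delete: list what WOULD be removed
expired=$(find "$BACKUP_DIR" -name "umami-*.dump" -mtime +7 -print)
echo "$expired"

rm -rf "$BACKUP_DIR"
```

Only the file older than seven days is listed; once the output looks right, swap `-print` back to `-delete` in the real script.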
Restore Database
```shell
# From SQL file
psql -U umami -h localhost umami < umami_backup.sql

# From compressed file
gunzip -c umami_backup.sql.gz | psql -U umami -h localhost umami

# From custom format
pg_restore -U umami -h localhost -d umami umami_backup.dump

# Docker restore
docker compose exec -T db psql -U umami umami < backup.sql
```
Monitoring
Check Database Size
```sql
-- Database size
SELECT pg_size_pretty(pg_database_size('umami'));

-- Table sizes
SELECT
  schemaname,
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;
```
Active Connections
```sql
-- Current connections
SELECT count(*) FROM pg_stat_activity WHERE datname = 'umami';

-- Connection details
SELECT
  pid,
  usename,
  application_name,
  client_addr,
  state,
  query
FROM pg_stat_activity
WHERE datname = 'umami';
```
Slow Queries
Enable query logging in `postgresql.conf`:

```ini
log_min_duration_statement = 1000  # Log queries slower than 1s
```

View slow queries (the column names below are for PostgreSQL 13 and later; older versions use `total_time`, `mean_time`, and `max_time`):

```sql
SELECT
  query,
  calls,
  total_exec_time,
  mean_exec_time,
  max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
Enable the `pg_stat_statements` extension first: `CREATE EXTENSION IF NOT EXISTS pg_stat_statements;`
Maintenance
Vacuum
Regular vacuuming keeps the database performant:
```sql
-- Manual vacuum
VACUUM ANALYZE;

-- Vacuum specific table
VACUUM ANALYZE website_event;

-- Full vacuum (requires exclusive lock)
VACUUM FULL;
```
Reindex
Reindex to optimize index performance:
```sql
-- Reindex database
REINDEX DATABASE umami;

-- Reindex table
REINDEX TABLE website_event;
```
Data Retention
Implement data retention policies:
```sql
-- Delete old events (older than 1 year)
DELETE FROM website_event
WHERE created_at < NOW() - INTERVAL '1 year';

-- Delete associated event data
DELETE FROM event_data
WHERE created_at < NOW() - INTERVAL '1 year';

-- Vacuum after large deletes
VACUUM ANALYZE website_event;
VACUUM ANALYZE event_data;
```
Always backup before running delete operations!
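Deleting a year of events in a single statement can hold locks for a long time and create heavy bloat on a busy instance. One common pattern is to delete in batches keyed on `ctid` until no rows remain; the sketch below only builds the SQL (the 10000-row batch size is an arbitrary starting point):

```shell
# Sketch: build a batched DELETE to run repeatedly until 0 rows are affected
# (batch size is an assumption; tune it against your hardware)
BATCH_SIZE=10000
SQL="DELETE FROM website_event
WHERE ctid IN (
  SELECT ctid FROM website_event
  WHERE created_at < NOW() - INTERVAL '1 year'
  LIMIT ${BATCH_SIZE}
);"
echo "$SQL"
# In practice: loop `psql -c "$SQL"` until psql reports DELETE 0,
# then VACUUM ANALYZE the table as shown above.
```

Batching keeps each transaction short, so autovacuum can keep up and other queries are not blocked for the duration of the purge.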
Troubleshooting
Connection problems:

- Check if PostgreSQL is running: `sudo systemctl status postgresql`
- Verify PostgreSQL is listening: `sudo netstat -plnt | grep 5432`
- Check `pg_hba.conf` for connection permissions: `sudo nano /etc/postgresql/15/main/pg_hba.conf`

Authentication problems:

- Verify credentials: `psql -U umami -h localhost -d umami`
- Reset the password: `ALTER USER umami WITH PASSWORD 'new-password';`

Disk space problems:

- Check disk usage: `df -h /var/lib/postgresql`
- Identify large tables:

```sql
SELECT
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename))
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;
```

- Consider archiving old data or increasing disk space.

Slow queries:

- Enable query logging: `ALTER SYSTEM SET log_min_duration_statement = 1000;` followed by `SELECT pg_reload_conf();`
- Analyze the query plan: `EXPLAIN ANALYZE SELECT * FROM website_event WHERE website_id = 'xxx';`
- Update statistics: `ANALYZE;`
Managed PostgreSQL Services
For production deployments, consider managed services:
- AWS RDS: Fully managed PostgreSQL with automated backups and scaling.
- Google Cloud SQL: Managed PostgreSQL with high availability and replication.
- Azure Database: PostgreSQL as a service with built-in security and monitoring.
- DigitalOcean: Managed databases with straightforward pricing and setup.
Managed Service Configuration
```shell
# Example with SSL required
DATABASE_URL=postgresql://user:pass@managed-host:5432/umami?sslmode=require
```
Next Steps
- ClickHouse: Use ClickHouse for high-volume analytics
- Migrations: Understand database migrations
- Environment Variables: Configure Umami settings
- Troubleshooting: Solve common issues