
Database Setup

Umami requires PostgreSQL 12.14 or higher for storing analytics data. This guide covers database setup, optimization, and maintenance.

Supported Databases

PostgreSQL

Recommended. Version 12.14+ required. Full feature support and best performance.

MySQL

Deprecated. Limited support in v2+. Not recommended for new installations.
Umami v2+ focuses on PostgreSQL. All new features and optimizations target PostgreSQL.

PostgreSQL Installation

Linux (Ubuntu/Debian)

1. Install PostgreSQL

sudo apt update
sudo apt install postgresql postgresql-contrib
2. Start PostgreSQL

sudo systemctl start postgresql
sudo systemctl enable postgresql
3. Verify Installation

sudo systemctl status postgresql
psql --version

macOS

# Install PostgreSQL 15
brew install postgresql@15

# Start service
brew services start postgresql@15

# Verify
psql postgres

Windows

  1. Download installer from postgresql.org
  2. Run the installer
  3. Choose installation directory
  4. Set superuser password
  5. Select port (default: 5432)
  6. Complete installation

Docker

Run PostgreSQL in a container:
docker run -d \
  --name postgres \
  -e POSTGRES_DB=umami \
  -e POSTGRES_USER=umami \
  -e POSTGRES_PASSWORD=umami \
  -v postgres-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:15-alpine

Database Creation

Create Umami Database

1. Connect to PostgreSQL

sudo -u postgres psql
2. Create Database and User

-- Create database
CREATE DATABASE umami;

-- Create user with password
CREATE USER umami WITH PASSWORD 'your-secure-password';

-- Grant all privileges
GRANT ALL PRIVILEGES ON DATABASE umami TO umami;

-- Exit
\q
Replace your-secure-password with a strong password. Never use default passwords in production.
3. Test Connection

psql -U umami -d umami -h localhost
Enter the password when prompted.

Connection URL Format

Umami uses a PostgreSQL connection string:
DATABASE_URL=postgresql://username:password@hostname:port/database
Components:
  • username: Database user (e.g., umami)
  • password: User password
  • hostname: Server address (e.g., localhost, db, or IP)
  • port: PostgreSQL port (default: 5432)
  • database: Database name (e.g., umami)
Examples:
DATABASE_URL=postgresql://umami:mypassword@localhost:5432/umami
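If the password contains characters that are special in URLs (such as `@` or `:`), they must be percent-encoded or the connection string will not parse. A minimal sketch with made-up example credentials:

```shell
# Example values only; '@' becomes %40 and ':' becomes %3A in the URL
user="umami"
pass='p@ss:word'
encoded=$(printf '%s' "$pass" | sed -e 's/@/%40/g' -e 's/:/%3A/g')
url="postgresql://${user}:${encoded}@localhost:5432/umami"
echo "$url"   # → postgresql://umami:p%40ss%3Aword@localhost:5432/umami
```

For passwords with many special characters, generating one without them (letters and digits only) avoids the encoding step entirely.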

Schema Initialization

Umami automatically creates database tables on first build. When you run the build command, Umami:
  1. Connects to the database
  2. Runs Prisma migrations
  3. Creates all required tables
  4. Sets up indexes
  5. Creates default admin user
# For source installation
pnpm run build

# For Docker
docker compose up -d
The build process creates tables from the Prisma schema located at prisma/schema.prisma.
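Umami reads DATABASE_URL from the environment, commonly via a .env file in the project root, so it must be in place before the build runs. A sketch using a scratch directory and example credentials:

```shell
# Work in a throwaway directory; a real setup writes .env in the project root
workdir=$(mktemp -d)
cd "$workdir"

# Example credentials only — use your own user and password
cat > .env <<'EOF'
DATABASE_URL=postgresql://umami:your-secure-password@localhost:5432/umami
EOF

# Confirm the variable is present before running the build
grep '^DATABASE_URL=' .env
```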

Manual Migration

If automatic migration fails:
# Generate Prisma client
pnpm run build-db-client

# Run migrations
pnpm prisma migrate deploy

# Or reset database (WARNING: deletes all data)
pnpm prisma migrate reset

Verify Tables

Check that tables were created:
-- Connect to database
psql -U umami -d umami

-- List tables
\dt

-- Should show:
-- user
-- website
-- session
-- website_event
-- event_data
-- session_data
-- team
-- team_user
-- report
-- And more...

Database Schema

Umami’s database schema includes:
user: User accounts and authentication
  • user_id (UUID, primary key)
  • username (unique)
  • password (hashed with bcrypt)
  • role (admin, user, view-only)
  • created_at, updated_at
website: Tracked websites
  • website_id (UUID, primary key)
  • name, domain
  • share_id (for public dashboards)
  • user_id, team_id
  • Timestamps
session: Visitor sessions
  • session_id (UUID, primary key)
  • website_id
  • Browser, OS, device info
  • Location data (country, region, city)
  • Language, screen resolution
website_event: Pageviews and events
  • event_id (UUID, primary key)
  • website_id, session_id
  • url, referrer
  • event_type, event_name
  • Timestamp
event_data: Custom event properties
session_data: Custom session data
team: Team/organization management
team_user: Team membership
report: Custom reports
revenue: Revenue tracking (optional)

Performance Optimization

Indexes

Umami automatically creates indexes for common queries:
-- Check existing indexes
\di

-- Key indexes:
-- session(website_id, created_at)
-- session(website_id, created_at, browser)
-- session(website_id, created_at, country)
-- website_event(website_id, created_at)
-- And many more...
The Prisma schema defines all indexes. They’re created automatically during migration.

Vacuum and Analyze

Regularly maintain your database:
-- Analyze tables for query optimization
ANALYZE;

-- Vacuum to reclaim space
VACUUM;

-- Full vacuum (locks and rewrites tables; slower, use sparingly)
VACUUM FULL;

-- Autovacuum is on by default; re-enable it per table if it was turned off
ALTER TABLE session SET (autovacuum_enabled = true);

Connection Pooling

For high-traffic sites, use connection pooling:
Install PgBouncer:
sudo apt install pgbouncer
Configure /etc/pgbouncer/pgbouncer.ini:
[databases]
umami = host=localhost port=5432 dbname=umami

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
Update DATABASE_URL:
DATABASE_URL=postgresql://umami:password@localhost:6432/umami
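Pointing an existing .env at PgBouncer only requires changing the port. A sketch using a scratch file (the credentials and paths are examples):

```shell
# Example .env in a throwaway directory; sed swaps the direct PostgreSQL
# port (5432) for PgBouncer's listen port (6432)
workdir=$(mktemp -d)
echo 'DATABASE_URL=postgresql://umami:password@localhost:5432/umami' > "$workdir/.env"
sed -i 's/localhost:5432/localhost:6432/' "$workdir/.env"
cat "$workdir/.env"
```

Restart Umami after the change so the new connection string is picked up.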

Query Performance

Optimize slow queries:
-- Log queries that run longer than 1000 ms
ALTER DATABASE umami SET log_min_duration_statement = 1000;

-- Check slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the timing columns are total_exec_time/mean_exec_time)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Analyze query plan
EXPLAIN ANALYZE SELECT * FROM website_event WHERE website_id = 'xxx';

Backup and Restore

Backup Database

# Backup to SQL file
pg_dump -U umami -d umami > umami_backup.sql

# Compressed backup
pg_dump -U umami -d umami | gzip > umami_backup.sql.gz

Restore Database

# Restore from SQL file
psql -U umami -d umami < umami_backup.sql

# From compressed
gunzip -c umami_backup.sql.gz | psql -U umami -d umami
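The compress/decompress round trip can be verified without touching a database by running it on a throwaway file (a real backup would come from pg_dump):

```shell
# Scratch directory stands in for a real backup location
tmp=$(mktemp -d)
echo 'SELECT 1;' > "$tmp/umami_backup.sql"

# Compress, then stream-decompress exactly as in the restore command above
gzip "$tmp/umami_backup.sql"                        # creates umami_backup.sql.gz
gunzip -c "$tmp/umami_backup.sql.gz" > "$tmp/restored.sql"
cat "$tmp/restored.sql"                             # → SELECT 1;
```

Doing a periodic test restore like this (against a scratch database) is the only way to know a backup is actually usable.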

Automated Backups

Schedule regular backups with cron:
# Edit crontab
crontab -e

# Add daily backup at 2 AM
0 2 * * * pg_dump -U umami -Fc umami > /backups/umami_$(date +\%Y\%m\%d).dump

# Weekly job: delete dumps older than 30 days
0 3 * * 0 find /backups -name "umami_*.dump" -mtime +30 -delete
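The retention rule can be rehearsed safely in a scratch directory before putting it in cron (GNU touch/find assumed):

```shell
# Simulate a backup directory with one stale and one fresh dump
backup_dir=$(mktemp -d)
touch -d '40 days ago' "$backup_dir/umami_20240101.dump"   # older than 30 days
touch "$backup_dir/umami_today.dump"                        # fresh backup

# Same rule as the cron entry: remove dumps older than 30 days
find "$backup_dir" -name 'umami_*.dump' -mtime +30 -delete
ls "$backup_dir"                                            # → umami_today.dump
```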
Backup Best Practices:
  • Store backups on different server/disk
  • Test restore process regularly
  • Encrypt sensitive backups
  • Keep multiple backup versions
  • Document restore procedures

Data Retention

Clean Old Data

Remove old analytics data:
-- Delete events older than 1 year
DELETE FROM website_event 
WHERE created_at < NOW() - INTERVAL '1 year';

-- Delete old sessions
DELETE FROM session 
WHERE created_at < NOW() - INTERVAL '1 year';

-- Vacuum after deletion
VACUUM FULL;

Automated Cleanup

Create a cleanup script:
cleanup.sql
-- Clean up old data (90 days)
DELETE FROM website_event WHERE created_at < NOW() - INTERVAL '90 days';
DELETE FROM session WHERE created_at < NOW() - INTERVAL '90 days';
VACUUM ANALYZE;
Schedule with cron:
# Monthly cleanup
0 4 1 * * psql -U umami -d umami -f /path/to/cleanup.sql

Monitoring

Database Size

-- Database size
SELECT pg_size_pretty(pg_database_size('umami'));

-- Table sizes
SELECT 
  schemaname,
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

Active Connections

-- Current connections
SELECT count(*) FROM pg_stat_activity WHERE datname = 'umami';

-- Connection details
SELECT 
  pid,
  usename,
  application_name,
  client_addr,
  state,
  query
FROM pg_stat_activity
WHERE datname = 'umami';

Performance Stats

-- Cache hit ratio (should be > 90%; 100.0 forces floating-point division)
SELECT
  round(100.0 * sum(heap_blks_hit)
        / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2)
AS cache_hit_ratio
FROM pg_statio_user_tables;

-- Table statistics
SELECT * FROM pg_stat_user_tables WHERE schemaname = 'public';

Troubleshooting

Cannot Connect to Database

Check PostgreSQL is running:
sudo systemctl status postgresql
Verify port is open:
netstat -an | grep 5432
Check pg_hba.conf allows connections:
sudo nano /etc/postgresql/15/main/pg_hba.conf
Add:
host    all    all    0.0.0.0/0    md5
Too Many Connections

Increase max_connections in postgresql.conf:
max_connections = 200
Or use connection pooling (PgBouncer). Restart PostgreSQL after changing the setting:
sudo systemctl restart postgresql
Slow Queries

  • Add missing indexes
  • Run VACUUM ANALYZE
  • Check pg_stat_statements for slow queries
  • Optimize queries in application
  • Consider upgrading hardware
  • Use ClickHouse for very high volume
Database Growing Too Large

  • Delete old analytics data
  • Run VACUUM FULL to reclaim space
  • Archive old data to backup
  • Increase disk size
  • Set up data retention policy

Next Steps

Environment Variables

Configure DATABASE_URL and other settings

Running in Production

Production deployment guide

Docker Installation

Deploy with Docker and PostgreSQL

Source Installation

Build from source with PostgreSQL
