
Running in Production

This guide covers best practices, security considerations, and optimization strategies for running Umami in production environments.

Production Checklist

Before deploying to production:
1. Security

Change Default Password

Change the default admin password (username: admin, password: umami) immediately after first login.

Secure APP_SECRET

Use a strong, random APP_SECRET (32+ characters).
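One way to generate such a secret, assuming openssl is installed:

```shell
# Print a 64-character random hex string suitable for APP_SECRET
openssl rand -hex 32
```

Copy the output into your .env as APP_SECRET=<value>.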

Database Credentials

Use strong passwords for database users.

Enable HTTPS

Configure SSL/TLS certificates for encrypted connections.

2. Configuration

  • Set NODE_ENV=production
  • Configure DATABASE_URL for production database
  • Set up proper HOSTNAME and PORT
  • Enable FORCE_SSL=1 if using HTTPS
  • Configure custom TRACKER_SCRIPT_NAME to avoid ad blockers
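Pulled together, the items above might look like this in a production .env (all values here are placeholders):

```shell
# .env — example values only; substitute your own
NODE_ENV=production
DATABASE_URL=postgresql://umami:strong-password@localhost:5432/umami
APP_SECRET=replace-with-a-long-random-string
HOSTNAME=0.0.0.0
PORT=3000
FORCE_SSL=1
TRACKER_SCRIPT_NAME=custom-script
```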

3. Infrastructure

  • Set up reverse proxy (nginx, Caddy, Traefik)
  • Configure database backups
  • Set up monitoring and alerting
  • Implement log aggregation
  • Configure firewall rules

4. Performance

  • Enable connection pooling
  • Configure caching (Redis optional)
  • Set up CDN for static assets
  • Optimize database indexes
  • Monitor resource usage

Reverse Proxy Setup

Use a reverse proxy for SSL termination, load balancing, and security.

Nginx

1. Install Nginx

sudo apt update
sudo apt install nginx

2. Create Configuration

Create /etc/nginx/sites-available/umami:
server {
    listen 80;
    server_name analytics.yourdomain.com;

    # Redirect HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name analytics.yourdomain.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/analytics.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/analytics.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Security Headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Logging
    access_log /var/log/nginx/umami_access.log;
    error_log /var/log/nginx/umami_error.log;

    # Proxy Configuration
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        proxy_pass http://localhost:3000;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}

3. Enable Site

sudo ln -s /etc/nginx/sites-available/umami /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Caddy

Caddy automatically handles SSL certificates:
Caddyfile
analytics.yourdomain.com {
    reverse_proxy localhost:3000

    # Security headers
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        X-XSS-Protection "1; mode=block"
    }

    # Enable compression
    encode gzip

    # Cache static assets
    @static {
        path *.js *.css *.png *.jpg *.jpeg *.gif *.ico *.svg
    }
    header @static Cache-Control "public, max-age=31536000, immutable"
}
Start Caddy:
caddy run --config Caddyfile

Traefik

For Docker deployments with automatic SSL:
docker-compose.yml
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=your-email@example.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt:/letsencrypt

  umami:
    image: ghcr.io/umami-software/umami:latest
    environment:
      DATABASE_URL: postgresql://umami:umami@db:5432/umami
      APP_SECRET: your-secret
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.umami.rule=Host(`analytics.yourdomain.com`)"
      - "traefik.http.routers.umami.entrypoints=websecure"
      - "traefik.http.routers.umami.tls.certresolver=myresolver"

SSL/TLS Configuration

Let’s Encrypt with Certbot

1. Install Certbot

sudo apt install certbot python3-certbot-nginx

2. Obtain Certificate

sudo certbot --nginx -d analytics.yourdomain.com
Certbot will:
  • Verify domain ownership
  • Obtain SSL certificate
  • Configure nginx automatically
  • Set up auto-renewal

3. Verify Auto-Renewal

sudo certbot renew --dry-run
Let’s Encrypt certificates are valid for 90 days. Certbot automatically renews them.

Manual SSL Certificate

If using purchased certificates:
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;

Process Management

Using PM2

PM2 keeps Umami running and restarts it on crashes:
1. Install PM2

npm install -g pm2

2. Start Umami

cd /path/to/umami
pm2 start pnpm --name umami -- start

3. Configure Auto-Start

pm2 startup
pm2 save
This creates a systemd service that starts PM2 on boot.

4. Manage Process

# View status
pm2 status

# View logs
pm2 logs umami

# Restart
pm2 restart umami

# Stop
pm2 stop umami

# Monitor
pm2 monit

Using systemd

Create a systemd service:
/etc/systemd/system/umami.service
[Unit]
Description=Umami Analytics
After=network.target postgresql.service

[Service]
Type=simple
User=umami
WorkingDirectory=/opt/umami
Environment="NODE_ENV=production"
Environment="PORT=3000"
EnvironmentFile=/opt/umami/.env
ExecStart=/usr/bin/pnpm start
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=umami

[Install]
WantedBy=multi-user.target
Manage the service:
# Enable and start
sudo systemctl enable umami
sudo systemctl start umami

# Check status
sudo systemctl status umami

# View logs
sudo journalctl -u umami -f

# Restart
sudo systemctl restart umami

Performance Optimization

Database Optimization

Use PgBouncer for connection pooling:
sudo apt install pgbouncer
Configure /etc/pgbouncer/pgbouncer.ini:
[databases]
umami = host=localhost port=5432 dbname=umami

[pgbouncer]
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
Update DATABASE_URL:
DATABASE_URL=postgresql://umami:password@localhost:6432/umami
Enable Redis for caching and sessions:
# Install Redis
sudo apt install redis-server
sudo systemctl enable redis-server
Configure Umami:
.env
REDIS_URL=redis://localhost:6379
Benefits:
  • Faster session management
  • Reduced database load
  • Better performance
Umami creates the required indexes automatically during migrations, but you can verify them:
-- Check indexes
\di

-- Analyze query performance
EXPLAIN ANALYZE SELECT * FROM website_event 
WHERE website_id = 'xxx' 
AND created_at > NOW() - INTERVAL '7 days';

Application Optimization

Enable Production Mode

NODE_ENV=production
Optimizes Next.js for performance

Increase Memory

NODE_OPTIONS="--max-old-space-size=4096"
Allocate more memory for Node.js
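You can confirm the raised limit took effect (assuming node is installed) by printing V8's heap size limit:

```shell
# Print the V8 heap size limit in MB under the raised limit
NODE_OPTIONS="--max-old-space-size=4096" node -e \
  'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024))'
```

The reported limit should be at least 4096 MB (V8 adds some overhead on top of the old-space size).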

Enable Compression

Enable gzip in your reverse proxy (gzip on in nginx, encode gzip in Caddy)

CDN for Assets

Serve static assets from CDN for global performance

Monitoring and Logging

Application Logs

# Real-time logs
pm2 logs umami

# Last 100 lines
pm2 logs umami --lines 100

# Log files location
~/.pm2/logs/

Database Monitoring

-- Active connections
SELECT count(*) FROM pg_stat_activity WHERE datname = 'umami';

-- Database size
SELECT pg_size_pretty(pg_database_size('umami'));

-- Slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time and mean_exec_time)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

Health Checks

Umami provides a heartbeat endpoint:
curl http://localhost:3000/api/heartbeat
Returns 200 OK if healthy. Monitor it from root's crontab (restarting the service requires root privileges):
*/5 * * * * curl -f http://localhost:3000/api/heartbeat || systemctl restart umami

Backup Strategy

1. Database Backups

Daily automated backups:
backup.sh
#!/bin/bash
BACKUP_DIR="/backups/umami"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup
pg_dump -U umami -Fc umami > $BACKUP_DIR/umami_$DATE.dump

# Keep only last 30 days
find $BACKUP_DIR -name "umami_*.dump" -mtime +30 -delete
Schedule with cron:
0 2 * * * /opt/scripts/backup.sh
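You can rehearse the 30-day retention rule from backup.sh in a scratch directory before trusting it with real backups (a sketch, assuming GNU touch):

```shell
# Rehearse the retention rule against simulated backup files
BACKUP_DIR=$(mktemp -d)
touch -d "40 days ago" "$BACKUP_DIR/umami_old.dump"   # simulated stale backup
touch "$BACKUP_DIR/umami_new.dump"                    # simulated fresh backup
find "$BACKUP_DIR" -name "umami_*.dump" -mtime +30 -delete
ls "$BACKUP_DIR"    # only umami_new.dump should remain
```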

2. Configuration Backups

Backup .env and configuration files:
tar -czf config_backup.tar.gz .env docker-compose.yml nginx.conf

3. Offsite Storage

Upload to S3, Google Cloud Storage, or another server:
# Using rclone
rclone copy /backups/umami remote:umami-backups
Test your backup restoration process regularly to ensure backups are valid.
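A restore rehearsal might look like the following sketch; the throwaway database name and dump filename are examples, and website_event is one of Umami's tables:

```shell
# Restore the custom-format dump into a throwaway database, then spot-check a table
createdb -U umami umami_restore_test                 # example database name
pg_restore -U umami -d umami_restore_test \
  /backups/umami/umami_20240101_020000.dump          # example dump filename
psql -U umami -d umami_restore_test \
  -c "SELECT count(*) FROM website_event;"
dropdb -U umami umami_restore_test                   # clean up afterwards
```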

Security Best Practices

Firewall Configuration

# Allow SSH, HTTP, HTTPS
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable

# Block direct access to app port
sudo ufw deny 3000

Regular Updates

Keep Umami and dependencies updated:
git pull
pnpm install
pnpm build
pm2 restart umami

Secure Database

  • Use strong passwords
  • Limit network access
  • Enable SSL connections
  • Regular security updates

Monitor Access

  • Review nginx access logs
  • Set up fail2ban
  • Monitor failed logins
  • Use intrusion detection

Scaling Strategies

Vertical Scaling

Increase resources on single server:
  • Upgrade CPU and RAM
  • Use faster SSD storage
  • Optimize database configuration
  • Enable Redis caching

Horizontal Scaling

For very high traffic:
  1. Load Balancer: Distribute traffic across multiple Umami instances
  2. Read Replicas: Use PostgreSQL read replicas for queries
  3. ClickHouse: Switch to ClickHouse for analytics data
  4. Redis Cluster: Distributed caching
Most self-hosted installations won’t need horizontal scaling. A properly configured single server can handle millions of pageviews.

Troubleshooting

High memory usage:
  • Increase Node.js memory limit
  • Check for memory leaks in logs
  • Restart application periodically
  • Optimize database queries
  • Enable garbage collection: NODE_OPTIONS="--max-old-space-size=4096 --expose-gc"

Slow dashboard:
  • Enable Redis caching
  • Use connection pooling
  • Check database query performance
  • Verify server resources aren’t exhausted
  • Enable nginx caching for static assets

Database connection errors:
  • Check PostgreSQL is running
  • Verify DATABASE_URL is correct
  • Increase max_connections in PostgreSQL
  • Use connection pooling (PgBouncer)
  • Check network connectivity

SSL certificate issues:
  • Verify certificate paths in nginx config
  • Check certificate expiration: openssl x509 -in cert.pem -noout -dates
  • Test renewal: sudo certbot renew --dry-run
  • Check DNS records point to server

Next Steps

Environment Variables

Advanced configuration options

Database Setup

Optimize database performance

Docker Installation

Production Docker deployment

Quickstart

Getting started guide
