rs-tunnel uses Drizzle ORM for database schema management and migrations.

Migration System Overview

Drizzle provides a type-safe, SQL-first migration system:
  • Schema definition: TypeScript schema in apps/api/src/db/schema.ts
  • Migration files: SQL files in apps/api/drizzle/
  • Migration runner: Node.js script in apps/api/src/db/migrate.ts
  • Metadata: JSON journal in apps/api/drizzle/meta/
Migrations are idempotent and safe to run multiple times. Drizzle tracks applied migrations automatically.
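
The pieces above are typically wired together by a drizzle-kit config file. The following is a minimal sketch assuming a recent drizzle-kit `defineConfig` API; the repo's actual config may differ:

```typescript
// drizzle.config.ts — illustrative sketch, not the repo's actual file.
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'postgresql',          // rs-tunnel uses PostgreSQL
  schema: './src/db/schema.ts',   // TypeScript schema definition
  out: './drizzle',               // generated SQL migrations + meta/ journal
  dbCredentials: {
    // e.g. postgres://postgres:postgres@localhost:23432/rs_tunnel
    url: process.env.DATABASE_URL!,
  },
});
```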

Database Schema Structure

The rs-tunnel database consists of 10 tables:

Core Tables

users

User accounts from Slack OAuth.

Key fields:
  • id (UUID primary key)
  • email (unique)
  • slack_user_id
  • slack_team_id
  • status (active/inactive)

tunnels

Tunnel lifecycle and configuration.

Key fields:
  • id (UUID)
  • user_id (foreign key)
  • slug (tunnel subdomain)
  • hostname (full domain)
  • cf_tunnel_id (Cloudflare tunnel ID)
  • cf_dns_record_id (DNS record ID)
  • status (creating/active/stopped)

tunnel_leases

Heartbeat-based tunnel leases.

Key fields:
  • tunnel_id (foreign key, unique)
  • last_heartbeat_at
  • expires_at
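
The lease mechanics implied by these two timestamps can be sketched in plain TypeScript. The TTL value below is a hypothetical constant for illustration, not taken from rs-tunnel's configuration:

```typescript
// Sketch of heartbeat-lease bookkeeping; LEASE_TTL_MS is an assumed value.
const LEASE_TTL_MS = 60_000; // hypothetical 60s TTL

interface TunnelLease {
  tunnelId: string;
  lastHeartbeatAt: Date;
  expiresAt: Date;
}

// On each heartbeat, the lease's expiry is pushed forward by the TTL.
function renewLease(lease: TunnelLease, now: Date): TunnelLease {
  return {
    ...lease,
    lastHeartbeatAt: now,
    expiresAt: new Date(now.getTime() + LEASE_TTL_MS),
  };
}

// A lease whose expires_at has passed is eligible for cleanup.
function isExpired(lease: TunnelLease, now: Date): boolean {
  return now.getTime() >= lease.expiresAt.getTime();
}

const t0 = new Date('2024-01-01T00:00:00Z');
const lease = renewLease(
  { tunnelId: 'abc', lastHeartbeatAt: t0, expiresAt: t0 },
  t0,
);
```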

oauth_sessions

OAuth flow state tracking.

Key fields:
  • state (OAuth state parameter)
  • code_challenge (PKCE challenge)
  • login_code (6-digit code)
  • status (pending/authorized/consumed)
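
A 6-digit login_code can be generated with Node's built-in crypto module. This is a sketch of a common pattern, not necessarily how rs-tunnel implements it:

```typescript
// Sketch: generate a 6-digit numeric login code with a CSPRNG.
import { randomInt } from 'node:crypto';

function generateLoginCode(): string {
  // randomInt is cryptographically secure; padStart keeps leading
  // zeros so codes like "004217" stay 6 characters long.
  return randomInt(0, 1_000_000).toString().padStart(6, '0');
}

const code = generateLoginCode();
```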

Authentication Tables

  • refresh_tokens: Long-lived refresh tokens (30 days default)
  • audit_logs: User action audit trail
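
The token_hash column name suggests refresh tokens are stored hashed rather than in plaintext. The sketch below shows that common pattern; treating SHA-256 as the hash function is an assumption about rs-tunnel's approach:

```typescript
// Sketch: store only a SHA-256 hash of the refresh token; the plaintext
// token is handed to the client once and never persisted.
import { createHash, randomBytes } from 'node:crypto';

function newRefreshToken(): { token: string; tokenHash: string } {
  const token = randomBytes(32).toString('base64url'); // sent to client
  const tokenHash = createHash('sha256').update(token).digest('hex'); // stored in refresh_tokens
  return { token, tokenHash };
}

const { token, tokenHash } = newRefreshToken();
```

On lookup, the API would hash the presented token and compare against the stored digest, so a database leak does not expose usable tokens.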

Metrics Tables

  • tunnel_live_metrics: Real-time tunnel metrics (ttl, opn, latency, etc.)
  • tunnel_metrics: Historical metrics snapshots
  • tunnel_requests: HTTP request logs (method, path, status, duration)

Cleanup Tables

  • cleanup_jobs: Background cleanup tasks for stale tunnels
import { pgTable, uuid, varchar, timestamp, integer, text, boolean, bigint, jsonb } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: uuid('id').defaultRandom().primaryKey(),
  email: varchar('email', { length: 255 }).notNull().unique(),
  slackUserId: varchar('slack_user_id', { length: 255 }).notNull(),
  slackTeamId: varchar('slack_team_id', { length: 255 }).notNull(),
  status: varchar('status', { length: 32 }).notNull().default('active'),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
});

export const tunnels = pgTable('tunnels', {
  id: uuid('id').defaultRandom().primaryKey(),
  userId: uuid('user_id').notNull().references(() => users.id, { onDelete: 'cascade' }),
  slug: varchar('slug', { length: 32 }).notNull(),
  hostname: varchar('hostname', { length: 255 }).notNull(),
  requestedPort: integer('requested_port').notNull(),
  cfTunnelId: varchar('cf_tunnel_id', { length: 255 }),
  cfDnsRecordId: varchar('cf_dns_record_id', { length: 255 }),
  status: varchar('status', { length: 32 }).notNull().default('creating'),
  lastError: text('last_error'),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
  stoppedAt: timestamp('stopped_at', { withTimezone: true }),
});

export const tunnelLeases = pgTable('tunnel_leases', {
  id: uuid('id').defaultRandom().primaryKey(),
  tunnelId: uuid('tunnel_id').notNull().references(() => tunnels.id, { onDelete: 'cascade' }),
  lastHeartbeatAt: timestamp('last_heartbeat_at', { withTimezone: true }).notNull(),
  expiresAt: timestamp('expires_at', { withTimezone: true }).notNull(),
});

// ... other tables (refresh_tokens, oauth_sessions, metrics, etc.)

Running Migrations

Prerequisites

  1. PostgreSQL is running and accessible
  2. DATABASE_URL environment variable is set
  3. Database exists (created automatically by PostgreSQL Docker container)
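
A quick sanity check of DATABASE_URL can be done with Node's built-in URL parser before running migrations. A sketch:

```typescript
// Sketch: validate the pieces of a postgres connection string.
function checkDatabaseUrl(raw: string): void {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'postgres:' && url.protocol !== 'postgresql:') {
    throw new Error(`unexpected protocol: ${url.protocol}`);
  }
  if (!url.port) throw new Error('missing port');
  if (url.pathname.length <= 1) throw new Error('missing database name');
}

const example = 'postgres://postgres:postgres@localhost:23432/rs_tunnel';
```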

Run Migrations

Step 1: Start PostgreSQL

docker compose up -d postgres
Wait for database to be ready:
docker compose exec postgres pg_isready -U postgres
Step 2: Set DATABASE_URL

Ensure .env contains:
DATABASE_URL=postgres://postgres:postgres@localhost:23432/rs_tunnel
Use port 23432 when connecting from host machine (Docker exposes it on this port).
Step 3: Run migration command

pnpm --filter @ripeseed/api db:migrate
Expected output:
Migrations applied
Step 4: Verify schema

Connect to database:
docker compose exec postgres psql -U postgres rs_tunnel
List tables:
\dt
You should see all 10 tables:
             List of relations
 Schema |        Name         | Type  |  Owner   
--------+---------------------+-------+----------
 public | audit_logs          | table | postgres
 public | cleanup_jobs        | table | postgres
 public | oauth_sessions      | table | postgres
 public | refresh_tokens      | table | postgres
 public | tunnel_leases       | table | postgres
 public | tunnel_live_metrics | table | postgres
 public | tunnel_metrics      | table | postgres
 public | tunnel_requests     | table | postgres
 public | tunnels             | table | postgres
 public | users               | table | postgres

Migration Files

All migration SQL files are in apps/api/drizzle/:
CREATE EXTENSION IF NOT EXISTS "pgcrypto";

CREATE TABLE IF NOT EXISTS users (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  email varchar(255) NOT NULL UNIQUE,
  slack_user_id varchar(255) NOT NULL,
  slack_team_id varchar(255) NOT NULL,
  status varchar(32) NOT NULL DEFAULT 'active',
  created_at timestamptz NOT NULL DEFAULT now(),
  updated_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS tunnels (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id uuid NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  slug varchar(32) NOT NULL,
  hostname varchar(255) NOT NULL,
  requested_port integer NOT NULL,
  cf_tunnel_id varchar(255),
  cf_dns_record_id varchar(255),
  status varchar(32) NOT NULL DEFAULT 'creating',
  last_error text,
  created_at timestamptz NOT NULL DEFAULT now(),
  updated_at timestamptz NOT NULL DEFAULT now(),
  stopped_at timestamptz
);

-- Unique index: only one active tunnel per hostname
CREATE UNIQUE INDEX IF NOT EXISTS tunnels_hostname_idx 
  ON tunnels(hostname) 
  WHERE status != 'stopped';

-- ... more tables and indexes
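
The partial index above means uniqueness is only enforced among non-stopped tunnels, so a hostname becomes reusable once its tunnel stops. The invariant can be illustrated in plain TypeScript:

```typescript
// Sketch of the invariant enforced by tunnels_hostname_idx:
// at most one tunnel per hostname whose status is not 'stopped'.
type TunnelStatus = 'creating' | 'active' | 'stopped';

interface TunnelRow {
  hostname: string;
  status: TunnelStatus;
}

function canCreate(existing: TunnelRow[], hostname: string): boolean {
  // Stopped tunnels don't count against the hostname.
  return !existing.some((t) => t.hostname === hostname && t.status !== 'stopped');
}

const rows: TunnelRow[] = [
  { hostname: 'demo.example.com', status: 'stopped' },
  { hostname: 'live.example.com', status: 'active' },
];
```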

Migration Runner

The migration script is in apps/api/src/db/migrate.ts:1-20:
migrate.ts
import { migrate } from 'drizzle-orm/node-postgres/migrator';
import path from 'node:path';
import { fileURLToPath } from 'node:url';

import { db, pool } from './client.js';

async function run(): Promise<void> {
  const dirname = path.dirname(fileURLToPath(import.meta.url));
  const migrationsFolder = path.resolve(dirname, '../../drizzle');
  await migrate(db, { migrationsFolder });
  await pool.end();
  console.log('Migrations applied');
}

run().catch(async (error) => {
  console.error('Migration failed', error);
  await pool.end();
  process.exit(1);
});
How it works:
  1. Resolves migration folder path: apps/api/drizzle/
  2. Calls migrate() from Drizzle ORM
  3. Drizzle reads drizzle/meta/_journal.json to determine which migrations to apply
  4. Executes unapplied SQL files in order
  5. Updates migration tracking table
  6. Closes database connection
Drizzle creates a __drizzle_migrations table to track which migrations have been applied.
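
The journal's shape looks roughly like the following. The field names here reflect observed drizzle-kit output and may vary between versions:

```typescript
// Sketch of drizzle's _journal.json shape; exact fields vary by drizzle-kit version.
interface JournalEntry {
  idx: number;         // migration order
  when: number;        // unix ms timestamp of generation
  tag: string;         // matches the SQL filename, e.g. "0000_initial"
  breakpoints: boolean;
}

interface Journal {
  version: string;
  dialect: string;     // "postgresql" for rs-tunnel
  entries: JournalEntry[];
}

const sample: Journal = {
  version: '7',
  dialect: 'postgresql',
  entries: [{ idx: 0, when: 1700000000000, tag: '0000_initial', breakpoints: true }],
};

// Migrations are applied in idx order; entries already recorded in
// __drizzle_migrations are skipped on subsequent runs.
const pending = sample.entries
  .sort((a, b) => a.idx - b.idx)
  .map((e) => `${e.tag}.sql`);
```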

Generating New Migrations

When you modify schema.ts, generate a new migration:
Step 1: Update schema

Edit apps/api/src/db/schema.ts with your changes.
Step 2: Generate migration

pnpm --filter @ripeseed/api db:generate
This creates a new SQL file in apps/api/drizzle/.
Step 3: Review migration

Inspect the generated SQL to ensure it matches your intent.
Step 4: Apply migration

pnpm --filter @ripeseed/api db:migrate
Always review generated migrations before applying them to production. Drizzle may not handle all schema changes perfectly (e.g., renaming columns).

Database Indexes

rs-tunnel uses several indexes for query performance:

Unique Indexes

  • tunnels_hostname_idx: Ensures only one active tunnel per hostname (partial index: WHERE status != 'stopped')
  • oauth_sessions_state_idx: Fast OAuth state lookup
  • oauth_sessions_login_code_idx: Fast login code validation
  • refresh_tokens_token_hash_idx: Fast refresh token validation
  • tunnel_leases_tunnel_id_idx: One lease per tunnel

Performance Indexes

  • tunnels_user_status_idx: User’s active tunnels lookup (schema.ts:88)
  • tunnels_slug_status_idx: Slug availability checks (schema.ts:89)
  • tunnel_leases_expires_at_idx: Expired lease cleanup (schema.ts:105)
  • tunnel_live_metrics_received_at_idx: Recent metrics queries (schema.ts:129)
  • tunnel_metrics_tunnel_captured_at_idx: Historical metrics (schema.ts:152)
  • tunnel_requests_tunnel_ingested_at_idx: Request log queries (schema.ts:174)

Foreign Key Relationships

Cascade delete rules:
  • Deleting a user cascades to:
    • refresh_tokens
    • tunnels (and all child tables)
  • Deleting a tunnel cascades to:
    • tunnel_leases
    • tunnel_live_metrics
    • tunnel_metrics
    • tunnel_requests
    • cleanup_jobs
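
The cascade behavior can be sketched with a small in-memory model (illustrative only; in production the database's ON DELETE CASCADE rules enforce this):

```typescript
// Sketch: ON DELETE CASCADE modeled in memory. Deleting a user removes
// their tunnels, and removing a tunnel removes its dependent lease rows.
interface Db {
  users: { id: string }[];
  tunnels: { id: string; userId: string }[];
  tunnelLeases: { tunnelId: string }[];
}

function deleteUser(db: Db, userId: string): Db {
  const doomedTunnels = new Set(
    db.tunnels.filter((t) => t.userId === userId).map((t) => t.id),
  );
  return {
    users: db.users.filter((u) => u.id !== userId),
    tunnels: db.tunnels.filter((t) => t.userId !== userId),
    tunnelLeases: db.tunnelLeases.filter((l) => !doomedTunnels.has(l.tunnelId)),
  };
}

const before: Db = {
  users: [{ id: 'u1' }, { id: 'u2' }],
  tunnels: [{ id: 't1', userId: 'u1' }, { id: 't2', userId: 'u2' }],
  tunnelLeases: [{ tunnelId: 't1' }, { tunnelId: 't2' }],
};
const after = deleteUser(before, 'u1');
```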

Troubleshooting

Missing migration journal

Cause: Migration metadata is missing.

Solution:
  1. Verify apps/api/drizzle/meta/_journal.json exists in the repository
  2. If missing, regenerate migrations:
    pnpm --filter @ripeseed/api db:generate
    
  3. This file should be committed to version control
Connection string errors

Cause: DATABASE_URL is not set or invalid.

Solution:
  1. Check .env file exists and is loaded
  2. Verify format: postgres://user:password@host:port/database
  3. Run migration from API directory:
    cd apps/api
    pnpm db:migrate
    
Database does not exist

Cause: Database not created.

Solution: Create the database manually:
docker compose exec postgres psql -U postgres -c "CREATE DATABASE rs_tunnel;"
Or use the Docker environment variable POSTGRES_DB=rs_tunnel (already set in docker-compose.yml).
Cannot connect to PostgreSQL

Cause: Database not ready or network issue.

Solution:
  1. Check PostgreSQL is running:
    docker compose ps postgres
    
  2. Test connection:
    docker compose exec postgres pg_isready -U postgres
    
  3. Check PostgreSQL logs:
    docker compose logs postgres
    
"Already exists" errors

Cause: Migration already applied, or manual schema creation.

Solution: This is usually safe to ignore; migrations use CREATE TABLE IF NOT EXISTS. If you need to start fresh:
# WARNING: Deletes all data
docker compose down -v
docker compose up -d postgres
pnpm --filter @ripeseed/api db:migrate

Schema Validation

Drizzle provides type-safe schema validation at runtime. The schema is defined in TypeScript and enforced by Zod in the API routes. Example validation from env.ts:16-52:
import { z } from 'zod';

const envSchema = z.object({
  DATABASE_URL: z.string().min(1),
  PORT: z.coerce.number().int().positive().default(8080),
  // ... other validations
});

export const env = envSchema.parse(process.env);
This ensures configuration errors are caught at startup, not during runtime.

Production Best Practices

Never run migrations automatically in production. Always review and test migrations in staging first.

Migration Checklist

  1. Test in local environment
    pnpm --filter @ripeseed/api db:migrate
    
  2. Backup production database
    pg_dump -Fc $DATABASE_URL > backup_$(date +%Y%m%d_%H%M%S).dump
    
  3. Apply to staging
    DATABASE_URL=$STAGING_DB_URL pnpm --filter @ripeseed/api db:migrate
    
  4. Test application functionality
    • Verify API starts successfully
    • Test user authentication
    • Create/stop test tunnels
  5. Apply to production during maintenance window
    DATABASE_URL=$PROD_DB_URL pnpm --filter @ripeseed/api db:migrate
    
  6. Monitor application logs
    docker compose logs -f api
    

Rollback Strategy

Drizzle doesn’t have built-in rollback. For safety:
  1. Always backup before migrations
  2. Write reversible migrations when possible
  3. Test rollback in staging:
    # Restore from backup
    pg_restore -d $DATABASE_URL backup.dump
    

Next Steps

Prerequisites

System requirements and account setup

Environment Variables

Complete configuration reference
