Autonome uses PostgreSQL with Drizzle ORM for type-safe database access. This guide covers database setup, migrations, and best practices for production deployments.

Database Requirements

  • PostgreSQL: Version 15 or higher
  • Extensions: None required (uses standard SQL)
  • SSL: Recommended for production
  • Connection Pooling: Handled by the underlying database driver; Drizzle delegates pooling to it

Schema Overview

The database schema includes:
  • "Models": AI trading model configurations
  • "Orders": Trading positions and closed trades (SSOT)
  • "PortfolioSnapshots": Historical portfolio value tracking
  • "Conversations": AI chat history and decision logs
  • "CryptoPrice": Real-time and historical price data
Table names use quoted identifiers with capital letters (e.g., "Models"). Always quote them in SQL queries to avoid case-sensitivity issues.

Drizzle Configuration

The drizzle.config.ts file defines the ORM settings:
drizzle.config.ts
import { env } from '@/env';
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: "postgresql",
  out: './drizzle',
  schema: './src/db/schema.ts',
  dbCredentials: {
    url: env.DATABASE_URL || "",
    ssl: {
      rejectUnauthorized: false,
    },
  },
})

Configuration Details

  • schema: TypeScript schema definitions at src/db/schema.ts
  • out: Generated SQL migrations stored in drizzle/
  • ssl: Enabled with rejectUnauthorized: false to accept self-signed certificates (use a CA-verified certificate where possible in production)

Database Connection

The application uses T3 Env for type-safe environment variable access:
src/env.ts
import { createEnv } from '@t3-oss/env-core'; // or the framework-specific @t3-oss/env-* package
import { z } from 'zod';

export const env = createEnv({
  server: {
    DATABASE_URL: z.string().url().optional(),
    // ... other variables
  },
  runtimeEnv: { ...process.env, ...import.meta.env },
  emptyStringAsUndefined: true,
});

Connection String Format

DATABASE_URL=postgresql://user:password@host:port/database?sslmode=require
Components:
  • user: Database username
  • password: Database password
  • host: Database server hostname or IP
  • port: PostgreSQL port (default: 5432)
  • database: Database name
  • sslmode: require, prefer, or disable
Never commit DATABASE_URL to version control. Use .env files locally and environment variables in production.
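For validation or safe logging, the components above can be pulled apart with the standard WHATWG URL API, with no extra dependencies. A minimal sketch (the helper name is illustrative, not part of the codebase):

```typescript
// Parse a PostgreSQL connection string into its components using the
// standard URL API. Purely illustrative.
function parseDatabaseUrl(raw: string) {
  const url = new URL(raw);
  return {
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    host: url.hostname,
    port: url.port || "5432",          // default PostgreSQL port
    database: url.pathname.slice(1),   // strip the leading "/"
    sslmode: url.searchParams.get("sslmode") ?? undefined,
  };
}

const parts = parseDatabaseUrl(
  "postgresql://autonome:secret@localhost:5432/autonome?sslmode=require"
);
console.log({ ...parts, password: "***" }); // redact the password when logging
```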

Migration Workflow

Generate Migrations

After modifying src/db/schema.ts, generate migration files:
bun run db:generate
This creates timestamped SQL files in drizzle/ like:
drizzle/
├── 0000_create_models.sql
├── 0001_create_orders.sql
└── meta/
    └── _journal.json

Apply Migrations

Run migrations against the database:
bun run db:migrate
This executes all pending migrations in order.

Push Schema (Development)

For rapid prototyping, push schema changes directly:
bun run db:push
db:push bypasses migrations and directly syncs schema. Use only in development or when you don’t need migration history.

Schema Management

Key Schema Rules

  1. Quoted Identifiers: All table names use capital letters and require quotes
    SELECT * FROM "Models" WHERE "id" = '...'
    
  2. Text IDs: Primary keys are TEXT (UUID), not SERIAL or INTEGER
    id: text("id").$defaultFn(() => crypto.randomUUID()).primaryKey()
    
  3. Monetary Fields: Stored as TEXT, cast to NUMERIC for calculations
    SELECT CAST("netPortfolio" AS NUMERIC) FROM "Orders"
    
  4. JSONB for Complex Data: Exit plans, AI reasoning stored as JSONB
    exitPlan: jsonb("exitPlan").$type<ExitPlan>()
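On the application side, rule 3 can be enforced before data ever reaches SQL. A hypothetical validation helper mirroring the NUMERIC cast (for real money math, prefer a decimal library over JavaScript floats):

```typescript
// App-side counterpart of CAST("netPortfolio" AS NUMERIC): validate a
// monetary TEXT value before arithmetic. Illustrative helper, not part
// of the actual codebase.
function toNumeric(value: string): number {
  const trimmed = value.trim();
  // Reject empty strings and anything that is not a plain decimal number,
  // since those would also make the SQL CAST fail.
  if (trimmed === "" || !/^-?\d+(\.\d+)?$/.test(trimmed)) {
    throw new Error(`Not a valid numeric string: ${JSON.stringify(value)}`);
  }
  return Number(trimmed);
}

toNumeric("10000");  // → 10000
toNumeric("123.45"); // → 123.45
```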
    

Example Schema Definition

src/db/schema.ts
import { pgTable, text, timestamp, jsonb } from 'drizzle-orm/pg-core';

// Illustrative shape for the exitPlan JSONB column; the real type is defined elsewhere
type ExitPlan = {
  takeProfit: string;
  stopLoss: string;
};

export const Models = pgTable("Models", {
  id: text("id").$defaultFn(() => crypto.randomUUID()).primaryKey(),
  variant: text("variant").notNull(),
  providerId: text("providerId").notNull(),
  modelId: text("modelId").notNull(),
  netPortfolio: text("netPortfolio").notNull(),
  createdAt: timestamp("createdAt").defaultNow().notNull(),
  updatedAt: timestamp("updatedAt").defaultNow().notNull(),
});

export const Orders = pgTable("Orders", {
  id: text("id").$defaultFn(() => crypto.randomUUID()).primaryKey(),
  modelId: text("modelId").notNull().references(() => Models.id),
  symbol: text("symbol").notNull(),
  side: text("side").notNull(),
  quantity: text("quantity").notNull(),
  entryPrice: text("entryPrice").notNull(),
  exitPrice: text("exitPrice"),
  exitPlan: jsonb("exitPlan").$type<ExitPlan>(),
  status: text("status").notNull(), // "OPEN" or "CLOSED"
  createdAt: timestamp("createdAt").defaultNow().notNull(),
  closedAt: timestamp("closedAt"),
});
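Because quantity and prices are stored as TEXT, any arithmetic means converting first. A hypothetical helper for realized P&L on a closed order; the "BUY"/"SELL" side values are an assumption for illustration, since the schema only documents the status values:

```typescript
// Hypothetical realized-P&L calculation for a CLOSED order. Field names
// match the Orders schema above; "BUY"/"SELL" side values are assumed.
interface ClosedOrder {
  side: string;        // "BUY" (long) or "SELL" (short), assumed
  quantity: string;    // TEXT in the database
  entryPrice: string;  // TEXT in the database
  exitPrice: string;   // TEXT in the database
}

function realizedPnl(order: ClosedOrder): number {
  const qty = Number(order.quantity);
  const entry = Number(order.entryPrice);
  const exit = Number(order.exitPrice);
  const direction = order.side === "SELL" ? -1 : 1; // shorts profit when price falls
  return (exit - entry) * qty * direction;
}

// A long position of 2 units bought at 100 and closed at 110 gains 20:
console.log(realizedPnl({ side: "BUY", quantity: "2", entryPrice: "100", exitPrice: "110" })); // prints 20
```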

Database Seeding

Seed initial data for development:
bun run db:seed
This runs scripts/seed.ts which:
  1. Clears existing data
  2. Creates default AI models (Apex, Trendsurfer, Contrarian, Sovereign)
  3. Optionally creates sample trades and positions

Seed Script Example

scripts/seed.ts
import { db } from '@/server/db';
import { Models, Orders } from '@/db/schema';

// Clear existing data; delete Orders first because of its foreign key to Models
await db.delete(Orders);
await db.delete(Models);

const defaultModels = [
  { variant: "Apex", providerId: "anthropic", modelId: "claude-3-opus" },
  { variant: "Trendsurfer", providerId: "openai", modelId: "gpt-4-turbo" },
  { variant: "Contrarian", providerId: "nim", modelId: "llama-3-70b" },
  { variant: "Sovereign", providerId: "mistral", modelId: "mistral-large" },
];

for (const model of defaultModels) {
  await db.insert(Models).values({
    ...model,
    netPortfolio: "10000",
  });
}

console.log("✅ Database seeded successfully");

Production Database Setup

Option 1: Managed PostgreSQL

Use a managed service for automatic backups, scaling, and monitoring:
  • Neon: Serverless PostgreSQL with free tier
  • Supabase: PostgreSQL with additional features
  • Railway: Simple managed PostgreSQL
  • AWS RDS: Enterprise-grade with multi-AZ
Neon Example:
  1. Create project at neon.tech
  2. Copy connection string
  3. Set DATABASE_URL environment variable
  4. Run migrations: bun run db:migrate

Option 2: Self-Hosted with Docker

Included in docker-compose.yml:
db:
  image: postgres:16-alpine
  container_name: autonome-db
  restart: always
  environment:
    POSTGRES_USER: autonome
    POSTGRES_PASSWORD: strong_password_here
    POSTGRES_DB: autonome
  volumes:
    - postgres_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U autonome -d autonome"]
    interval: 10s
    timeout: 5s
    retries: 5
Start with:
docker-compose up -d db

Database Initialization

Step 1: Create Database

If using self-hosted PostgreSQL:
psql -U postgres
CREATE DATABASE autonome;
CREATE USER autonome WITH PASSWORD 'strong_password';
GRANT ALL PRIVILEGES ON DATABASE autonome TO autonome;
Step 2: Configure Connection

Set DATABASE_URL in .env:
DATABASE_URL=postgresql://autonome:strong_password@localhost:5432/autonome
Step 3: Run Migrations

bun run db:migrate
Step 4: Seed Data

bun run db:seed
Step 5: Verify Schema

bun run db:studio
This launches Drizzle Studio at https://local.drizzle.studio

Database Maintenance

Backups

Manual Backup:
pg_dump -U autonome -h localhost autonome > backup_$(date +%Y%m%d).sql
Automated Backups (cron job):
# Add to crontab
0 2 * * * pg_dump -U autonome autonome > /backups/autonome_$(date +\%Y\%m\%d).sql
Docker Backup:
docker exec autonome-db pg_dump -U autonome autonome > backup.sql

Restore

psql -U autonome -h localhost autonome < backup_20260307.sql
Or with Docker:
cat backup.sql | docker exec -i autonome-db psql -U autonome autonome

Vacuum and Analyze

Regularly optimize database performance:
VACUUM ANALYZE "Models";
VACUUM ANALYZE "Orders";
VACUUM ANALYZE "PortfolioSnapshots";

Index Optimization

Add indexes for frequently queried columns:
CREATE INDEX idx_orders_model_id ON "Orders"("modelId");
CREATE INDEX idx_orders_status ON "Orders"("status");
CREATE INDEX idx_orders_created_at ON "Orders"("createdAt");

Monitoring

Connection Pool Status

Connection pooling is handled by the underlying database driver. Monitor active connections:
SELECT count(*) FROM pg_stat_activity WHERE datname = 'autonome';

Slow Query Log

Enable slow query logging in postgresql.conf:
log_min_duration_statement = 1000  # Log queries > 1 second

Database Size

SELECT pg_size_pretty(pg_database_size('autonome'));

Table Sizes

SELECT 
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)) AS size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

Troubleshooting

Migration Fails

Problem: db:migrate returns errors

Solutions:
  1. Check drizzle/meta/_journal.json for migration state
  2. Manually inspect SQL files in drizzle/
  3. Verify database connection: psql $DATABASE_URL
  4. Check for schema conflicts (duplicate tables/columns)
  5. Use db:push to force sync (development only)

Connection Timeouts

Problem: API can’t connect to database

Solutions:
  1. Verify DATABASE_URL format and credentials
  2. Check firewall rules (port 5432)
  3. Ensure PostgreSQL is running: systemctl status postgresql
  4. Test connection: psql $DATABASE_URL
  5. Check SSL settings (remove ?sslmode=require if not using SSL)

Quoted Identifier Errors

Problem: relation "models" does not exist

Solutions:
  1. PostgreSQL lowercases unquoted identifiers
  2. Always quote table names: SELECT * FROM "Models"
  3. Use Drizzle ORM to avoid manual SQL
  4. Check that migrations preserve quotes
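When raw SQL is unavoidable, identifier quoting can be centralized in a small helper that follows the standard PostgreSQL escaping rule: wrap the name in double quotes and double any embedded quote characters. An illustrative sketch (prefer the ORM where possible):

```typescript
// Quote a PostgreSQL identifier: wrap in double quotes and double any
// embedded quote characters (the standard SQL escaping rule).
// Illustrative helper for hand-written queries.
function quoteIdent(name: string): string {
  return '"' + name.replace(/"/g, '""') + '"';
}

const sql = `SELECT * FROM ${quoteIdent("Models")} WHERE ${quoteIdent("id")} = $1`;
// → SELECT * FROM "Models" WHERE "id" = $1
```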

Data Type Mismatch

Problem: Cannot cast TEXT to NUMERIC

Solutions:
  1. Use explicit casts: CAST("netPortfolio" AS NUMERIC)
  2. Ensure monetary values are valid numbers (no empty strings)
  3. Validate data before insert/update
  4. Use Drizzle’s type system to catch errors at compile time

Best Practices

  1. Always Use Migrations: Don’t manually modify production schema
  2. Test Locally First: Run migrations on dev database before production
  3. Backup Before Migration: Create backup before applying schema changes
  4. Monitor Performance: Use EXPLAIN ANALYZE for slow queries
  5. Use Indexes: Add indexes for foreign keys and frequently filtered columns
  6. Connection Pooling: Let Drizzle handle connection management
  7. SSL in Production: Always use encrypted connections

Next Steps

Backend Deployment

Deploy the API server with database access

Schema Reference

Detailed schema documentation
