Macondo Link Manager uses PostgreSQL 15 with Prisma ORM for type-safe database access and migrations.
## Database Architecture

### Domain Model

The application follows a hierarchical domain model:

```
User (Agency Staff)
│
Client (Customer)
├── Campaign
│   └── Link
└── Link (can exist without campaign)
    ├── Tags (many-to-many)
    └── Clicks
```
### Key Principles

- Links are global to the organization
- Metrics use persisted state (no runtime heuristics)
- Aggregations are a backend responsibility
- Critical operations are transactional
- Cascade deletes maintain referential integrity
## Schema Overview

The database schema is defined in `api/prisma/schema.prisma`.

### Core Models
#### User

Agency staff members who create links:

```prisma
model User {
  id        String   @id @default(uuid())
  email     String   @unique
  name      String
  avatarUrl String?
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  links Link[]

  @@map("users")
}
```

- Authenticated via Google OAuth
- Email restricted to the corporate domain
- One user can create many links
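The domain restriction above can be enforced with a simple check at login time. A minimal sketch, assuming a placeholder `example.com` domain (the real corporate domain is configured elsewhere in the application):

```typescript
// ALLOWED_DOMAIN is an assumption for illustration; the actual
// corporate domain is part of the application's OAuth configuration.
const ALLOWED_DOMAIN = "example.com";

function isCorporateEmail(email: string): boolean {
  // Use the last "@" so addresses with quoted local parts still resolve
  const atIndex = email.lastIndexOf("@");
  if (atIndex === -1) return false;
  return email.slice(atIndex + 1).toLowerCase() === ALLOWED_DOMAIN;
}

console.log(isCorporateEmail("staff@example.com")); // true
console.log(isCorporateEmail("someone@gmail.com")); // false
```

A check like this would run in the OAuth callback, rejecting sign-ins before a `User` row is ever created.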
#### Client

Customers of the agency:

```prisma
model Client {
  id        String   @id @default(uuid())
  name      String   @unique
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  links     Link[]
  campaigns Campaign[]

  @@map("clients")
}
```

- Unique client names
- Can have multiple campaigns
- Can have links associated directly
#### Campaign

Marketing campaigns:

```prisma
model Campaign {
  id        String   @id @default(uuid())
  name      String
  clientId  String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  client Client @relation(fields: [clientId], references: [id], onDelete: Cascade)
  links  Link[]

  @@map("campaigns")
}
```

- Always belongs to a client
- Cascade delete: deleting a client removes its campaigns
- Links can optionally belong to a campaign
#### Link

Shortened URLs:

```prisma
model Link {
  id          String   @id @default(uuid())
  originalUrl String   @db.Text
  shortCode   String   @unique
  userId      String
  clientId    String
  campaignId  String?
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  user     User      @relation(fields: [userId], references: [id], onDelete: Cascade)
  client   Client    @relation(fields: [clientId], references: [id], onDelete: Cascade)
  campaign Campaign? @relation(fields: [campaignId], references: [id], onDelete: SetNull)
  tags     LinkTag[]
  clicks   Click[]

  @@index([userId])
  @@index([clientId])
  @@index([campaignId])
  @@map("links")
}
```

- `shortCode` is the unique identifier for the short URL
- `originalUrl` is stored as TEXT for unlimited length
- `campaignId` is optional (nullable)

Cascade behavior:

- Deleting a user → deletes their links
- Deleting a client → deletes their links
- Deleting a campaign → sets `campaignId` to NULL

Foreign keys are indexed for performance.
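The schema only requires that `shortCode` be unique; how codes are produced is up to the API layer. One common approach is a short random base62 string, sketched here (the length, alphabet, and function name are illustrative assumptions, not the project's actual implementation):

```typescript
import { randomBytes } from "crypto";

const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Hypothetical generator: `length` random base62 characters.
// The modulo introduces a slight bias, which is acceptable for
// short links; a real implementation must also retry on the rare
// unique-constraint collision when inserting.
function generateShortCode(length = 7): string {
  const bytes = randomBytes(length);
  let code = "";
  for (let i = 0; i < length; i++) {
    code += ALPHABET[bytes[i] % ALPHABET.length];
  }
  return code;
}
```

At 62^7 (about 3.5 trillion) possible codes, collisions stay negligible at this scale, and the `@unique` constraint on `shortCode` catches the rare duplicate at insert time.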
#### Tag

Categorization system:

```prisma
model Tag {
  id        String   @id @default(uuid())
  name      String   @unique
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  links LinkTag[]

  @@map("tags")
}
```

- Unique tag names
- Many-to-many relationship with links
- Automatically created when used
#### LinkTag (Junction Table)

Many-to-many relationship:

```prisma
model LinkTag {
  linkId     String
  tagId      String
  assignedAt DateTime @default(now())

  link Link @relation(fields: [linkId], references: [id], onDelete: Cascade)
  tag  Tag  @relation(fields: [tagId], references: [id], onDelete: Cascade)

  @@id([linkId, tagId])
  @@index([tagId])
  @@map("link_tags")
}
```

- Composite primary key prevents duplicates
- Cascade delete removes orphaned relationships
- The API handles this transparently (sends/receives a `string[]` of tag names)
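One way an API layer could expand a `string[]` of tag names into writes against this junction table is Prisma's nested `connectOrCreate`, which also covers the "automatically created when used" behavior of tags. A sketch of such a translation helper (the function name and exact payload shape are illustrative, not the project's actual code):

```typescript
// Builds the nested-write payload a link create/update could pass as
// its `tags` field. Because Tag.name is @unique, connectOrCreate
// reuses an existing tag or creates it in the same statement.
function buildTagConnections(tagNames: string[]) {
  return {
    create: tagNames.map((name) => ({
      tag: {
        connectOrCreate: {
          where: { name },
          create: { name },
        },
      },
    })),
  };
}

// Example payload for two tags:
const payload = buildTagConnections(["promo", "q3-launch"]);
console.log(JSON.stringify(payload, null, 2));
```

A hypothetical call site would then look like `prisma.link.create({ data: { ...fields, tags: buildTagConnections(names) } })`, with cascade deletes keeping the junction rows consistent afterwards.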
#### Click

Analytics and metrics:

```prisma
model Click {
  id        String   @id @default(uuid())
  linkId    String
  timestamp DateTime @default(now())
  ipAddress String?
  userAgent String?  @db.Text
  country   String?
  city      String?
  isBot     Boolean  @default(false)
  botReason String?

  link Link @relation(fields: [linkId], references: [id], onDelete: Cascade)

  @@index([linkId])
  @@index([timestamp])
  @@map("clicks")
}
```

- Bot detection at write time: the `isBot` field is persisted
- GeoIP data from MaxMind GeoLite2 (country, city)
- User agent stored as TEXT
- Indexed on `linkId` and `timestamp` for efficient queries
- Metrics filter out `isBot = true`
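To illustrate the write-time principle, here is a minimal sketch of a classifier and the resulting metrics filter. The user-agent patterns and reason strings are assumptions; the real classifier is an implementation detail of the API and is likely more thorough:

```typescript
// Illustrative write-time check: runs once when the click is recorded,
// and the verdict is persisted into isBot / botReason.
const BOT_UA_PATTERN = /bot|crawler|spider|curl|wget/i;

function classifyClick(
  userAgent: string | null
): { isBot: boolean; botReason: string | null } {
  if (!userAgent) return { isBot: true, botReason: "missing user agent" };
  if (BOT_UA_PATTERN.test(userAgent)) {
    return { isBot: true, botReason: "bot user agent" };
  }
  return { isBot: false, botReason: null };
}

// Because the flag is persisted, metrics reduce to a simple filter
// (the same predicate a Prisma `where: { isBot: false }` expresses):
function countHumanClicks(clicks: { isBot: boolean }[]): number {
  return clicks.filter((c) => !c.isBot).length;
}
```

This is what "metrics use persisted state (no runtime heuristics)" means in practice: classification cost is paid once per click, and every later aggregation is a plain indexed filter.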
## Database Migrations

Prisma uses a migration system to evolve the schema safely.

### Development Workflow

1. Edit `api/prisma/schema.prisma`:

   ```prisma
   model Link {
     // ... existing fields

     // Add new field
     description String?
   }
   ```

2. Create and apply the migration:

   ```bash
   cd api
   npx prisma migrate dev --name add_link_description
   ```

   This will:
   - Generate a SQL migration file
   - Apply the migration to the development database
   - Regenerate the Prisma Client

3. Check the generated SQL in `api/prisma/migrations/*/migration.sql`:

   ```sql
   ALTER TABLE "links" ADD COLUMN "description" TEXT;
   ```

4. Commit both the schema and the migration files:

   ```bash
   git add api/prisma/schema.prisma
   git add api/prisma/migrations/
   git commit -m "feat: add description field to links"
   ```
### Production Deployment

**Never use `prisma migrate dev` in production.** Always use `prisma migrate deploy`:

```bash
cd api
npx prisma migrate deploy
```

This applies pending migrations without prompts or side effects.
### Migration History

View migration history:

```bash
cd api
npx prisma migrate status
```

Existing migrations:

| Date       | Name                             | Description                  |
|------------|----------------------------------|------------------------------|
| 2025-10-29 | initial_setup                    | Initial database schema      |
| 2026-01-14 | add_geoip_to_clicks              | Added country/city fields    |
| 2026-02-05 | add_bot_classification_to_clicks | Added isBot/botReason fields |
## Common Operations

### Prisma Studio

Run `npx prisma studio` from `api/` to open the visual database browser at http://localhost:5555, where you can:

- View and edit data
- Browse relationships
- Test queries
### Generate Prisma Client

After schema changes:

```bash
cd api
npx prisma generate
```

This generates TypeScript types and query methods.
### Validate Schema

Check for errors:

```bash
cd api
npx prisma validate
```

To auto-format the schema file, run `npx prisma format`.
### Pull Database Schema

Introspect an existing database:

```bash
cd api
npx prisma db pull
```

This overwrites your schema file. Use with caution.
### Push Schema (Development Only)

Quickly sync the schema without migrations:

```bash
cd api
npx prisma db push
```

Only for prototyping. Don't use in production or with important data.
### Reset Database

**Destructive operation. Deletes all data.**

```bash
cd api
npx prisma migrate reset
```

This will:

- Drop the database
- Create a new database
- Apply all migrations
- Run the seed script (if configured)
## Backup and Restore

### Create Backup

```bash
# Backup entire database
pg_dump $DATABASE_URL > backup.sql

# Backup with compression
pg_dump $DATABASE_URL | gzip > backup.sql.gz

# Backup specific tables
pg_dump $DATABASE_URL -t users -t clients -t links > backup_partial.sql
```

### Restore Backup

```bash
# Restore from SQL file
psql $DATABASE_URL < backup.sql

# Restore from compressed file
gunzip -c backup.sql.gz | psql $DATABASE_URL
```
### Automated Backup Script

```bash
#!/bin/bash
# backup-db.sh

DATABASE_URL="postgresql://user:pass@host:5432/db"
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
FILENAME="macondo_links_${TIMESTAMP}.sql.gz"

mkdir -p "$BACKUP_DIR"
pg_dump "$DATABASE_URL" | gzip > "$BACKUP_DIR/$FILENAME"
echo "Backup created: $BACKUP_DIR/$FILENAME"

# Keep only the last 30 days
find "$BACKUP_DIR" -name "macondo_links_*.sql.gz" -mtime +30 -delete
```

Schedule with cron:

```
# Daily backup at 2 AM
0 2 * * * /path/to/backup-db.sh
```
## Database Seeding

Currently, the project does not include a seed script. To create one:

1. Create `api/prisma/seed.ts`:

   ```typescript
   import { PrismaClient } from '@prisma/client'

   const prisma = new PrismaClient()

   async function main() {
     // Create test user
     const user = await prisma.user.create({
       data: {
         email: '[email protected]',
         name: 'Test User',
         avatarUrl: 'https://example.com/avatar.png',
       },
     })

     // Create test client
     const client = await prisma.client.create({
       data: {
         name: 'Test Client',
       },
     })

     // Create test campaign
     const campaign = await prisma.campaign.create({
       data: {
         name: 'Test Campaign',
         clientId: client.id,
       },
     })

     // Create test link
     const link = await prisma.link.create({
       data: {
         originalUrl: 'https://example.com',
         shortCode: 'testlink',
         userId: user.id,
         clientId: client.id,
         campaignId: campaign.id,
       },
     })

     console.log('✅ Database seeded successfully')
     console.log({ user, client, campaign, link })
   }

   main()
     .catch((e) => {
       console.error('❌ Seed failed:', e)
       process.exit(1)
     })
     .finally(async () => {
       await prisma.$disconnect()
     })
   ```

2. Register the seed command in `api/package.json`:

   ```json
   {
     "prisma": {
       "seed": "ts-node prisma/seed.ts"
     }
   }
   ```

3. Run the seed:

   ```bash
   cd api
   npx prisma db seed
   ```
## Schema Evolution

### Adding a Field

```prisma
model Link {
  // ... existing fields

  // Add optional field (safe)
  description String?
}
```

Migration:

```bash
npx prisma migrate dev --name add_link_description
```
### Adding a Required Field

For existing data, use a multi-step approach:

1. Add the field as nullable:

   ```prisma
   model Link {
     // ... existing fields
     views Int?
   }
   ```

   ```bash
   npx prisma migrate dev --name add_views_nullable
   ```

2. Backfill the existing rows:

   ```bash
   npx prisma studio
   # Or write a script to set default values
   ```

3. Make the field required with a default:

   ```prisma
   model Link {
     // ... existing fields
     views Int @default(0)
   }
   ```

   ```bash
   npx prisma migrate dev --name make_views_required
   ```
### Renaming a Field

Renaming generates a drop + add migration, which loses data. Use `@map` instead.

Safe approach:

```prisma
model Link {
  // Keep the database column name, change only the Prisma field name
  numberOfViews Int @map("views")
}
```
### Deleting a Field

Deleting a field permanently removes its data. Back up first.

```prisma
model Link {
  // Remove field
  // description String?
}
```

```bash
npx prisma migrate dev --name remove_link_description
```
## Indexes

Current indexes in the schema:

```prisma
model Link {
  // ...
  @@index([userId])
  @@index([clientId])
  @@index([campaignId])
}

model Click {
  // ...
  @@index([linkId])
  @@index([timestamp])
}

model LinkTag {
  // ...
  @@index([tagId])
}
```

Add indexes for frequently queried fields:

```prisma
model Link {
  // ...
  @@index([createdAt])
  // No explicit index needed on shortCode: @unique already creates one
}
```
## Query Optimization

```typescript
// Bad: N+1 query problem
const links = await prisma.link.findMany()
for (const link of links) {
  const clicks = await prisma.click.count({ where: { linkId: link.id } })
}

// Good: single query with a relation count
const links = await prisma.link.findMany({
  include: {
    _count: {
      select: { clicks: true },
    },
  },
})
```
## Connection Pooling

For production, use PgBouncer or Prisma connection pooling:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // Enable connection pooling
  // directUrl = env("DIRECT_DATABASE_URL")
}
```
## Monitoring

### Query Logging

Enable Prisma query logs:

```typescript
const prisma = new PrismaClient({
  log: ['query', 'info', 'warn', 'error'],
})
```
### Slow Query Analysis

Identify slow queries in PostgreSQL (the second query requires the `pg_stat_statements` extension):

```sql
-- Enable slow query logging (statements over 1000 ms)
ALTER DATABASE macondo_links SET log_min_duration_statement = 1000;

-- View slow queries (column names for PostgreSQL 13+)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
### Database Size

Check database size:

```sql
SELECT
  pg_size_pretty(pg_database_size('macondo_links')) AS database_size,
  pg_size_pretty(pg_total_relation_size('links')) AS links_table_size,
  pg_size_pretty(pg_total_relation_size('clicks')) AS clicks_table_size;
```
## Troubleshooting

### Migration failed: column already exists

Reset the migration state:

```bash
# Mark the migration as applied without running it
npx prisma migrate resolve --applied <migration_name>
```
### Prisma Client out of sync

Regenerate the client:

```bash
cd api
npx prisma generate
```

Or in Docker:

```bash
docker-compose exec api npx prisma generate
```
### Cannot connect to database

Check the connection string:

```bash
# Test the connection
psql $DATABASE_URL

# Verify the DATABASE_URL format
echo $DATABASE_URL
# Should be: postgresql://user:password@host:port/database
```
### Database locked or slow queries

Check for long-running queries:

```sql
SELECT pid, now() - pg_stat_activity.query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;
```

Kill a stuck query (using a `pid` from the result above):

```sql
SELECT pg_terminate_backend(pid);
```
### Migration conflicts in a team

If multiple developers create migrations:

```bash
# Pull the latest migrations
git pull

# Reset and reapply (development only; this wipes local data)
npx prisma migrate reset

# Or manually resolve conflicts in the migration files
```
## Best Practices

### Schema Design

- Use UUIDs for primary keys (better for distributed systems)
- Add `createdAt` and `updatedAt` to all models
- Use `@map` and `@@map` for snake_case database conventions
- Add indexes on foreign keys and frequently queried fields
- Use `@db.Text` for unlimited-length strings

### Migrations

- Always review generated SQL before applying
- Never edit migration files after they have been applied
- Use descriptive migration names
- Test migrations on staging before production
- Back up the database before running migrations

### Queries

- Use `select` to fetch only the fields you need
- Use `include` for relations instead of separate queries
- Leverage Prisma's type safety
- Use transactions for multi-step operations
- Avoid N+1 queries with proper includes

### Security

- Never log sensitive data
- Use parameterized queries (Prisma does this automatically)
- Restrict database user permissions
- Use SSL for database connections in production
- Regularly update Prisma and dependencies
## Next Steps