Overview

The Exness Trading Platform uses three database systems, each optimized for specific use cases:

PostgreSQL

Primary database for user accounts, orders, and transactional data

TimescaleDB

Time-series database for historical market data and candles

MongoDB

Document store for account snapshots and backups

PostgreSQL Configuration

Database Information

  • Image: postgres:16-alpine. Official PostgreSQL 16 Alpine Linux image for a minimal footprint.
  • Container Name: exness-postgres. Docker container identifier.
  • Port: 5434:5432. Maps host port 5434 to container port 5432 to avoid conflicts with a local PostgreSQL installation.
  • Database: exness. Primary database name.
  • Credentials: postgresql / postgresql. Default username and password for development.
Change these credentials in production. Never use default credentials.

Connection String

DATABASE_URL=postgresql://postgresql:postgresql@localhost:5434/exness
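Services sometimes need the host, port, or database name individually rather than the full URL. Node's WHATWG URL parser accepts postgresql:// URLs, so the single environment variable can be split as needed. The helper below is an illustrative sketch, not part of the codebase:

```typescript
// Sketch: pulling the individual settings back out of DATABASE_URL so that
// health probes or connection pools can reuse the same env var.
function parseDatabaseUrl(raw: string) {
  const url = new URL(raw);
  return {
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    host: url.hostname,
    port: Number(url.port || 5432), // PostgreSQL default when omitted
    database: url.pathname.replace(/^\//, ""),
  };
}

const cfg = parseDatabaseUrl(
  "postgresql://postgresql:postgresql@localhost:5434/exness"
);
console.log(cfg.host, cfg.port, cfg.database); // localhost 5434 exness
```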

Health Check

PostgreSQL includes a health check to ensure the database is ready:
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgresql -d exness"]
  interval: 10s
  timeout: 5s
  retries: 5
Services wait for this health check to pass before starting.

Prisma Schema

The platform uses Prisma ORM for type-safe database operations. The schema is defined in packages/db/prisma/schema.prisma.

Schema Overview

generator client {
  provider = "prisma-client-js"
  output   = "../generated/prisma"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

User Model

Stores user account information:
model User {
  id      Int     @id @default(autoincrement())
  userID  String  @unique
  email   String  @unique
  balance Float   @default(0)
  Orders  Orders[]
}
  • id (Int): Auto-incrementing primary key.
  • userID (String): Unique user identifier for application logic.
  • email (String): Unique email address for authentication.
  • balance (Float): User’s account balance in USD. Defaults to 0.
  • Orders (Orders[]): One-to-many relationship with the Orders model.

Orders Model

Stores trading order information:
model Orders {
  orderId     String    @id
  userId      String
  user        User      @relation(fields: [userId], references: [userID], onDelete: Cascade)
  symbol      Symbol
  type        OrderSide
  quantity    Float
  leverage    Int
  takeProfit  Float?
  stopLoss    Float?
  slippage    Float?
  openPrice   Float
  closePrice  Float
  openTime    DateTime
  closeTime   DateTime
  profitLoss  Float

  @@index([userId])
}
  • orderId (String): Unique order identifier (primary key).
  • userId (String): Foreign key referencing User.userID with cascade delete.
  • symbol (Symbol enum): Trading pair: btc, sol, or eth.
  • type (OrderSide enum): Order direction: buy or sell.
  • quantity (Float): Order quantity in base currency.
  • leverage (Int): Leverage multiplier (1x, 2x, 5x, etc.).
  • takeProfit (Float?): Optional take-profit price level.
  • stopLoss (Float?): Optional stop-loss price level.
  • slippage (Float?): Optional maximum allowed slippage.
  • openPrice (Float): Price at which the order was opened.
  • closePrice (Float): Price at which the order was closed.
  • openTime (DateTime): Timestamp at which the order was opened.
  • closeTime (DateTime): Timestamp at which the order was closed.
  • profitLoss (Float): Realized profit or loss in USD.

Enums

enum Symbol {
  btc
  sol
  eth
}

enum OrderSide {
  buy
  sell
}
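For reference, the profitLoss column can be reproduced from the other columns under the usual linear P&L convention: a buy gains when the price rises, a sell when it falls. The helper below is an illustrative sketch, not the platform's actual settlement code; under this convention leverage scales the required margin, not the per-unit P&L.

```typescript
// Sketch: realized P&L from an order's open/close prices (illustrative only).
type OrderSide = "buy" | "sell";

function realizedPnl(
  side: OrderSide,
  openPrice: number,
  closePrice: number,
  quantity: number
): number {
  // Buys profit when the close is above the open; sells profit when below.
  const direction = side === "buy" ? 1 : -1;
  return (closePrice - openPrice) * quantity * direction;
}

// A 0.5 BTC long opened at 60,000 and closed at 61,000 realizes +500 USD.
console.log(realizedPnl("buy", 60_000, 61_000, 0.5)); // 500
```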

Database Migrations

The platform uses Prisma Migrate for database schema management.

Migration Process

1. Migration Service

The db-migrate service runs automatically on startup:
db-migrate:
  command: >
    sh -c "
      echo 'Waiting for PostgreSQL...' &&
      until bun run --silent -e 'import pkg from \"pg\"; const {Client} = pkg; const client = new Client({connectionString: process.env.DATABASE_URL}); await client.connect(); await client.end();' 2>/dev/null; do
        sleep 2;
      done &&
      echo 'PostgreSQL is ready!' &&
      cd /app/packages/db &&
      echo 'Running Prisma migrations...' &&
      bun run db:deploy &&
      echo 'Migrations completed!'
    "
2. Wait for Database

The migration service waits for PostgreSQL to be healthy before proceeding.
3. Run Migrations

Executes bun run db:deploy, which runs prisma migrate deploy. This applies all pending migrations to the database.
4. Signal Completion

The service exits successfully, triggering dependent services to start.
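The wait-then-run pattern in the compose command can be sketched generically in TypeScript: poll a readiness probe on an interval until it succeeds, then hand off to the next stage. Here checkReady stands in for the real pg connection test:

```typescript
// Sketch: generic wait-for-dependency loop, mirroring the shell until-loop
// in the db-migrate service. The probe is injected so it can be anything:
// a pg connection attempt, an HTTP ping, etc.
async function waitFor(
  checkReady: () => Promise<boolean>,
  intervalMs = 2000,
  maxAttempts = 30
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await checkReady()) return; // dependency is up, proceed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`dependency not ready after ${maxAttempts} attempts`);
}
```

In the real service the probe would open and close a pg Client using DATABASE_URL, exactly as the inline Bun script does.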

Migration History

The initial migration creates the User table:
-- CreateTable
CREATE TABLE "User" (
    "id" SERIAL PRIMARY KEY,
    "userID" TEXT NOT NULL UNIQUE,
    "email" TEXT NOT NULL UNIQUE
);

Manual Migration Commands

# In development
cd packages/db
bun run db:migrate

# This runs: prisma migrate dev --skip-generate
Never run prisma migrate reset in production. This will delete all data in the database.

TimescaleDB Configuration

Database Information

  • Image: timescale/timescaledb:latest-pg16. TimescaleDB with PostgreSQL 16 for time-series data.
  • Container Name: exness-timescaledb. Docker container identifier.
  • Port: 5433:5432. Maps host port 5433 to container port 5432.
  • Database: mydb. TimescaleDB database name.
  • Credentials: myuser / mypassword. Default username and password for development.

Connection Configuration

TIMESCALE_DB_USER=myuser
TIMESCALE_DB_PASSWORD=mypassword
TIMESCALE_DB_HOST=timescaledb  # Docker service name
TIMESCALE_DB_PORT=5432         # Internal port
TIMESCALE_DB_NAME=mydb

TimescaleDB Features

TimescaleDB uses hypertables for efficient time-series storage:
-- Create candle data table
CREATE TABLE candles (
  time TIMESTAMPTZ NOT NULL,
  symbol TEXT NOT NULL,
  open DOUBLE PRECISION NOT NULL,
  high DOUBLE PRECISION NOT NULL,
  low DOUBLE PRECISION NOT NULL,
  close DOUBLE PRECISION NOT NULL,
  volume DOUBLE PRECISION NOT NULL
);

-- Convert to hypertable
SELECT create_hypertable('candles', 'time');
Pre-computed aggregations for fast chart loading:
-- 1-minute candles
CREATE MATERIALIZED VIEW candles_1m
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 minute', time) AS bucket,
  symbol,
  first(open, time) AS open,
  max(high) AS high,
  min(low) AS low,
  last(close, time) AS close,
  sum(volume) AS volume
FROM candles
GROUP BY bucket, symbol;

-- 5-minute candles
CREATE MATERIALIZED VIEW candles_5m
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('5 minutes', time) AS bucket,
  symbol,
  first(open, time) AS open,
  max(high) AS high,
  min(low) AS low,
  last(close, time) AS close,
  sum(volume) AS volume
FROM candles
GROUP BY bucket, symbol;
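What time_bucket plus first/max/min/last/sum compute can be mirrored in plain TypeScript, which is handy for unit-testing chart code without a database. The Tick shape and helper below are illustrative, assuming ticks arrive sorted by time:

```typescript
// Sketch: OHLCV bucketing equivalent to the continuous aggregates above.
interface Tick {
  time: number; // epoch milliseconds
  price: number;
  volume: number;
}

interface Candle {
  bucket: number; // bucket start, epoch milliseconds
  open: number;
  high: number;
  low: number;
  close: number;
  volume: number;
}

function toCandles(ticks: Tick[], bucketMs: number): Candle[] {
  const buckets = new Map<number, Candle>();
  for (const t of ticks) {
    const bucket = Math.floor(t.time / bucketMs) * bucketMs; // time_bucket()
    const c = buckets.get(bucket);
    if (!c) {
      // First tick in the bucket initializes open/high/low/close.
      buckets.set(bucket, {
        bucket,
        open: t.price,
        high: t.price,
        low: t.price,
        close: t.price,
        volume: t.volume,
      });
    } else {
      c.high = Math.max(c.high, t.price); // max(high)
      c.low = Math.min(c.low, t.price);   // min(low)
      c.close = t.price;                  // last(close, time)
      c.volume += t.volume;               // sum(volume)
    }
  }
  return [...buckets.values()];
}
```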
Automatic data cleanup for old records:
-- Drop data older than 90 days
SELECT add_retention_policy('candles', INTERVAL '90 days');
Compress old data to save space:
-- Enable compression
ALTER TABLE candles SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'symbol'
);

-- Compress data older than 7 days
SELECT add_compression_policy('candles', INTERVAL '7 days');

MongoDB Configuration

Database Information

  • Image: mongo:7.0. Official MongoDB 7.0 image.
  • Container Name: exness-mongodb. Docker container identifier.
  • Port: 27017:27017. Standard MongoDB port.
  • Database: exness_snapshots. Database for account snapshots.
  • Credentials: admin / admin123. Root credentials for MongoDB.

Connection String

MONGODB_URL='mongodb://admin:admin123@localhost:27017/exness_snapshots?authSource=admin'
The connection string must be enclosed in quotes due to special characters. The authSource=admin parameter is required for authentication.
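If the URL is assembled in code rather than read from the environment, percent-encoding the credentials keeps it valid even when a production password contains reserved characters such as @ or :. A small illustrative helper, not from the codebase:

```typescript
// Sketch: building the MongoDB connection string with safely encoded
// credentials. encodeURIComponent is a no-op for the dev values shown above.
function mongoUrl(
  user: string,
  password: string,
  host: string,
  port: number,
  db: string
): string {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb://${u}:${p}@${host}:${port}/${db}?authSource=admin`;
}

console.log(mongoUrl("admin", "admin123", "localhost", 27017, "exness_snapshots"));
// mongodb://admin:admin123@localhost:27017/exness_snapshots?authSource=admin
```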

Prisma Studio

Prisma Studio, a visual database management tool, is available at http://localhost:5555.

Accessing Prisma Studio

1. Ensure Services are Running

docker compose ps
Verify exness-prisma-studio is running.
2. Open Browser

Navigate to http://localhost:5555.

3. Explore Data

  • View all User records
  • Browse Orders with filters
  • Edit data directly (development only)
  • Execute custom queries
Prisma Studio should only be used in development. Never expose it in production environments.

Backup and Restore

PostgreSQL Backup

# Backup to file
docker compose exec postgres pg_dump -U postgresql exness > backup.sql

# Backup with compression
docker compose exec postgres pg_dump -U postgresql exness | gzip > backup.sql.gz

TimescaleDB Backup

# Backup TimescaleDB
docker compose exec timescaledb pg_dump -U myuser mydb > timescale_backup.sql

# Restore TimescaleDB
docker compose exec -T timescaledb psql -U myuser mydb < timescale_backup.sql

MongoDB Backup

# Backup MongoDB
docker compose exec mongodb mongodump \
  --username admin \
  --password admin123 \
  --authenticationDatabase admin \
  --db exness_snapshots \
  --out /data/backup

# Restore MongoDB
docker compose exec mongodb mongorestore \
  --username admin \
  --password admin123 \
  --authenticationDatabase admin \
  --db exness_snapshots \
  /data/backup/exness_snapshots

Troubleshooting

Check migration logs:
docker compose logs db-migrate
Common issues:
  • PostgreSQL not ready (increase retries)
  • Connection string incorrect
  • Missing migration files
Manually run migrations:
docker compose run --rm db-migrate
Verify database is healthy:
docker compose ps postgres
docker compose logs postgres

# Test connection
docker compose exec postgres psql -U postgresql -d exness -c "SELECT 1;"
Check port conflicts:
# Port 5434 should be available
lsof -i :5434
Verify TimescaleDB extension:
docker compose exec timescaledb psql -U myuser -d mydb -c "SELECT * FROM pg_extension WHERE extname = 'timescaledb';"
Check connection:
docker compose exec timescaledb pg_isready -U myuser -d mydb
Ensure migrations completed:
docker compose logs db-migrate
docker compose logs prisma-studio
Prisma Studio depends on successful migrations.

Next Steps

Environment Variables

Configure database connection strings
