Overview

The bot uses a dual logging system:
  1. Application Logs (logger.js): Structured log lines (timestamp, level, message, optional JSON context) for events, errors, and state changes
  2. Tick Logger (tick-logger.js): High-frequency price data capture in JSONL format
Both use daily file rotation and write to the data/ directory.

Application Logger

Design

  • Levels: INFO, WARN, ERROR
  • Output: Both console and file (best-effort)
  • Format: [timestamp] [level] message {context}
  • File Pattern: data/logs/bot-YYYY-MM-DD.log

Implementation

import { appendFileSync, mkdirSync } from 'fs'
import { fileURLToPath } from 'url'
import { dirname, join } from 'path'

const __dirname = dirname(fileURLToPath(import.meta.url))
const LOG_DIR = join(__dirname, '..', '..', 'data', 'logs')

function getLogPath() {
  const date = new Date().toISOString().split('T')[0]
  return join(LOG_DIR, `bot-${date}.log`)
}

function write(level, message, ctx) {
  const ts = new Date().toISOString()
  const line = ctx
    ? `[${ts}] [${level}] ${message} ${JSON.stringify(ctx)}`
    : `[${ts}] [${level}] ${message}`

  // Console output
  if (level === 'ERROR') {
    process.stderr.write(line + '\n')
  } else if (level === 'WARN') {
    process.stderr.write(line + '\n')
  } else {
    process.stdout.write(line + '\n')
  }

  // File output (best-effort)
  try {
    mkdirSync(LOG_DIR, { recursive: true })
    appendFileSync(getLogPath(), line + '\n')
  } catch {
    // If file logging fails, at least console output works
  }
}

const logger = {
  info:  (message, ctx) => write('INFO',  message, ctx),
  warn:  (message, ctx) => write('WARN',  message, ctx),
  error: (message, ctx) => write('ERROR', message, ctx),
}

export default logger

Usage

import logger from './logger/logger.js'

// Simple message
logger.info('Bot started')
logger.warn('Low balance detected')
logger.error('WebSocket connection failed')

// With structured context
logger.info('Interval closed', {
  index: 42,
  result: 'UP',
  betSize: 125.00,
  profit: 12.50
})

logger.warn('High drawdown', {
  level: 'yellow',
  drawdownPct: 8.5,
  bankroll: 4575.00
})

logger.error('API request failed', {
  endpoint: 'https://api.vatic.ai/strike',
  status: 503,
  retries: 3
})

Example Output

Console

[2024-03-15T14:30:00.123Z] [INFO] Bot started
[2024-03-15T14:30:01.456Z] [INFO] Loaded 127 previous intervals
[2024-03-15T14:35:00.789Z] [INFO] Interval closed {"index":128,"result":"UP","betSize":125,"profit":12.5}
[2024-03-15T14:40:00.012Z] [WARN] High drawdown {"level":"yellow","drawdownPct":8.5,"bankroll":4575}
[2024-03-15T14:45:00.345Z] [ERROR] API request failed {"endpoint":"https://api.vatic.ai/strike","status":503,"retries":3}

File (data/logs/bot-2024-03-15.log)

Identical format to console, but all messages (INFO/WARN/ERROR) go to the same file.

Log Rotation

Files rotate automatically at midnight UTC:
data/logs/
├── bot-2024-03-13.log  (38 MB, 450k lines)
├── bot-2024-03-14.log  (42 MB, 485k lines)
└── bot-2024-03-15.log  (12 MB, 140k lines, current)
Retention Strategy: Use logrotate or a cron job to compress/delete old logs:
# Compress logs older than 7 days
find data/logs -name "bot-*.log" -mtime +7 -exec gzip {} \;

# Delete compressed logs older than 30 days
find data/logs -name "bot-*.log.gz" -mtime +30 -delete
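Wired into cron, those two commands might run nightly; the schedule and absolute paths below are illustrative:

```shell
# crontab entries (paths illustrative; the current day's log is never old enough to match)
0 2 * * * find /path/to/data/logs -name "bot-*.log" -mtime +7 -exec gzip {} \;
15 2 * * * find /path/to/data/logs -name "bot-*.log.gz" -mtime +30 -delete
```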

Error Handling

Best-Effort File Writes: If file logging fails (e.g., disk full, permission error), the logger continues with console output only:
try {
  mkdirSync(LOG_DIR, { recursive: true })
  appendFileSync(getLogPath(), line + '\n')
} catch {
  // Silent failure — console output still works
}
Production Deployment: Monitor disk space on the log volume. A full disk will cause file writes to fail silently.
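One way to catch this before writes start failing is a startup disk-space check. The sketch below uses Node's fs.statfsSync (available since Node 18.15); the 500 MB threshold is an illustrative choice, not part of the codebase:

```javascript
import { statfsSync } from 'fs'

// Returns free space in MB for the volume containing `dir`.
function freeDiskMB(dir) {
  const { bavail, bsize } = statfsSync(dir)
  return (bavail * bsize) / (1024 * 1024)
}

// Example: warn at startup if the log volume is nearly full (threshold illustrative).
if (freeDiskMB('.') < 500) {
  console.error('Low disk space: file logging may start failing silently')
}
```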

Tick Logger

Design

  • Purpose: Capture every BTC price tick (1 per second) for backtesting and analysis
  • Format: JSONL (newline-delimited JSON)
  • File Pattern: data/ticks-YYYY-MM-DD.jsonl
  • Compression: Highly compressible (repetitive structure)

Implementation

import { appendFileSync, mkdirSync } from 'fs'
import { fileURLToPath } from 'url'
import { dirname, join } from 'path'

const __dirname = dirname(fileURLToPath(import.meta.url))
const DATA_DIR = join(__dirname, '..', '..', 'data')

class TickLogger {
  constructor() {
    this._currentDate = null
    this._currentPath = null
  }

  write({ timestamp, price }) {
    const date = new Date().toISOString().split('T')[0]

    // Rotate if date changed
    if (date !== this._currentDate) {
      this._currentDate = date
      this._currentPath = join(DATA_DIR, `ticks-${date}.jsonl`)
      mkdirSync(DATA_DIR, { recursive: true })
    }

    const line = JSON.stringify({ ts: timestamp, price }) + '\n'
    appendFileSync(this._currentPath, line)
  }
}

export default new TickLogger()

Usage

import tickLogger from './logger/tick-logger.js'

// In main loop (runs every second)
setInterval(async () => {  // callback must be async to use await
  const price = await priceSource.getPrice()
  const timestamp = Date.now()

  tickLogger.write({ timestamp, price })

  // ... rest of bot logic
}, 1000)

File Format

JSONL Structure

Each line is a standalone JSON object:
{"ts":1678901400000,"price":24150}
{"ts":1678901401000,"price":24150.25}
{"ts":1678901402000,"price":24149.75}
{"ts":1678901403000,"price":24150.5}
{"ts":1678901404000,"price":24151}
Fields:
  • ts (number): Unix timestamp in milliseconds
  • price (number): BTC/USD price
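For completeness, a day's ticks can also be read back in Node. This is a sketch that streams line by line rather than loading the whole file into memory; the function name is illustrative:

```javascript
import { createReadStream } from 'fs'
import { createInterface } from 'readline'

// Stream a JSONL tick file, yielding one parsed object per non-empty line.
async function loadTicks(path) {
  const ticks = []
  const rl = createInterface({ input: createReadStream(path), crlfDelay: Infinity })
  for await (const line of rl) {
    if (line.trim()) ticks.push(JSON.parse(line))
  }
  return ticks
}
```

Usage: `const ticks = await loadTicks('data/ticks-2024-03-15.jsonl')`.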

Example File (data/ticks-2024-03-15.jsonl)

$ wc -l data/ticks-2024-03-15.jsonl
86400 data/ticks-2024-03-15.jsonl  # 1 tick/sec × 60 sec × 60 min × 24 hr = 86,400 lines/day

$ du -h data/ticks-2024-03-15.jsonl
3.5M data/ticks-2024-03-15.jsonl  # ~40 bytes/line × 86,400 lines ≈ 3.5 MB/day

$ gzip -9 data/ticks-2024-03-15.jsonl
$ du -h data/ticks-2024-03-15.jsonl.gz
580K data/ticks-2024-03-15.jsonl.gz  # ~84% compression (highly repetitive structure)

Analysis Examples

Load Ticks in Python

import json
from datetime import datetime

ticks = []
with open('data/ticks-2024-03-15.jsonl') as f:
    for line in f:
        tick = json.loads(line)
        ticks.append({
            'timestamp': datetime.fromtimestamp(tick['ts'] / 1000),
            'price': tick['price']
        })

print(f"Loaded {len(ticks)} ticks")
print(f"Price range: ${min(t['price'] for t in ticks):.2f} - ${max(t['price'] for t in ticks):.2f}")

Compute 5-Min OHLC Bars

import pandas as pd

# Load ticks
df = pd.read_json('data/ticks-2024-03-15.jsonl', lines=True)
df['timestamp'] = pd.to_datetime(df['ts'], unit='ms')
df.set_index('timestamp', inplace=True)

# Resample to 5-min bars
ohlc = df['price'].resample('5min').ohlc()  # the '5T' alias is deprecated in recent pandas
print(ohlc.head())

# Output:
#                          open      high       low     close
# timestamp
# 2024-03-15 00:00:00  24150.00  24152.50  24148.00  24150.25
# 2024-03-15 00:05:00  24150.25  24155.00  24150.00  24153.75
# 2024-03-15 00:10:00  24153.75  24158.25  24152.50  24157.00

Search for Price Spikes

# Find all ticks where price jumped >$50 in 1 second
jq -sc '. as $a | range(1; $a|length) as $i
  | select((($a[$i].price - $a[$i-1].price) | fabs) > 50)
  | $a[$i]' data/ticks-2024-03-15.jsonl

# Output:
# {"ts":1678901423000,"price":24201.5}   # Spike: $24150.00 -> $24201.50 (+$51.50)
# {"ts":1678905789000,"price":24088.25}  # Drop: $24145.00 -> $24088.25 (-$56.75)

Compression Strategy

# Compress yesterday's tick file
DATE=$(date -d yesterday +%Y-%m-%d)
gzip -9 data/ticks-${DATE}.jsonl

# Automated daily compression (cron)
0 1 * * * gzip -9 /path/to/data/ticks-$(date -d yesterday +\%Y-\%m-\%d).jsonl
Storage Math:
  • Uncompressed: 3.5 MB/day × 365 days = 1.3 GB/year
  • Compressed (gzip): 580 KB/day × 365 days = 206 MB/year
High-frequency tick data is cheap to store long-term.

Centralized Logging Integration

Ship Logs to Elasticsearch

import { Client } from '@elastic/elasticsearch'

const esClient = new Client({ node: 'http://localhost:9200' })

function write(level, message, ctx) {
  const ts = new Date().toISOString()
  const line = ctx ? `[${ts}] [${level}] ${message} ${JSON.stringify(ctx)}` : `[${ts}] [${level}] ${message}`
  
  // Console + file (original behavior)
  // ... (console.log, appendFileSync)
  
  // Also ship to Elasticsearch
  esClient.index({
    index: `polymarket-bot-logs-${new Date().toISOString().split('T')[0]}`,
    document: {
      timestamp: ts,
      level,
      message,
      context: ctx || {},
      host: process.env.HOSTNAME
    }
  }).catch(err => console.error('Failed to ship log to ES:', err))
}

Use Winston for Production

Replace the minimal logger with Winston for advanced features:
import winston from 'winston'

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ 
      filename: 'data/logs/bot.log',
      maxsize: 10 * 1024 * 1024,  // 10 MB
      maxFiles: 10                // Keep 10 files (100 MB total)
    }),
    new winston.transports.Http({
      host: 'logs.example.com',
      port: 8080,
      path: '/logs'
    })
  ]
})

export default logger

Log Analysis

Parse Structured Context

# Extract all interval close events
grep '"Interval closed"' data/logs/bot-2024-03-15.log | jq -r '.context | "\(.index) \(.result) \(.profit)"'

# Output:
# 128 UP 12.50
# 129 DOWN -100.00
# 130 UP 8.25

Count Errors by Type

# Count ERROR lines by message
grep '\[ERROR\]' data/logs/bot-*.log | awk -F'] ' '{print $3}' | cut -d' ' -f1-3 | sort | uniq -c | sort -rn

# Output:
#  42 API request failed
#  15 WebSocket connection failed
#   8 Insufficient balance
#   3 Rate limit exceeded

Daily Summary Report

#!/bin/bash
LOG_FILE="data/logs/bot-$(date +%Y-%m-%d).log"

echo "=== Daily Summary ==="
echo "Total lines: $(wc -l < "$LOG_FILE")"
echo "INFO:  $(grep -c '\[INFO\]' "$LOG_FILE")"
echo "WARN:  $(grep -c '\[WARN\]' "$LOG_FILE")"
echo "ERROR: $(grep -c '\[ERROR\]' "$LOG_FILE")"
echo
echo "=== Most Common Messages ==="
awk -F'] ' '{print $3}' "$LOG_FILE" | cut -d' ' -f1-4 | sort | uniq -c | sort -rn | head -10

Debugging with Logs

Trace Interval Lifecycle

# Follow logs in real-time
tail -f data/logs/bot-$(date +%Y-%m-%d).log

# Example output:
[2024-03-15T14:30:00.123Z] [INFO] Interval 42 started
[2024-03-15T14:30:15.456Z] [INFO] Prediction captured {"probability":0.6843,"direction":"UP"}
[2024-03-15T14:31:00.789Z] [INFO] Early prediction snapshot {"betSize":125,"evAtCapture":0.0843}
[2024-03-15T14:34:30.012Z] [INFO] Final prediction snapshot {"probability":0.7021,"direction":"UP"}
[2024-03-15T14:35:00.345Z] [INFO] Interval closed {"index":42,"result":"UP","profit":12.5}

Reproduce Production Issue

# Extract the context from a failed interval (log lines are not pure JSON,
# so pull out the trailing context blob before piping to jq)
grep 'Interval closed' data/logs/bot-2024-03-15.log | grep -o '{.*}' | jq 'select(.index == 42)'

# Output:
{
  "index": 42,
  "result": "DOWN",
  "earlyPrediction": { "probability": 0.6843, "direction": "UP" },
  "earlyPredictionCorrect": false,
  "betSize": 125,
  "profit": -125,
  "drawdownLevel": "yellow",
  "abstentionReason": null
}

Performance Monitoring

Log Volume

Component      Logs/Second   Daily Volume (Uncompressed)
Application    1-5           5-20 MB
Tick Logger    1             3.5 MB
Total          2-6           8.5-23.5 MB/day

Disk I/O Impact

  • Synchronous writes: appendFileSync() blocks the event loop (~1ms/write)
  • Acceptable for low frequency: Application logs are sparse (1-5/sec)
  • High frequency: Tick logger writes every second but payload is tiny (40 bytes)
Production Optimization: For extreme write frequency (>100 Hz), use a write buffer or switch to asynchronous appendFile() with a queue.
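A minimal sketch of such a buffered writer (the class name, flush interval, and crash-loss tradeoff are illustrative, not part of the codebase):

```javascript
import { appendFile } from 'fs/promises'

// Batches lines in memory and flushes them with one async append per
// interval, so high-frequency writes never block the event loop.
class BufferedWriter {
  constructor(path, flushMs = 1000) {
    this._path = path
    this._buffer = []
    this._flushing = Promise.resolve()
    const timer = setInterval(() => this.flush(), flushMs)
    timer.unref?.() // don't keep the process alive just to flush
  }

  write(obj) {
    this._buffer.push(JSON.stringify(obj))
  }

  flush() {
    if (this._buffer.length === 0) return this._flushing
    const batch = this._buffer.splice(0).join('\n') + '\n'
    // Chain flushes so batches land on disk in order even if a write is slow.
    this._flushing = this._flushing.then(() => appendFile(this._path, batch))
    return this._flushing
  }
}
```

The tradeoff: up to flushMs of buffered ticks can be lost on a crash, in exchange for writes that never block.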

Interval Tracking

State machine generating log events

History Store

Persistent JSON storage for intervals

Metrics

Analyze performance from logs
