Chainbench can export detailed benchmark metrics to PostgreSQL with the TimescaleDB extension, enabling advanced time-series analysis, long-term storage, and custom visualization of your load testing results.

Overview

When enabled, Chainbench streams real-time metrics to a TimescaleDB database during benchmark execution. This allows you to:
  • Store metrics from multiple test runs in a centralized database
  • Perform complex queries and aggregations on historical data
  • Build custom dashboards using tools like Grafana
  • Compare performance across different time periods or configurations
  • Retain detailed metrics beyond what CSV/JSON exports provide

Prerequisites

Setting Up TimescaleDB

You need a running TimescaleDB instance. Any of the following works:
  • Timescale Cloud (managed service)
  • Self-hosted on your own infrastructure
  • An existing PostgreSQL server with the TimescaleDB extension installed
For a self-hosted setup, install PostgreSQL and add the TimescaleDB extension:
# Install TimescaleDB (Ubuntu/Debian; requires the Timescale apt repository)
sudo apt install timescaledb-2-postgresql-16

# Enable the extension in postgresql.conf (path varies by distro and version)
sudo sh -c "echo \"shared_preload_libraries = 'timescaledb'\" >> /etc/postgresql/16/main/postgresql.conf"

# Restart PostgreSQL
sudo systemctl restart postgresql

# Create database and enable extension
psql -c "CREATE DATABASE chainbench;"
psql -d chainbench -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"

Basic Usage

Enable TimescaleDB export with the --timescale flag:
chainbench start \
  --profile ethereum.general \
  --target https://your-node.com \
  --timescale \
  --pg-host localhost \
  --pg-port 5432 \
  --pg-username postgres \
  --pg-password your_password \
  --users 100 \
  --test-time 1h \
  --headless --autoquit
All PostgreSQL connection parameters (host, port, username, password) are required when using the --timescale flag. The benchmark will exit with an error if any are missing.

Configuration Options

Required Parameters

When --timescale is enabled, you must provide:
| Parameter | Flag          | Default  | Description                      |
|-----------|---------------|----------|----------------------------------|
| Host      | --pg-host     | None     | PostgreSQL server hostname or IP |
| Port      | --pg-port     | 5432     | PostgreSQL server port           |
| Username  | --pg-username | postgres | Database username                |
| Password  | --pg-password | None     | Database password                |

Validation

Chainbench validates the configuration on startup:
import sys

import click

if timescale and any(pg_arg is None for pg_arg in (pg_host, pg_port, pg_username, pg_password)):
    click.echo(
        "PG connection parameters are required "
        "when --timescale flag is used: pg_host, pg_port, pg_username, pg_password"
    )
    sys.exit(1)
Missing parameters will cause immediate exit before the test begins.

Database Schema

Chainbench automatically creates and manages the necessary database schema:

Metrics Table

Time-series data for each request:
CREATE TABLE IF NOT EXISTS request_metrics (
  time TIMESTAMPTZ NOT NULL,
  test_run_id TEXT,
  method TEXT,
  response_time DOUBLE PRECISION,
  response_length INTEGER,
  exception TEXT,
  success BOOLEAN
);

SELECT create_hypertable('request_metrics', 'time', if_not_exists => TRUE);

Users Table

Concurrent user count over time:
CREATE TABLE IF NOT EXISTS user_count (
  time TIMESTAMPTZ NOT NULL,
  test_run_id TEXT,
  user_count INTEGER
);

SELECT create_hypertable('user_count', 'time', if_not_exists => TRUE);

Stored Metrics

Per-Request Data

Each API call records:
  • Timestamp: When the request was made
  • Test Run ID: Unique identifier for the benchmark run
  • Method: RPC method called (e.g., eth_blockNumber)
  • Response Time: Request duration in milliseconds
  • Response Length: Size of response in bytes
  • Success: Whether the request succeeded
  • Exception: Error message if the request failed

Aggregate Data

  • User Count: Number of active simulated users at each point in time
  • Run Metadata: Profile name, target endpoint, test parameters
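Chainbench performs these writes itself; purely as an illustration of the row shape, the fields above map onto a parameterized INSERT against the request_metrics schema. The helper below is hypothetical (the statement and tuple would be passed to a live cursor, e.g. with psycopg2), but the column order mirrors the schema in this document:

```python
from datetime import datetime, timezone

# Illustrative only: mirrors the request_metrics schema shown above.
# Chainbench handles these inserts internally.
INSERT_SQL = (
    "INSERT INTO request_metrics "
    "(time, test_run_id, method, response_time, response_length, exception, success) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s)"
)

def metric_row(test_run_id, method, response_time_ms, response_length,
               exception=None, success=True, when=None):
    """Build the parameter tuple for one request_metrics row."""
    when = when or datetime.now(timezone.utc)
    return (when, test_run_id, method, response_time_ms,
            response_length, exception, success)

row = metric_row("eth-mainnet-baseline-001", "eth_blockNumber", 42.7, 512)
# On a live connection: cursor.execute(INSERT_SQL, row)
```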

Querying Metrics

Basic Queries

Get average response time by method:
SELECT 
  method,
  AVG(response_time) as avg_response_ms,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY response_time) as p95_response_ms,
  COUNT(*) as total_requests
FROM request_metrics
WHERE test_run_id = 'your-run-id'
GROUP BY method
ORDER BY avg_response_ms DESC;

Time-Series Analysis

Analyze performance over time windows:
SELECT 
  time_bucket('1 minute', time) as bucket,
  method,
  AVG(response_time) as avg_response_ms,
  COUNT(*) as requests_per_minute
FROM request_metrics
WHERE test_run_id = 'your-run-id'
GROUP BY bucket, method
ORDER BY bucket;

Error Analysis

Find requests with errors:
SELECT 
  method,
  exception,
  COUNT(*) as error_count
FROM request_metrics
WHERE test_run_id = 'your-run-id'
  AND success = false
GROUP BY method, exception
ORDER BY error_count DESC;

Advanced Usage

Multiple Test Runs

Compare performance across different configurations:
SELECT 
  test_run_id,
  AVG(response_time) as avg_response_ms,
  COUNT(*) as total_requests,
  SUM(CASE WHEN success THEN 1 ELSE 0 END)::FLOAT / COUNT(*) * 100 as success_rate
FROM request_metrics
WHERE time > NOW() - INTERVAL '7 days'
GROUP BY test_run_id
ORDER BY avg_response_ms;

Continuous Aggregates

Create materialized views for faster queries:
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT 
  time_bucket('1 hour', time) as bucket,
  test_run_id,
  method,
  AVG(response_time) as avg_response_time,
  COUNT(*) as request_count
FROM request_metrics
GROUP BY bucket, test_run_id, method;

SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset => INTERVAL '3 hours',
  end_offset => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');

Integration with Grafana

TimescaleDB is PostgreSQL-compatible, making it easy to connect with Grafana for real-time dashboards.

Setup Steps

  1. Add Data Source in Grafana:
    • Type: PostgreSQL
    • Host: Your TimescaleDB host
    • Database: chainbench
    • TLS Mode: Configure based on your setup
  2. Create Dashboard with panels for:
    • Response time trends (line chart)
    • Request rate (bar chart)
    • Error rate (stat panel)
    • Concurrent users (area chart)
    • P95/P99 latencies (gauge)
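If you provision Grafana from files rather than the UI, the data source from step 1 can be declared in a provisioning YAML. This is a sketch of Grafana's standard PostgreSQL data source provisioning format; the file name, host, and credential handling are assumptions to adapt to your setup:

```yaml
# Example: provisioning/datasources/chainbench.yaml (hypothetical path)
apiVersion: 1
datasources:
  - name: Chainbench TimescaleDB
    type: postgres
    url: your-timescaledb-host:5432
    database: chainbench
    user: postgres
    secureJsonData:
      password: $PG_PASSWORD   # interpolated from the environment
    jsonData:
      sslmode: require
      timescaledb: true        # enables TimescaleDB-aware query features
```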

Sample Panel Query

SELECT
  time_bucket('30 seconds', time) AS time,
  method as metric,
  AVG(response_time) as value
FROM request_metrics
WHERE
  $__timeFilter(time)
  AND test_run_id = '$test_run_id'
GROUP BY time_bucket('30 seconds', time), method
ORDER BY time;

Performance Considerations

Database Sizing

Estimate storage requirements:
  • Request Metrics: ~100-200 bytes per record
  • Example: 1000 req/s test for 1 hour = 3.6M records ≈ 360-720 MB
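The arithmetic behind that example can be sketched as a small estimator (the per-record byte range is the rough figure quoted above, not a measured value):

```python
def storage_estimate_mb(requests_per_second, duration_seconds,
                        bytes_per_record=(100, 200)):
    """Rough uncompressed storage estimate for request_metrics rows."""
    records = requests_per_second * duration_seconds
    low, high = bytes_per_record
    return records, records * low / 1e6, records * high / 1e6

records, low_mb, high_mb = storage_estimate_mb(1000, 3600)
print(f"{records:,} records, ~{low_mb:.0f}-{high_mb:.0f} MB")
# 3,600,000 records, ~360-720 MB
```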

Retention Policies

Implement data retention to manage growth:
SELECT add_retention_policy('request_metrics', INTERVAL '90 days');
Older data is automatically dropped after the retention period.

Compression

Enable compression for older data. The hypertable must first be marked as compressible before a policy can be added:
ALTER TABLE request_metrics SET (timescaledb.compress);

SELECT add_compression_policy('request_metrics', INTERVAL '7 days');
Compression can reduce storage by 10-20x.

Security Best Practices

Never expose database credentials in command history or scripts. Use environment variables instead.

Using Environment Variables

export PG_HOST=localhost
export PG_PORT=5432
export PG_USER=postgres
export PG_PASSWORD=your_password

chainbench start \
  --profile ethereum.general \
  --target https://your-node.com \
  --timescale \
  --pg-host "$PG_HOST" \
  --pg-port "$PG_PORT" \
  --pg-username "$PG_USER" \
  --pg-password "$PG_PASSWORD" \
  --headless --autoquit

Network Security

  • Use TLS/SSL for database connections in production
  • Restrict database access to known IPs using firewall rules
  • Use strong passwords and rotate them regularly
  • Consider using connection pooling for high-throughput tests
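For tooling that connects to the same database (dashboards, analysis scripts), TLS can be enforced by building the connection string from the environment variables shown earlier. This is a sketch, not part of Chainbench; the variable names follow the export example above, and the demo values at the bottom are placeholders:

```python
import os

def pg_dsn(sslmode="require"):
    """Build a libpq-style connection string from environment
    variables, enforcing TLS by default."""
    return (
        f"host={os.environ['PG_HOST']} "
        f"port={os.environ.get('PG_PORT', '5432')} "
        f"user={os.environ['PG_USER']} "
        f"password={os.environ['PG_PASSWORD']} "
        f"dbname=chainbench sslmode={sslmode}"
    )

# Placeholder values for demonstration only
os.environ.update(PG_HOST="db.internal", PG_USER="postgres", PG_PASSWORD="secret")
print(pg_dsn())
```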

Troubleshooting

Connection Failures

If Chainbench can’t connect to the database:
  1. Verify PostgreSQL is running:
    docker ps | grep timescaledb
    # or
    sudo systemctl status postgresql
    
  2. Check connectivity:
    psql -h localhost -U postgres -d chainbench -c "SELECT 1;"
    
  3. Verify TimescaleDB extension:
    SELECT * FROM pg_extension WHERE extname = 'timescaledb';
    

Missing Data

If metrics aren’t appearing in the database:
  • Check that --timescale flag is set
  • Verify all required parameters are provided
  • Look for errors in Chainbench output
  • Check database logs for permission issues

Performance Issues

If the database becomes slow:
  • Create indexes on frequently queried columns
  • Enable compression for older data
  • Use continuous aggregates for complex queries
  • Tune PostgreSQL/TimescaleDB configuration parameters (for example with timescaledb-tune)
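Most queries in this guide filter on test_run_id and method, so an index covering those columns is a natural starting point. The index name below is illustrative:

```sql
-- Example index for per-run, per-method queries (name is illustrative)
CREATE INDEX IF NOT EXISTS idx_metrics_run_method
  ON request_metrics (test_run_id, method, time DESC);
```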

Example: Complete Workflow

# 1. Start TimescaleDB
docker run -d --name timescaledb \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=benchpass \
  timescale/timescaledb:latest-pg16

# 2. Run benchmark with TimescaleDB export
chainbench start \
  --profile ethereum.general \
  --target https://eth-mainnet.example.com \
  --users 100 \
  --test-time 30m \
  --timescale \
  --pg-host localhost \
  --pg-port 5432 \
  --pg-username postgres \
  --pg-password benchpass \
  --run-id eth-mainnet-baseline-001 \
  --headless --autoquit

# 3. Query results
psql -h localhost -U postgres -d chainbench -c "
  SELECT method, AVG(response_time) as avg_ms 
  FROM request_metrics 
  WHERE test_run_id = 'eth-mainnet-baseline-001' 
  GROUP BY method;
"
