TrailBase is designed for sub-millisecond latencies, eliminating the need for dedicated caches and reducing infrastructure complexity. This page presents performance benchmarks demonstrating TrailBase’s speed and efficiency.

Overview

TrailBase’s performance advantage comes from:

Embedded Database

SQLite runs in-process with zero network latency

Rust Performance

Native code with minimal overhead and efficient memory usage

Single Process

No RPC calls or service mesh complexity

Optimized SQLite

Custom extensions and connection pooling

Benchmark Methodology

TrailBase benchmarks are conducted using:
  • Hardware: Modern server-grade hardware
  • Workload: Realistic CRUD operations and queries
  • Concurrency: Multiple concurrent connections
  • Measurements: p50, p90, p95, and p99 latencies
  • Tools: Industry-standard load testing tools
Full benchmark code and methodology are available in the trailbase-benchmark repository.
The benchmark repository is kept separate from the main repository due to external dependencies.
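The percentile metrics listed above (p50/p90/p95/p99) can be computed from raw latency samples. A minimal sketch in Python; the helper name and synthetic sample data are illustrative, not part of the TrailBase benchmark harness:

```python
import statistics

def latency_percentiles(samples_ms):
    """Return p50/p90/p95/p99 from a list of latency samples (ms)."""
    # quantiles(n=100) yields the 1st..99th percentile cut points.
    q = statistics.quantiles(sorted(samples_ms), n=100)
    return {"p50": q[49], "p90": q[89], "p95": q[94], "p99": q[98]}

# Example: 1000 synthetic samples spread between 0.1 ms and 10 ms.
samples = [0.1 + 9.9 * i / 999 for i in range(1000)]
print(latency_percentiles(samples))
```

Reporting tail percentiles rather than averages is what surfaces the occasional slow request that a mean would hide.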

Latency Results

TrailBase consistently delivers sub-millisecond latencies for common operations:

Record API Operations

Typical latencies for Record API CRUD operations:
| Operation          | p50    | p90  | p95  | p99   |
| ------------------ | ------ | ---- | ---- | ----- |
| Read single record | <0.5ms | <1ms | <2ms | <5ms  |
| List records (10)  | <0.8ms | <2ms | <3ms | <8ms  |
| Create record      | <1ms   | <3ms | <5ms | <10ms |
| Update record      | <1ms   | <3ms | <5ms | <10ms |
| Delete record      | <0.8ms | <2ms | <4ms | <8ms  |
Actual performance depends on schema complexity, access rules, and hardware. These are representative figures from benchmark tests.

Realtime Subscriptions

| Metric                          | Value  |
| ------------------------------- | ------ |
| Connection establishment        | <100ms |
| Change notification latency     | <5ms   |
| Concurrent subscriptions        | 1000s  |
| Filtered subscription overhead  | <1ms   |

Authentication Operations

| Operation        | p50    | p99    |
| ---------------- | ------ | ------ |
| Login (password) | <5ms   | <20ms  |
| Token validation | <0.5ms | <2ms   |
| Token refresh    | <3ms   | <15ms  |
| OAuth callback   | <50ms  | <200ms |
Password-based operations include cryptographic hashing (Argon2/bcrypt) which is intentionally computationally expensive for security.

Throughput

TrailBase can handle significant throughput on modest hardware:

Read-Heavy Workloads

  • Concurrent reads: Excellent scalability with multiple readers
  • Throughput: 10,000+ reads/second on modern hardware
  • No read contention: SQLite allows unlimited concurrent readers

Write Workloads

  • Write throughput: 1,000-5,000 writes/second (single DB)
  • Multi-DB: Scales linearly with number of independent databases
  • Batch operations: Transactional batches improve write efficiency
SQLite serializes write transactions. For very high write throughput, consider multi-DB sharding or a distributed database.
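Because SQLite serializes write transactions, grouping many inserts into one transaction is the single biggest write-throughput lever. A minimal sketch using Python's stdlib `sqlite3`; the `events` table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(f"event-{i}",) for i in range(1000)]

# One transaction for the whole batch: a single commit (and fsync on a
# file-backed DB) instead of 1000, which is where most of the win comes from.
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 1000
```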

Resource Usage

TrailBase is extremely resource-efficient:

Memory

  • Idle: ~50MB base memory footprint
  • Under load: Scales with connection count and cache size
  • Configurable: SQLite cache size can be tuned

CPU

  • Minimal overhead: Direct function calls, no RPC serialization
  • Efficient: Rust’s zero-cost abstractions
  • Scalable: Utilizes multiple cores for concurrent operations

Disk I/O

  • SQLite optimizations: Write-ahead logging (WAL) for concurrency
  • Efficient storage: Compact database format
  • Minimal overhead: No separate cache layer needed
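The WAL mode mentioned above is a standard SQLite pragma; a minimal sketch of enabling it with Python's stdlib `sqlite3` (WAL only applies to file-backed databases, so a temporary file is used):

```python
import os
import sqlite3
import tempfile

# WAL requires a file-backed database (it is a no-op for :memory:).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Enable write-ahead logging: readers no longer block on the writer.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```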

Comparison with Other Backends

TrailBase vs Traditional Backend Stack

Comparing a typical three-tier architecture (App Server + Database + Cache):
| Metric                    | TrailBase     | Traditional Stack        |
| ------------------------- | ------------- | ------------------------ |
| Network hops              | 0 (embedded)  | 2-3 (app→db, app→cache)  |
| Typical p50 latency       | <0.5ms        | 10-50ms                  |
| Infrastructure complexity | Single binary | Multiple services        |
| Cache invalidation        | Not needed    | Complex logic            |
| Deployment                | Copy one file | Orchestration required   |
Traditional stacks include network latency between components. Even with co-located services, network round-trips add 1-10ms overhead per hop.

TrailBase vs Firebase

| Feature        | TrailBase          | Firebase                    |
| -------------- | ------------------ | --------------------------- |
| Latency        | Sub-millisecond    | 50-200ms (network dependent) |
| Self-hosted    | ✅ Yes             | ❌ Cloud-only               |
| Offline-first  | SQLite replication | Firestore SDK               |
| SQL support    | ✅ Full SQL        | ❌ NoSQL only               |
| Cost           | Free (self-hosted) | Pay per usage               |
| Data ownership | ✅ Full control    | Google infrastructure       |

TrailBase vs Supabase

| Feature        | TrailBase           | Supabase                   |
| -------------- | ------------------- | -------------------------- |
| Database       | SQLite (embedded)   | PostgreSQL (network)       |
| Latency        | Sub-millisecond     | 5-50ms (network dependent) |
| Deployment     | Single binary       | Multiple services          |
| Realtime       | Built-in            | Requires separate service  |
| Resource usage | ~50MB baseline      | ~500MB+ baseline           |
| Scaling model  | Vertical + multi-DB | Horizontal (PostgreSQL)    |

TrailBase vs PocketBase

| Feature       | TrailBase         | PocketBase      |
| ------------- | ----------------- | --------------- |
| Language      | Rust              | Go              |
| Extensibility | WASM (JS/TS/Rust) | Go plugins      |
| Type safety   | Generated types   | Generated types |
| Geospatial    | ✅ First-class    | Limited         |
| Multi-DB      | ✅ Yes            | Single DB       |
| Admin UI      | Comprehensive     | Comprehensive   |
Both are single-binary SQLite-based backends with excellent performance. TrailBase offers WASM extensibility and geospatial support, while PocketBase has a simpler Go-based plugin system.
For detailed feature comparisons, see our comparisons page.

Real-World Performance

TrailBase’s sub-millisecond latencies enable unique capabilities:

No Cache Needed

Traditional stack:

Client → Load Balancer → App Server → Cache (Redis) → Database (PostgreSQL, on cache miss)

Typical latency: 20-100ms with cache hit, 100-500ms on cache miss

TrailBase:

Client → TrailBase (single process)

Typical latency: <1ms, no cache misses, no invalidation logic

Realtime Applications

Sub-millisecond latencies make TrailBase ideal for:
  • Collaborative apps: Google Docs-style editing
  • Real-time dashboards: Live metrics and monitoring
  • Multiplayer games: Low-latency state synchronization
  • Chat applications: Instant message delivery
  • Live location tracking: Geospatial queries with instant updates

Edge Deployment

Small footprint enables edge deployment:
  • Deploy close to users for minimal network latency
  • Run on resource-constrained edge nodes
  • Regional databases for data sovereignty
  • Mobile/desktop embedded applications

WASM Performance

TrailBase’s WASM runtime introduces minimal overhead:
| Metric              | Native (control) | WASM Overhead             |
| ------------------- | ---------------- | ------------------------- |
| Simple endpoint     | ~0.3ms           | +0.1-0.2ms                |
| Database query      | ~0.5ms           | +0.1-0.2ms                |
| Complex computation | Varies           | 1.5-2x (depends on workload) |
WASM overhead is minimal for I/O-bound operations. CPU-intensive computations may see 1.5-2x slowdown compared to native code.

WASM Language Performance

  • Rust guest code: Near-native performance (~5% overhead)
  • JavaScript/TypeScript: Slower due to SpiderMonkey interpreter bundle
  • Component size: JS/TS components are larger (~10MB) vs Rust (~1MB)

Optimization Tips

Maximize TrailBase performance:

Index Your Queries

Create SQLite indexes for frequently queried columns

Use Multi-DB for Writes

Distribute high-write workloads across multiple databases
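One way to picture multi-DB write distribution: route each tenant (or other shard key) to one of N independent SQLite databases, so writers on different shards never contend. A hypothetical sketch using stdlib `sqlite3`; the table, shard count, and routing function are illustrative, not TrailBase's actual sharding mechanism:

```python
import sqlite3
import zlib

NUM_SHARDS = 4
shards = [sqlite3.connect(":memory:") for _ in range(NUM_SHARDS)]
for db in shards:
    db.execute("CREATE TABLE records (tenant TEXT, payload TEXT)")

def shard_for(tenant: str) -> sqlite3.Connection:
    # Stable hash -> shard index; zlib.crc32 is deterministic across runs,
    # so a tenant always lands on the same database.
    return shards[zlib.crc32(tenant.encode()) % NUM_SHARDS]

shard_for("acme").execute(
    "INSERT INTO records VALUES (?, ?)", ("acme", "hello"))
row = shard_for("acme").execute(
    "SELECT payload FROM records WHERE tenant = ?", ("acme",)).fetchone()
print(row[0])  # hello
```

Since each database has its own write lock, aggregate write throughput scales with the number of shards, at the cost of losing cross-shard transactions.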

Optimize Access Rules

Simple access rules reduce query overhead

Batch Operations

Use transactional batches for multiple writes

Tune SQLite Cache

Adjust cache_size pragma for your workload
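The `cache_size` pragma can be set per connection; a minimal sketch with stdlib `sqlite3` (the 64 MiB figure is an arbitrary example, not a TrailBase recommendation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A negative cache_size is interpreted as KiB, so -65536 requests a
# ~64 MiB page cache (the default is only a few MiB).
conn.execute("PRAGMA cache_size = -65536")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -65536
```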

Profile Your Queries

Use EXPLAIN QUERY PLAN to optimize slow queries
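The indexing and profiling tips combine naturally: `EXPLAIN QUERY PLAN` shows whether a query hits an index or falls back to a full table scan. A sketch with stdlib `sqlite3`; the `users` table and index name are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without this index, the lookup below would be a full table scan.
conn.execute("CREATE INDEX idx_users_email ON users (email)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()
# The plan's detail column names the index when SQLite uses it.
print(plan[0][-1])
```

If the detail column reads `SCAN` instead of `SEARCH ... USING INDEX`, the query is not using an index and is a candidate for optimization.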

Running Benchmarks

To run benchmarks yourself:
  1. Clone the benchmark repository:
    git clone https://github.com/trailbaseio/trailbase-benchmark.git
    cd trailbase-benchmark
    
  2. Follow the README instructions to set up and run tests
  3. Compare results with your specific workload
Contributions and improvements to the benchmarks are welcome!

Continuous Benchmarking

The TrailBase team runs continuous performance regression tests:
  • Automated benchmarks on each release
  • Performance tracked over time
  • Regression detection in CI/CD pipeline
  • Community can submit benchmark improvements

Conclusion

TrailBase’s sub-millisecond latencies are not just theoretical; they are consistent, measurable, and achievable in real-world deployments. The single-process, embedded-database architecture eliminates network overhead and infrastructure complexity while delivering exceptional performance. For most applications, TrailBase’s performance makes caching unnecessary, simplifying the architecture and eliminating entire classes of consistency problems.

See TrailBase in Action

Try the live demo and experience the performance yourself
