Overview
TrailBase’s performance advantage comes from:
- Embedded Database: SQLite runs in-process with zero network latency
- Rust Performance: Native code with minimal overhead and efficient memory usage
- Single Process: No RPC calls or service mesh complexity
- Optimized SQLite: Custom extensions and connection pooling
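The "embedded" point is worth making concrete: an in-process query is an ordinary function call, not a network round trip. A minimal sketch using Python's stdlib `sqlite3` (a stand-in for TrailBase's internal SQLite usage, not its actual API; table and data are hypothetical):

```python
import sqlite3
import time

# An embedded database query is a library call: no socket, no serialization,
# no network hop. The whole round trip stays inside the process.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))

start = time.perf_counter()
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000

print(row[0], f"{elapsed_ms:.3f}ms")
```

On typical hardware this single-row read completes in microseconds; a networked database would pay at least one round trip on top of the same work.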
Benchmark Methodology
TrailBase benchmarks are conducted using:
- Hardware: Modern server-grade hardware
- Workload: Realistic CRUD operations and queries
- Concurrency: Multiple concurrent connections
- Measurements: p50, p90, p95, and p99 latencies
- Tools: Industry-standard load testing tools
The benchmark repository is kept separate from the main repository due to external dependencies.
Latency Results
TrailBase consistently delivers sub-millisecond latencies for common operations.
Record API Operations
Typical latencies for Record API CRUD operations:
| Operation | p50 | p90 | p95 | p99 |
|---|---|---|---|---|
| Read single record | <0.5ms | <1ms | <2ms | <5ms |
| List records (10) | <0.8ms | <2ms | <3ms | <8ms |
| Create record | <1ms | <3ms | <5ms | <10ms |
| Update record | <1ms | <3ms | <5ms | <10ms |
| Delete record | <0.8ms | <2ms | <4ms | <8ms |
Actual performance depends on schema complexity, access rules, and hardware. These are representative figures from benchmark tests.
Realtime Subscriptions
| Metric | Value |
|---|---|
| Connection establishment | <100ms |
| Change notification latency | <5ms |
| Concurrent subscriptions | 1000s |
| Filtered subscription overhead | <1ms |
Authentication Operations
| Operation | p50 | p99 |
|---|---|---|
| Login (password) | <5ms | <20ms |
| Token validation | <0.5ms | <2ms |
| Token refresh | <3ms | <15ms |
| OAuth callback | <50ms | <200ms |
Password-based operations include cryptographic hashing (Argon2/bcrypt) which is intentionally computationally expensive for security.
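The deliberate slowness of password hashing can be demonstrated with a stdlib key-derivation function. PBKDF2 here is only a stand-in for the Argon2/bcrypt mentioned above (those need third-party packages); the principle is the same: cost scales with a tunable work factor, which is why login latency dwarfs token validation.

```python
import hashlib
import os
import time

# Password KDFs are intentionally expensive. PBKDF2 (stdlib) stands in for
# Argon2/bcrypt: the iteration count is the work factor.
password = b"correct horse battery staple"
salt = os.urandom(16)

for iterations in (1_000, 100_000):
    start = time.perf_counter()
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    ms = (time.perf_counter() - start) * 1000
    print(f"{iterations:>7} iterations: {ms:.1f}ms")

# Verification recomputes the hash with the stored salt and compares.
assert hashlib.pbkdf2_hmac("sha256", password, salt, 100_000) == digest
```

This is why cheap token validation (a signature check) is used for every request, while the expensive hash runs only at login.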
Throughput
TrailBase can handle significant throughput on modest hardware:
Read-Heavy Workloads
- Concurrent reads: Excellent scalability with multiple readers
- Throughput: 10,000+ reads/second on modern hardware
- No read contention: In WAL mode, SQLite readers never block each other
Write Workloads
- Write throughput: 1,000-5,000 writes/second (single DB)
- Multi-DB: Scales linearly with number of independent databases
- Batch operations: Transactional batches improve write efficiency
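The batch-operations point can be sketched with stdlib `sqlite3`: wrapping many inserts in one transaction pays the commit cost once instead of per statement. Table and row data are hypothetical:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(f"event-{i}",) for i in range(10_000)]

# One transaction for the whole batch: a single commit instead of one
# per INSERT. On a file-backed database this avoids thousands of fsyncs.
start = time.perf_counter()
with conn:  # opens a transaction, commits on successful exit
    conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)
batch_ms = (time.perf_counter() - start) * 1000

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(f"inserted {count} rows in {batch_ms:.1f}ms")
```

The same pattern applies through any API that exposes transactional batches: group related writes rather than committing them individually.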
Resource Usage
TrailBase is extremely resource-efficient:
Memory
- Idle: ~50MB base memory footprint
- Under load: Scales with connection count and cache size
- Configurable: SQLite cache size can be tuned
CPU
- Minimal overhead: Direct function calls, no RPC serialization
- Efficient: Rust’s zero-cost abstractions
- Scalable: Utilizes multiple cores for concurrent operations
Disk I/O
- SQLite optimizations: Write-ahead logging (WAL) for concurrency
- Efficient storage: Compact database format
- Minimal overhead: No separate cache layer needed
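Write-ahead logging is a one-pragma switch in SQLite. A minimal sketch (WAL requires a file-backed database, so this uses a temp file; the path is arbitrary):

```python
import os
import sqlite3
import tempfile

# WAL needs a real file: in-memory databases report journal_mode "memory".
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

# In WAL mode, writes append to a separate -wal file and readers keep
# seeing a consistent snapshot, so readers and the writer don't block
# each other.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
```

This pragma is what enables the "no read contention" behavior described under Throughput.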
Comparison with Other Backends
TrailBase vs Traditional Backend Stack
Comparing a typical three-tier architecture (App Server + Database + Cache):
| Metric | TrailBase | Traditional Stack |
|---|---|---|
| Network hops | 0 (embedded) | 2-3 (app→db, app→cache) |
| Typical p50 latency | <0.5ms | 10-50ms |
| Infrastructure complexity | Single binary | Multiple services |
| Cache invalidation | Not needed | Complex logic |
| Deployment | Copy one file | Orchestration required |
Traditional stacks include network latency between components. Even with co-located services, network round-trips add 1-10ms overhead per hop.
TrailBase vs Firebase
| Feature | TrailBase | Firebase |
|---|---|---|
| Latency | Sub-millisecond | 50-200ms (network dependent) |
| Self-hosted | ✅ Yes | ❌ Cloud-only |
| Offline-first | SQLite replication | Firestore SDK |
| SQL support | ✅ Full SQL | ❌ NoSQL only |
| Cost | Free (self-hosted) | Pay per usage |
| Data ownership | ✅ Full control | Google infrastructure |
TrailBase vs Supabase
| Feature | TrailBase | Supabase |
|---|---|---|
| Database | SQLite (embedded) | PostgreSQL (network) |
| Latency | Sub-millisecond | 5-50ms (network dependent) |
| Deployment | Single binary | Multiple services |
| Realtime | Built-in | Requires separate service |
| Resource usage | ~50MB baseline | ~500MB+ baseline |
| Scaling model | Vertical + multi-DB | Horizontal (PostgreSQL) |
TrailBase vs PocketBase
| Feature | TrailBase | PocketBase |
|---|---|---|
| Language | Rust | Go |
| Extensibility | WASM (JS/TS/Rust) | Go plugins |
| Type safety | Generated types | Generated types |
| Geospatial | ✅ First-class | Limited |
| Multi-DB | ✅ Yes | Single DB |
| Admin UI | Comprehensive | Comprehensive |
For detailed feature comparisons, see our comparisons page.
Real-World Performance
TrailBase’s sub-millisecond latencies enable unique capabilities:
No Cache Needed
In a traditional architecture, a cache layer sits between the application server and the database to mask network round-trips, at the cost of invalidation logic and consistency bugs. In the TrailBase architecture, the embedded database already answers in sub-millisecond time, so the cache layer disappears entirely.
Realtime Applications
Sub-millisecond latencies make TrailBase ideal for:
- Collaborative apps: Google Docs-style editing
- Real-time dashboards: Live metrics and monitoring
- Multiplayer games: Low-latency state synchronization
- Chat applications: Instant message delivery
- Live location tracking: Geospatial queries with instant updates
Edge Deployment
Small footprint enables edge deployment:
- Deploy close to users for minimal network latency
- Run on resource-constrained edge nodes
- Regional databases for data sovereignty
- Mobile/desktop embedded applications
WASM Performance
TrailBase’s WASM runtime introduces minimal overhead:
| Metric | Native (control) | WASM Overhead |
|---|---|---|
| Simple endpoint | ~0.3ms | +0.1-0.2ms |
| Database query | ~0.5ms | +0.1-0.2ms |
| Complex computation | Varies | 1.5-2x (depends on workload) |
WASM overhead is minimal for I/O-bound operations. CPU-intensive computations may see 1.5-2x slowdown compared to native code.
WASM Language Performance
- Rust guest code: Near-native performance (~5% overhead)
- JavaScript/TypeScript: Slower due to SpiderMonkey interpreter bundle
- Component size: JS/TS components are larger (~10MB) vs Rust (~1MB)
Optimization Tips
Maximize TrailBase performance:
- Index Your Queries: Create SQLite indexes for frequently queried columns
- Use Multi-DB for Writes: Distribute high-write workloads across multiple databases
- Optimize Access Rules: Simple access rules reduce query overhead
- Batch Operations: Use transactional batches for multiple writes
- Tune SQLite Cache: Adjust the cache_size pragma for your workload
- Profile Your Queries: Use EXPLAIN QUERY PLAN to optimize slow queries
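The indexing and profiling tips combine naturally: run `EXPLAIN QUERY PLAN` before and after adding an index and watch the plan change from a table scan to an index search. A stdlib `sqlite3` sketch with a made-up `orders` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index search

print(before)
print(after)
```

The `before` plan reports a `SCAN` over the table; `after`, it reports a `SEARCH` using `idx_orders_customer`. Any query that still scans a large table under load is a candidate for an index.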
Running Benchmarks
To run benchmarks yourself:
- Clone the benchmark repository
- Follow the README instructions to set up and run tests
- Compare results with your specific workload
Benchmarks are kept in a separate repository due to external dependencies. Contributions and improvements to benchmarks are welcome!
Continuous Benchmarking
The TrailBase team runs continuous performance regression tests:
- Automated benchmarks on each release
- Performance tracked over time
- Regression detection in CI/CD pipeline
- Community can submit benchmark improvements
Conclusion
TrailBase’s sub-millisecond latencies are not just theoretical: they are consistent, measurable, and achievable in real-world deployments. The single-process, embedded-database architecture eliminates network overhead and infrastructure complexity while delivering exceptional performance. For most applications, TrailBase’s performance makes caching unnecessary, simplifying the architecture and eliminating entire classes of consistency problems.
See TrailBase in Action
Try the live demo and experience the performance yourself.