Overview
Batch processors collect telemetry data in memory and export it in batches, reducing network overhead and improving performance. OpenTelemetry Rust provides batch processors for spans and logs.
Batch processing is recommended for production deployments. Use simple processors only for development and debugging.
Processor types
Simple processors
Characteristics:
Export immediately on span/log end
Synchronous or async depending on exporter
No batching or buffering
Suitable for development only
Use cases:
Local development
Debugging and testing
Low-traffic applications
When immediate export is required
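For a concrete picture, here is a minimal sketch of wiring a simple processor to a console exporter from the opentelemetry-stdout crate (exact module paths vary between SDK 0.x releases, so treat this as illustrative):

```rust
use opentelemetry_sdk::trace::{SimpleSpanProcessor, TracerProvider};
use opentelemetry_stdout::SpanExporter;

fn main() {
    // Each span is exported synchronously the moment it ends:
    // no queue, no batching, no background task.
    let processor = SimpleSpanProcessor::new(Box::new(SpanExporter::default()));

    let _provider = TracerProvider::builder()
        .with_span_processor(processor)
        .build();
}
```

Because every span end blocks on the export call, this setup adds per-span latency that is acceptable in development but not in production hot paths.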
Batch processors
Characteristics:
Buffer telemetry in memory
Export in configurable batches
Async export with runtime support
Automatic retry and timeout handling
Graceful shutdown support
Use cases:
Production deployments
High-throughput applications
Network-efficient export
When export latency is acceptable
BatchSpanProcessor
Basic usage
use opentelemetry_sdk::{
    trace::{TracerProvider, BatchSpanProcessor},
    runtime::Tokio,
};
use opentelemetry_otlp::SpanExporter;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let exporter = SpanExporter::builder()
        .with_http()
        .build()?;

    let batch_processor = BatchSpanProcessor::builder(exporter, Tokio).build();

    let provider = TracerProvider::builder()
        .with_span_processor(batch_processor)
        .build();

    Ok(())
}
Configuration
Customize batch behavior with BatchConfig:
use opentelemetry_sdk::{
    trace::{BatchSpanProcessor, BatchConfig},
    runtime::Tokio,
};
use std::time::Duration;

let batch_config = BatchConfig::default()
    .with_max_queue_size(4096)                          // Max spans in queue
    .with_max_export_batch_size(512)                    // Max spans per export
    .with_scheduled_delay(Duration::from_secs(5))       // Export interval
    .with_max_export_timeout(Duration::from_secs(30));  // Export timeout

let batch_processor = BatchSpanProcessor::builder(exporter, Tokio)
    .with_batch_config(batch_config)
    .build();
BatchConfig options
| Option | Default | Environment variable | Description |
|---|---|---|---|
| max_queue_size | 2048 | OTEL_BSP_MAX_QUEUE_SIZE | Maximum spans held in the queue |
| max_export_batch_size | 512 | OTEL_BSP_MAX_EXPORT_BATCH_SIZE | Spans per export batch |
| scheduled_delay | 5s | OTEL_BSP_SCHEDULE_DELAY | Time between exports |
| max_export_timeout | 30s | OTEL_BSP_EXPORT_TIMEOUT | Timeout per export |
Environment variable configuration
export OTEL_BSP_MAX_QUEUE_SIZE=4096
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=1024
export OTEL_BSP_SCHEDULE_DELAY=10000   # milliseconds
export OTEL_BSP_EXPORT_TIMEOUT=60000   # milliseconds
BatchLogProcessor
Basic usage
use opentelemetry_sdk::{
    logs::{LoggerProvider, BatchLogProcessor},
    runtime::Tokio,
};
use opentelemetry_otlp::LogExporter;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let exporter = LogExporter::builder()
        .with_http()
        .build()?;

    let batch_processor = BatchLogProcessor::builder(exporter, Tokio).build();

    let provider = LoggerProvider::builder()
        .with_log_processor(batch_processor)
        .build();

    Ok(())
}
Configuration
use opentelemetry_sdk::{
    logs::{BatchLogProcessor, BatchConfig},
    runtime::Tokio,
};
use std::time::Duration;

let batch_config = BatchConfig::default()
    .with_max_queue_size(2048)
    .with_max_export_batch_size(512)
    .with_scheduled_delay(Duration::from_secs(5))
    .with_max_export_timeout(Duration::from_secs(30));

let batch_processor = BatchLogProcessor::builder(exporter, Tokio)
    .with_batch_config(batch_config)
    .build();
Environment variables
export OTEL_BLRP_MAX_QUEUE_SIZE=2048
export OTEL_BLRP_MAX_EXPORT_BATCH_SIZE=512
export OTEL_BLRP_SCHEDULE_DELAY=1000   # milliseconds
export OTEL_BLRP_EXPORT_TIMEOUT=30000  # milliseconds
Runtime support
Batch processors require an async runtime:
Tokio runtime
use opentelemetry_sdk::runtime::Tokio;

let processor = BatchSpanProcessor::builder(exporter, Tokio).build();
Tokio current-thread runtime
use opentelemetry_sdk::runtime::TokioCurrentThread;

let processor = BatchSpanProcessor::builder(exporter, TokioCurrentThread).build();
Shutdown and force flush
Graceful shutdown
Always shut down providers to ensure all telemetry is exported:
use opentelemetry::global;

// At application shutdown
global::shutdown_tracer_provider();
This will:
Stop accepting new spans
Flush remaining spans in queue
Wait for exports to complete
Clean up resources
Force flush
Manually trigger export before timeout:
use opentelemetry_sdk::trace::TracerProvider;

let provider = TracerProvider::builder()
    .with_span_processor(batch_processor)
    .build();

// Manually flush
provider.force_flush();
Performance tuning
High throughput
For applications generating many spans:
let batch_config = BatchConfig::default()
    .with_max_queue_size(8192)                      // Larger queue
    .with_max_export_batch_size(2048)               // Larger batches
    .with_scheduled_delay(Duration::from_secs(10)); // Less frequent exports
Low latency
For applications requiring faster export:
let batch_config = BatchConfig::default()
    .with_max_queue_size(1024)                     // Smaller queue
    .with_max_export_batch_size(256)               // Smaller batches
    .with_scheduled_delay(Duration::from_secs(1)); // Frequent exports
Memory constrained
For environments with limited memory:
let batch_config = BatchConfig::default()
    .with_max_queue_size(512)                      // Minimal queue
    .with_max_export_batch_size(128)               // Small batches
    .with_scheduled_delay(Duration::from_secs(2)); // Moderate frequency
Multiple processors
Combine batch and simple processors for different use cases:
use opentelemetry_sdk::{
    trace::{TracerProvider, BatchSpanProcessor, SimpleSpanProcessor},
    runtime::Tokio,
};

// Batch processor for production export
let batch_processor = BatchSpanProcessor::builder(otlp_exporter, Tokio).build();

// Simple processor for debug logging
let debug_processor = SimpleSpanProcessor::new(Box::new(stdout_exporter));

let provider = TracerProvider::builder()
    .with_span_processor(batch_processor)
    .with_span_processor(debug_processor)
    .build();
Monitoring batch processors
Queue overflow
When the queue is full, new spans are dropped:
// Increase queue size if you see dropped spans
let batch_config = BatchConfig :: default ()
. with_max_queue_size ( 8192 );
Monitor your application for dropped spans. If the export rate can’t keep up with the span creation rate, increase max_queue_size or reduce span volume.
Export failures
Configure export timeout to handle slow exporters:
let batch_config = BatchConfig::default()
    .with_max_export_timeout(Duration::from_secs(60)); // Longer timeout
Best practices
Use batch processors in production - They provide the best balance of performance and reliability.
Configure based on load - Tune batch sizes and delays based on your application’s telemetry volume.
Always shut down gracefully - Call shutdown_tracer_provider() to ensure all telemetry is exported before exit.
Monitor export metrics - Watch for dropped spans, export errors, and queue depth to optimize configuration.
Common patterns
Development setup
// Simple processor for immediate debugging
let processor = SimpleSpanProcessor::new(Box::new(stdout_exporter));
Production setup
// Batch processor with tuned configuration
let batch_config = BatchConfig :: default ()
. with_max_queue_size ( 4096 )
. with_scheduled_delay ( Duration :: from_secs ( 5 ));
let processor = BatchSpanProcessor :: builder ( otlp_exporter , Tokio )
. with_batch_config ( batch_config )
. build ();
Testing setup
// In-memory exporter with simple processor
let processor = SimpleSpanProcessor::new(Box::new(in_memory_exporter));
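One way to expand this into a full assertion-based test is sketched below. It assumes the SDK's InMemorySpanExporter, which in recent 0.x releases lives behind the opentelemetry_sdk `testing` feature; module paths and return types have shifted between versions, so adjust to your pinned release:

```rust
use opentelemetry::trace::{Tracer, TracerProvider as _};
use opentelemetry_sdk::{
    testing::trace::InMemorySpanExporter, // assumes the `testing` feature is enabled
    trace::{SimpleSpanProcessor, TracerProvider},
};

#[test]
fn spans_are_recorded() {
    let exporter = InMemorySpanExporter::default();
    let provider = TracerProvider::builder()
        .with_span_processor(SimpleSpanProcessor::new(Box::new(exporter.clone())))
        .build();

    provider.tracer("test").in_span("operation", |_cx| {
        // code under test
    });

    // The simple processor exports synchronously on span end,
    // so the span is visible here without a flush.
    let spans = exporter.get_finished_spans().unwrap();
    assert_eq!(spans.len(), 1);
}
```

The simple processor is what makes this test deterministic: with a batch processor you would have to force-flush before asserting.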
See also:
Span processors: learn about span processor types
Log processors: configure log processors