Installation
Add the dependency to your Cargo.toml:
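A sketch of the dependency section (the version numbers are illustrative; check crates.io for the current releases):

```toml
[dependencies]
opentelemetry = "0.28"
opentelemetry_sdk = "0.28"
opentelemetry-otlp = "0.28"
```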
Feature Flags
The OTLP exporter supports multiple protocols and HTTP clients through feature flags:
- gzip-tonic - gRPC compression with gzip
- zstd-tonic - gRPC compression with zstd
- gzip-http - HTTP compression with gzip
- zstd-http - HTTP compression with zstd
- tls-ring or tls-aws-lc - TLS support for gRPC
- http-json - HTTP with JSON encoding
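For example, a gRPC transport with gzip compression could be selected like this (a sketch; the grpc-tonic feature enables the tonic-based gRPC exporter):

```toml
[dependencies]
opentelemetry-otlp = { version = "0.28", features = ["grpc-tonic", "gzip-tonic"] }
```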
Quick Start
HTTP Binary Protocol
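A minimal sketch, assuming the 0.28+ builder API (earlier versions use a different `new_exporter()` style) and a collector listening on the default HTTP port:

```rust
use opentelemetry::global;
use opentelemetry::trace::Tracer;
use opentelemetry_otlp::{Protocol, SpanExporter, WithExportConfig};
use opentelemetry_sdk::trace::SdkTracerProvider;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build an OTLP exporter that sends protobuf payloads over HTTP.
    let exporter = SpanExporter::builder()
        .with_http()
        .with_protocol(Protocol::HttpBinary)
        .with_endpoint("http://localhost:4318/v1/traces")
        .build()?;

    // Install a tracer provider that ships spans through the exporter.
    let provider = SdkTracerProvider::builder()
        .with_batch_exporter(exporter)
        .build();
    global::set_tracer_provider(provider.clone());

    let tracer = global::tracer("example");
    tracer.in_span("hello", |_cx| {
        // application work happens here
    });

    // Flush any pending spans before exiting.
    provider.shutdown()?;
    Ok(())
}
```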
The HTTP binary protocol is the simplest to get started with.

gRPC Protocol
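A sketch of the gRPC setup, again assuming the 0.28+ builder API; #[tokio::main] supplies the async runtime that tonic requires, and the grpc-tonic feature must be enabled:

```rust
use opentelemetry::global;
use opentelemetry_otlp::{SpanExporter, WithExportConfig};
use opentelemetry_sdk::trace::SdkTracerProvider;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // tonic-based gRPC transport on the default OTLP/gRPC port.
    let exporter = SpanExporter::builder()
        .with_tonic()
        .with_endpoint("http://localhost:4317")
        .build()?;

    let provider = SdkTracerProvider::builder()
        .with_batch_exporter(exporter)
        .build();
    global::set_tracer_provider(provider.clone());

    // ... emit spans here ...

    provider.shutdown()?;
    Ok(())
}
```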
For gRPC, you need a Tokio runtime.

Complete Example with All Signals
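A sketch wiring up all three signals over gRPC, assuming the 0.28+ crate family (where the meter provider builder offers with_periodic_exporter and the logger provider builder offers with_batch_exporter):

```rust
use opentelemetry::global;
use opentelemetry_otlp::{LogExporter, MetricExporter, SpanExporter};
use opentelemetry_sdk::logs::SdkLoggerProvider;
use opentelemetry_sdk::metrics::SdkMeterProvider;
use opentelemetry_sdk::trace::SdkTracerProvider;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One exporter per signal, all speaking OTLP/gRPC to the same collector.
    let tracer_provider = SdkTracerProvider::builder()
        .with_batch_exporter(SpanExporter::builder().with_tonic().build()?)
        .build();
    let meter_provider = SdkMeterProvider::builder()
        .with_periodic_exporter(MetricExporter::builder().with_tonic().build()?)
        .build();
    let logger_provider = SdkLoggerProvider::builder()
        .with_batch_exporter(LogExporter::builder().with_tonic().build()?)
        .build();

    global::set_tracer_provider(tracer_provider.clone());
    global::set_meter_provider(meter_provider.clone());

    // ... application code producing traces, metrics, and logs ...

    // Shut down explicitly so buffered data is flushed.
    tracer_provider.shutdown()?;
    meter_provider.shutdown()?;
    logger_provider.shutdown()?;
    Ok(())
}
```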
Here’s a complete example that exports traces, metrics, and logs.

Configuration
Custom Endpoint
By default, the exporter connects to http://localhost:4318 for HTTP and http://localhost:4317 for gRPC. You can customize this:
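A sketch using with_endpoint (the hostname otel.example.com is a placeholder); note that for HTTP the per-signal path is part of the endpoint, while for gRPC the endpoint is just the base URI:

```rust
use opentelemetry_otlp::{SpanExporter, WithExportConfig};

fn build_exporters() -> Result<(), Box<dyn std::error::Error>> {
    // HTTP: include the signal path (/v1/traces).
    let _http = SpanExporter::builder()
        .with_http()
        .with_endpoint("https://otel.example.com:4318/v1/traces")
        .build()?;

    // gRPC: base URI only, no per-signal path.
    let _grpc = SpanExporter::builder()
        .with_tonic()
        .with_endpoint("https://otel.example.com:4317")
        .build()?;
    Ok(())
}
```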
Environment Variables
The exporter respects standard OpenTelemetry environment variables:
- OTEL_EXPORTER_OTLP_ENDPOINT - Base endpoint for all signals
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT - Endpoint for traces
- OTEL_EXPORTER_OTLP_METRICS_ENDPOINT - Endpoint for metrics
- OTEL_EXPORTER_OTLP_LOGS_ENDPOINT - Endpoint for logs
- OTEL_EXPORTER_OTLP_HEADERS - Custom headers
- OTEL_EXPORTER_OTLP_TIMEOUT - Export timeout
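For example (the values are placeholders; per the OTLP specification the timeout is given in milliseconds):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="http://collector.internal:4318"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=secret-token"
export OTEL_EXPORTER_OTLP_TIMEOUT="10000"
```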
Headers and Authentication
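A sketch using the HTTP builder's with_headers (the header name and token are hypothetical; the gRPC builder uses with_metadata instead):

```rust
use std::collections::HashMap;

use opentelemetry_otlp::{SpanExporter, WithHttpConfig};

fn build() -> Result<SpanExporter, Box<dyn std::error::Error>> {
    let mut headers = HashMap::new();
    // Hypothetical API-key header expected by the backend.
    headers.insert("x-api-key".to_string(), "secret-token".to_string());

    Ok(SpanExporter::builder()
        .with_http()
        .with_headers(headers)
        .build()?)
}
```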
Add custom headers for authentication.

Timeout Configuration
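A sketch using with_timeout from the WithExportConfig trait (the 5-second value is arbitrary; the OTLP default is 10 seconds):

```rust
use std::time::Duration;

use opentelemetry_otlp::{SpanExporter, WithExportConfig};

fn build() -> Result<SpanExporter, Box<dyn std::error::Error>> {
    // Fail exports that take longer than 5 seconds.
    Ok(SpanExporter::builder()
        .with_http()
        .with_timeout(Duration::from_secs(5))
        .build()?)
}
```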
Compression
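A gRPC sketch using with_compression; this assumes the gzip-tonic feature is enabled (use Compression::Zstd with zstd-tonic instead, if preferred):

```rust
use opentelemetry_otlp::{Compression, SpanExporter, WithTonicConfig};

fn build() -> Result<SpanExporter, Box<dyn std::error::Error>> {
    // Compress gRPC payloads with gzip; requires the "gzip-tonic" feature.
    Ok(SpanExporter::builder()
        .with_tonic()
        .with_compression(Compression::Gzip)
        .build()?)
}
```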
Enable compression to reduce network bandwidth; for gRPC this requires the gzip-tonic or zstd-tonic feature flag.

Integration with Backends
OpenTelemetry Collector
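A sketch using the official otel/opentelemetry-collector image, mounting a local collector config (the filename otel-collector-config.yaml is a placeholder for your own config):

```shell
docker run --rm \
  -p 4317:4317 -p 4318:4318 \
  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol/config.yaml" \
  otel/opentelemetry-collector:latest
```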
Run the collector with Docker; it listens on 4317 (gRPC) and 4318 (HTTP) by default.

Jaeger
Jaeger natively supports OTLP. Once it is running, the Jaeger UI is available at http://localhost:16686.
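A sketch using the jaegertracing/all-in-one image; recent versions enable the OTLP receivers by default (older releases need COLLECTOR_OTLP_ENABLED=true):

```shell
docker run --rm \
  -p 16686:16686 \
  -p 4317:4317 -p 4318:4318 \
  jaegertracing/all-in-one:latest
```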
Prometheus
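A sketch of enabling the OTLP receiver; the exact flag depends on the Prometheus version (2.47+ used a feature flag, 3.x has a dedicated option):

```shell
# Prometheus 2.47+
prometheus --enable-feature=otlp-write-receiver

# Prometheus 3.x
prometheus --web.enable-otlp-receiver
```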
Prometheus can accept OTLP metrics.

HTTP vs gRPC
When to Use HTTP
- Simpler setup without async runtime requirements (with blocking client)
- Firewall-friendly (standard HTTP/HTTPS ports)
- Works well with HTTP proxies and load balancers
- JSON format available for debugging
When to Use gRPC
- Better performance for high-throughput scenarios
- Built-in streaming support
- More efficient binary protocol
- Better support for bidirectional communication
Performance Considerations
Use Batch Exporters
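A sketch, assuming the 0.28+ SDK where with_batch_exporter installs a background batch processor:

```rust
use opentelemetry_otlp::SpanExporter;
use opentelemetry_sdk::trace::SdkTracerProvider;

fn build() -> Result<SdkTracerProvider, Box<dyn std::error::Error>> {
    let exporter = SpanExporter::builder().with_tonic().build()?;
    // Batch processing queues finished spans and exports them in the
    // background, instead of blocking on every span end as a simple
    // (per-span) processor would.
    Ok(SdkTracerProvider::builder()
        .with_batch_exporter(exporter)
        .build())
}
```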
Always use batch exporters in production.

Enable Compression
Compression can reduce network bandwidth by 60-80%; see the Compression section above for setup.

Tune Batch Configuration
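A sketch using BatchConfigBuilder, assuming the 0.28+ SDK; the numbers are illustrative (defaults are a 2048-entry queue, 512 spans per export, and a 5-second delay):

```rust
use std::time::Duration;

use opentelemetry_otlp::SpanExporter;
use opentelemetry_sdk::trace::{BatchConfigBuilder, BatchSpanProcessor, SdkTracerProvider};

fn build() -> Result<SdkTracerProvider, Box<dyn std::error::Error>> {
    let exporter = SpanExporter::builder().with_tonic().build()?;

    // Larger queue and batches, exported more frequently.
    let batch_config = BatchConfigBuilder::default()
        .with_max_queue_size(4096)
        .with_max_export_batch_size(1024)
        .with_scheduled_delay(Duration::from_secs(2))
        .build();

    let processor = BatchSpanProcessor::builder(exporter)
        .with_batch_config(batch_config)
        .build();

    Ok(SdkTracerProvider::builder()
        .with_span_processor(processor)
        .build())
}
```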
Customize batching for your workload.

Troubleshooting
Connection Refused
If you see connection errors, verify:
- The collector is running: docker ps
- The endpoint is correct (default: http://localhost:4318 for HTTP)
- Firewall rules allow the connection
Spans Not Appearing
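A sketch of the shutdown call (assuming the 0.28+ SdkTracerProvider, whose shutdown method flushes pending data):

```rust
use opentelemetry_sdk::trace::SdkTracerProvider;

fn run(provider: SdkTracerProvider) -> Result<(), Box<dyn std::error::Error>> {
    // ... application work ...

    // Without an explicit shutdown, spans still sitting in the batch
    // queue are dropped when the process exits.
    provider.shutdown()?;
    Ok(())
}
```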
Ensure you’re calling shutdown on your providers before the process exits.

High Memory Usage
Reduce the batch queue size.

Protocol Reference
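The protocol is selected via the Protocol enum (a sketch; variant availability depends on the enabled feature flags):

```rust
use opentelemetry_otlp::{Protocol, SpanExporter, WithExportConfig};

fn build() -> Result<SpanExporter, Box<dyn std::error::Error>> {
    // Protocol::Grpc       - gRPC + protobuf (use with_tonic)
    // Protocol::HttpBinary - HTTP + protobuf (default for with_http)
    // Protocol::HttpJson   - HTTP + JSON (requires the "http-json" feature)
    Ok(SpanExporter::builder()
        .with_http()
        .with_protocol(Protocol::HttpJson)
        .build()?)
}
```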
The available protocol options are Grpc, HttpBinary, and HttpJson.

Next Steps
- Stdout Exporter - Debug telemetry locally without a backend
- Zipkin Exporter - Export traces to Zipkin