This guide walks you through creating a simple Rust application that emits traces, metrics, and logs using OpenTelemetry. We’ll use the stdout exporter so you can see the telemetry data directly in your console.

Create a new project

1. Create a new Rust project

cargo new otel-quickstart
cd otel-quickstart
2. Add dependencies

Add the following to your Cargo.toml:
Cargo.toml
[package]
name = "otel-quickstart"
version = "0.1.0"
edition = "2021"

[dependencies]
opentelemetry = "0.31.0"
opentelemetry_sdk = "0.31.0"
opentelemetry-stdout = "0.31.0"
opentelemetry-appender-tracing = "0.31.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["registry", "std"] }
tokio = { version = "1.0", features = ["full"] }
once_cell = "1.0"
3. Write the code

Replace the contents of src/main.rs with the complete example below.
4. Run the application

cargo run
You’ll see traces, metrics, and logs printed to stdout in a human-readable format.

Complete example

This example demonstrates all three telemetry signals:
src/main.rs
use once_cell::sync::Lazy;
use opentelemetry::{global, KeyValue};
use opentelemetry::trace::{TraceContextExt, Tracer};
use opentelemetry_sdk::metrics::SdkMeterProvider;
use opentelemetry_sdk::trace::SdkTracerProvider;
use opentelemetry_sdk::Resource;

// Define a shared resource that identifies your service
static RESOURCE: Lazy<Resource> = Lazy::new(|| {
    Resource::builder()
        .with_service_name("quickstart-service")
        .build()
});

// Initialize the tracer provider with stdout exporter
fn init_traces() -> SdkTracerProvider {
    let exporter = opentelemetry_stdout::SpanExporter::default();
    let provider = SdkTracerProvider::builder()
        .with_simple_exporter(exporter)
        .with_resource(RESOURCE.clone())
        .build();
    global::set_tracer_provider(provider.clone());
    provider
}

// Initialize the meter provider with stdout exporter
fn init_metrics() -> SdkMeterProvider {
    let exporter = opentelemetry_stdout::MetricExporter::default();
    let provider = SdkMeterProvider::builder()
        .with_periodic_exporter(exporter)
        .with_resource(RESOURCE.clone())
        .build();
    global::set_meter_provider(provider.clone());
    provider
}

// Initialize the logger provider with stdout exporter
fn init_logs() -> opentelemetry_sdk::logs::SdkLoggerProvider {
    use opentelemetry_appender_tracing::layer;
    use opentelemetry_sdk::logs::SdkLoggerProvider;
    use tracing_subscriber::prelude::*;

    let exporter = opentelemetry_stdout::LogExporter::default();
    let provider = SdkLoggerProvider::builder()
        .with_simple_exporter(exporter)
        .with_resource(RESOURCE.clone())
        .build();
    
    // Create a tracing bridge to route tracing logs to OpenTelemetry
    let layer = layer::OpenTelemetryTracingBridge::new(&provider);
    tracing_subscriber::registry().with(layer).init();
    
    provider
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize all telemetry providers
    let tracer_provider = init_traces();
    let meter_provider = init_metrics();
    let logger_provider = init_logs();

    // Get a tracer for creating spans
    let tracer = global::tracer("quickstart");
    
    // Get a meter for recording metrics
    let meter = global::meter("quickstart");
    let counter = meter.u64_counter("requests").build();
    let histogram = meter.f64_histogram("request_duration").build();

    // Create a span to represent some work
    tracer.in_span("process_request", |cx| {
        let span = cx.span();
        
        // Add attributes to the span
        span.set_attribute(KeyValue::new("http.method", "GET"));
        span.set_attribute(KeyValue::new("http.route", "/api/users"));
        
        // Add an event to the span
        span.add_event(
            "Processing request",
            vec![KeyValue::new("user_id", "12345")],
        );
        
        // Emit a log within the span context
        tracing::info!(
            user_id = "12345",
            action = "fetch_user",
            "User data retrieved successfully"
        );
        
        // Record metrics
        counter.add(
            1,
            &[
                KeyValue::new("method", "GET"),
                KeyValue::new("status", "200"),
            ],
        );
        
        histogram.record(
            0.245,
            &[
                KeyValue::new("method", "GET"),
                KeyValue::new("route", "/api/users"),
            ],
        );
        
        // Simulate nested work with a child span
        tracer.in_span("database_query", |cx| {
            let span = cx.span();
            span.set_attribute(KeyValue::new("db.system", "postgresql"));
            span.set_attribute(KeyValue::new("db.statement", "SELECT * FROM users WHERE id = $1"));
            
            tracing::debug!("Executing database query");
        });
    });

    // Shutdown providers to flush any pending telemetry
    tracer_provider.shutdown()?;
    meter_provider.shutdown()?;
    logger_provider.shutdown()?;

    Ok(())
}

Understanding the example

Let’s break down the key components:

Resource

A resource identifies your service and provides context for all telemetry:
static RESOURCE: Lazy<Resource> = Lazy::new(|| {
    Resource::builder()
        .with_service_name("quickstart-service")
        .build()
});

Providers

Providers manage the lifecycle of telemetry pipelines:
  • TracerProvider - Creates tracers and manages span exporters
  • MeterProvider - Creates meters and manages metric exporters
  • LoggerProvider - Manages log exporters and bridges to logging frameworks
Always call shutdown() on providers when your application exits to ensure all telemetry is flushed.

Creating spans

Spans represent units of work in distributed tracing:
tracer.in_span("process_request", |cx| {
    let span = cx.span();
    span.set_attribute(KeyValue::new("http.method", "GET"));
    span.add_event("Processing request", vec![]);
    // ... work happens here
});

Recording metrics

Metrics track measurements over time:
let counter = meter.u64_counter("requests").build();
counter.add(1, &[KeyValue::new("method", "GET")]);

let histogram = meter.f64_histogram("request_duration").build();
histogram.record(0.245, &[KeyValue::new("route", "/api/users")]);

Emitting logs

Logs provide structured event data:
tracing::info!(
    user_id = "12345",
    action = "fetch_user",
    "User data retrieved successfully"
);

Metrics example

Here’s a focused example showing different metric instrument types:
src/main.rs
use opentelemetry::{global, KeyValue};
use opentelemetry_sdk::metrics::SdkMeterProvider;
use opentelemetry_sdk::Resource;

fn init_metrics() -> SdkMeterProvider {
    let exporter = opentelemetry_stdout::MetricExporter::default();
    let provider = SdkMeterProvider::builder()
        .with_periodic_exporter(exporter)
        .with_resource(
            Resource::builder()
                .with_service_name("metrics-example")
                .build(),
        )
        .build();
    global::set_meter_provider(provider.clone());
    provider
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let meter_provider = init_metrics();
    let meter = global::meter("example");

    // Counter - monotonically increasing value
    let counter = meter.u64_counter("items_processed").build();
    counter.add(
        10,
        &[
            KeyValue::new("worker", "worker-1"),
            KeyValue::new("status", "success"),
        ],
    );

    // UpDownCounter - value that can increase or decrease
    let updown_counter = meter.i64_up_down_counter("active_connections").build();
    updown_counter.add(5, &[KeyValue::new("pool", "db-pool")]);
    updown_counter.add(-2, &[KeyValue::new("pool", "db-pool")]);

    // Histogram - statistical distribution of values
    let histogram = meter
        .f64_histogram("request_duration")
        .with_description("HTTP request duration in seconds")
        .with_boundaries(vec![0.0, 0.5, 1.0, 2.5, 5.0, 10.0])
        .build();
    histogram.record(
        1.23,
        &[
            KeyValue::new("endpoint", "/api/users"),
            KeyValue::new("method", "GET"),
        ],
    );

    // Gauge - current value snapshot
    let gauge = meter
        .f64_gauge("cpu_usage")
        .with_description("Current CPU usage percentage")
        .build();
    gauge.record(42.5, &[KeyValue::new("core", "0")]);

    // Observable Counter - asynchronous observation
    let _observable_counter = meter
        .u64_observable_counter("total_bytes_allocated")
        .with_callback(|observer| {
            // This is called periodically to observe the current value
            observer.observe(1024 * 1024, &[KeyValue::new("heap", "main")]);
        })
        .build();

    meter_provider.shutdown()?;
    Ok(())
}

Using OTLP exporter

To send telemetry to an OpenTelemetry Collector or OTLP-compatible backend:
1. Update dependencies

Cargo.toml
[dependencies]
opentelemetry = "0.31.0"
opentelemetry_sdk = "0.31.0"
opentelemetry-otlp = { version = "0.31.0", features = ["http-proto"] }
opentelemetry-appender-tracing = "0.31.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["registry", "std"] }
tokio = { version = "1.0", features = ["full"] }
2. Replace exporter initialization

use opentelemetry_otlp::{LogExporter, MetricExporter, Protocol, SpanExporter, WithExportConfig};
use opentelemetry_sdk::metrics::SdkMeterProvider;
use opentelemetry_sdk::trace::SdkTracerProvider;

// `get_resource()` is a stand-in for a helper that returns the shared
// Resource shown in the quickstart example.
fn init_traces() -> SdkTracerProvider {
    let exporter = SpanExporter::builder()
        .with_http()
        .with_protocol(Protocol::HttpBinary)
        .build()
        .expect("Failed to create trace exporter");
    
    SdkTracerProvider::builder()
        .with_batch_exporter(exporter)
        .with_resource(get_resource())
        .build()
}

fn init_metrics() -> SdkMeterProvider {
    let exporter = MetricExporter::builder()
        .with_http()
        .with_protocol(Protocol::HttpBinary)
        .build()
        .expect("Failed to create metric exporter");
    
    SdkMeterProvider::builder()
        .with_periodic_exporter(exporter)
        .with_resource(get_resource())
        .build()
}
3. Configure endpoint (optional)

By default, the OTLP exporter sends to http://localhost:4318. Configure a different endpoint:
use opentelemetry_otlp::WithExportConfig;

let exporter = SpanExporter::builder()
    .with_http()
    .with_endpoint("http://your-collector:4318")
    .build()
    .expect("Failed to create trace exporter");
Use with_batch_exporter() for production to export telemetry in batches, reducing overhead compared to with_simple_exporter().

Environment variables

Configure OTLP exporters using environment variables:
# Endpoint for all signals
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Signal-specific endpoints
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://localhost:4318/v1/metrics
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=http://localhost:4318/v1/logs

# Headers (e.g., for authentication)
export OTEL_EXPORTER_OTLP_HEADERS="api-key=your-key"

# Protocol (http/protobuf or http/json)
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
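For example, assuming a Collector is listening on the default OTLP/HTTP port, a run might look like this (the API key is a placeholder):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_HEADERS="api-key=your-key"
cargo run
```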

Common patterns

Error handling

if let Err(e) = tracer_provider.shutdown() {
    eprintln!("Failed to shutdown tracer provider: {}", e);
}

Context propagation

use opentelemetry::trace::TraceContextExt;

tracer.in_span("outer", |cx| {
    // Context is automatically propagated to child spans
    let trace_id = cx.span().span_context().trace_id();
    println!("Trace ID: {}", trace_id);
    
    tracer.in_span("inner", |_cx| {
        // This span is a child of "outer"
    });
});

Instrumentation scope

use opentelemetry::InstrumentationScope;

let scope = InstrumentationScope::builder("my-library")
    .with_version("1.0.0")
    .with_attributes([KeyValue::new("environment", "production")])
    .build();

let tracer = global::tracer_with_scope(scope);

Next steps

  • API Reference - Explore the complete API documentation
  • Examples - Browse more examples in the GitHub repository
  • OTLP Configuration - Learn about OTLP exporter configuration
  • Semantic Conventions - Use standard attribute names and values
