FullStackHero provides comprehensive observability using OpenTelemetry, enabling you to monitor, trace, and debug your application in production. The system collects traces, metrics, and logs and exports them to your observability backend.
Overview
The observability system provides:
Distributed Tracing: Track requests across services and modules
Metrics Collection: Monitor performance, resource usage, and custom metrics
Structured Logging: Centralized logs with correlation IDs
OTLP Export: Send telemetry to Jaeger, Grafana, DataDog, or any OTLP-compatible backend
Automatic Instrumentation: Built-in instrumentation for HTTP, EF Core, Redis, and more
Configuration
Configure OpenTelemetry in appsettings.json:
```json
{
  "OpenTelemetryOptions": {
    "Enabled": true,
    "Tracing": {
      "Enabled": true
    },
    "Metrics": {
      "Enabled": true,
      "MeterNames": [
        "FSH.Modules.Identity",
        "FSH.Modules.Multitenancy",
        "FSH.Modules.Auditing"
      ]
    },
    "Exporter": {
      "Otlp": {
        "Enabled": true,
        "Endpoint": "http://localhost:4317",
        "Protocol": "grpc"
      }
    },
    "Jobs": { "Enabled": true },
    "Mediator": { "Enabled": true },
    "Http": {
      "Histograms": {
        "Enabled": true
      }
    },
    "Data": {
      "FilterEfStatements": true,
      "FilterRedisCommands": true
    }
  }
}
```
OpenTelemetryOptions
Enabled: Enable or disable OpenTelemetry globally.
Tracing.Enabled: Enable distributed tracing.
Metrics.Enabled: Enable metrics collection.
Metrics.MeterNames: List of custom meter names to export. Each module can define its own meter.
Exporter.Otlp.Enabled: Enable the OTLP (OpenTelemetry Protocol) exporter.
Exporter.Otlp.Endpoint: OTLP collector endpoint. Typically http://localhost:4317 for gRPC.
Exporter.Otlp.Protocol: Protocol to use: grpc or http/protobuf.
Jobs.Enabled: Enable tracing for Hangfire background jobs.
Mediator.Enabled: Enable tracing for Mediator commands and queries.
Http.Histograms.Enabled: Collect HTTP request duration histograms.
Data.FilterEfStatements: Filter verbose Entity Framework SQL statements out of traces.
Data.FilterRedisCommands: Filter verbose Redis commands out of traces.
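These settings typically bind to a strongly typed options class at startup. A minimal sketch of what that shape might look like, with class and property names inferred from the JSON keys above rather than taken from the FullStackHero source:

```csharp
// Sketch of an options class mirroring the "OpenTelemetryOptions" JSON section.
// Property names follow the JSON keys; the real FullStackHero class may differ.
public sealed class OpenTelemetryOptions
{
    public bool Enabled { get; set; } = true;
    public TracingOptions Tracing { get; set; } = new();
    public MetricsOptions Metrics { get; set; } = new();
}

public sealed class TracingOptions
{
    public bool Enabled { get; set; } = true;
}

public sealed class MetricsOptions
{
    public bool Enabled { get; set; } = true;
    public string[]? MeterNames { get; set; }
}

// At startup the section would typically be bound with something like:
// services.Configure<OpenTelemetryOptions>(
//     builder.Configuration.GetSection("OpenTelemetryOptions"));
```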
Distributed Tracing
Automatic Instrumentation
OpenTelemetry automatically traces:
ASP.NET Core HTTP requests
HTTP client requests (outgoing API calls)
Entity Framework Core queries
PostgreSQL database operations (via Npgsql)
Redis operations
Mediator commands and queries (when Mediator.Enabled is true)
Hangfire background jobs (when Jobs.Enabled is true)
Tracing Configuration
```csharp
tracing
    .SetResourceBuilder(resourceBuilder)
    .AddAspNetCoreInstrumentation(instrumentation =>
    {
        instrumentation.Filter = context =>
            !IsHealthCheck(context.Request.Path);
        instrumentation.EnrichWithHttpRequest = EnrichWithHttpRequest;
        instrumentation.EnrichWithHttpResponse = EnrichWithHttpResponse;
    })
    .AddHttpClientInstrumentation()
    .AddNpgsql()
    .AddEntityFrameworkCoreInstrumentation()
    .AddRedisInstrumentation(redis =>
    {
        if (options.Data.FilterRedisCommands)
        {
            redis.SetVerboseDatabaseStatements = false;
        }
    })
    .AddSource(builder.Environment.ApplicationName)
    .AddSource("FSH.Hangfire");

if (options.Exporter.Otlp.Enabled)
{
    tracing.AddOtlpExporter(otlp =>
    {
        ConfigureOtlpExporter(options.Exporter.Otlp, otlp);
    });
}
```
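The IsHealthCheck helper used in the filter is not shown in this section. A plausible sketch, with the caveats that the real handler receives an ASP.NET Core PathString (a plain string stands in here) and the path prefixes are assumptions:

```csharp
// Returns true for endpoints that should be excluded from tracing.
// The prefixes below are illustrative; match them to your own health endpoints.
static bool IsHealthCheck(string path) =>
    path.StartsWith("/health", StringComparison.OrdinalIgnoreCase) ||
    path.StartsWith("/alive", StringComparison.OrdinalIgnoreCase);

Console.WriteLine(IsHealthCheck("/health/ready")); // True
Console.WriteLine(IsHealthCheck("/api/orders"));   // False
```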
Custom Spans
Create custom spans for your application logic:
```csharp
public class MyService
{
    private readonly ActivitySource _activitySource;

    public MyService(ActivitySource activitySource)
    {
        _activitySource = activitySource;
    }

    public async Task ProcessOrderAsync(Guid orderId, CancellationToken ct)
    {
        using var activity = _activitySource.StartActivity(
            "ProcessOrder",
            ActivityKind.Internal);

        activity?.SetTag("order.id", orderId);
        activity?.SetTag("order.status", "processing");

        try
        {
            // Your business logic here
            await Task.Delay(100, ct);
            activity?.SetTag("order.status", "completed");
        }
        catch (Exception ex)
        {
            activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
            throw;
        }
    }
}
```
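The null-conditional activity?. calls matter: StartActivity returns null unless a listener (such as the OpenTelemetry SDK, subscribed via AddSource) is attached to the source. A small self-contained sketch of that behavior using only System.Diagnostics; the source name is illustrative:

```csharp
using System.Diagnostics;

// Spans are only created when something listens to the source, which is why
// the source name must also be passed to AddSource(...) in the tracing setup.
var source = new ActivitySource("Playground.Api");

// Without a listener, StartActivity returns null -- hence the activity?. pattern.
var before = source.StartActivity("ProcessOrder");

ActivitySource.AddActivityListener(new ActivityListener
{
    ShouldListenTo = s => s.Name == "Playground.Api",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllDataAndRecorded
});

// With a listener attached, a real Activity is created.
var after = source.StartActivity("ProcessOrder");
after?.SetTag("order.status", "processing");
after?.Stop();
```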
The MediatorTracingBehavior automatically creates spans for all commands and queries:
```csharp
public class MediatorTracingBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly ActivitySource _activitySource;

    public async ValueTask<TResponse> Handle(
        TRequest message,
        CancellationToken cancellationToken,
        MessageHandlerDelegate<TRequest, TResponse> next)
    {
        var requestName = typeof(TRequest).Name;

        using var activity = _activitySource.StartActivity(
            $"Mediator: {requestName}",
            ActivityKind.Internal);

        activity?.SetTag("mediator.request_type", requestName);

        try
        {
            var response = await next(message, cancellationToken);
            activity?.SetStatus(ActivityStatusCode.Ok);
            return response;
        }
        catch (Exception ex)
        {
            activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
            throw;
        }
    }
}
```
Metrics
Automatic Metrics
OpenTelemetry collects metrics for:
HTTP requests: Request count, duration, status codes
HTTP client requests: Outgoing API call metrics
Database queries: Query count, duration
Runtime metrics: CPU, memory, GC, thread pool
Metrics Configuration
```csharp
metrics
    .SetResourceBuilder(resourceBuilder)
    .AddAspNetCoreInstrumentation()
    .AddHttpClientInstrumentation()
    .AddNpgsqlInstrumentation()
    .AddRuntimeInstrumentation();

// Apply histogram buckets for HTTP server duration
if (options.Http.Histograms.Enabled)
{
    metrics.AddView(
        "http.server.duration",
        new ExplicitBucketHistogramConfiguration
        {
            Boundaries = GetHistogramBuckets(options)
        });
}

foreach (var meterName in options.Metrics.MeterNames ?? Array.Empty<string>())
{
    metrics.AddMeter(meterName);
}
```
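GetHistogramBuckets is not shown in this section. A plausible sketch returning explicit bucket boundaries for the http.server.duration view; the values and units (milliseconds here, though newer semantic conventions use seconds) are illustrative assumptions, not the framework's defaults:

```csharp
// Illustrative bucket boundaries for HTTP server duration, in milliseconds.
// Tune these to your own latency profile; boundaries must be strictly ascending.
static double[] GetHistogramBuckets() =>
    new double[] { 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000 };
```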
Custom Metrics
Define custom metrics using Meter:
```csharp
public class IdentityMetrics
{
    private readonly Meter _meter;
    private readonly Counter<long> _tokenGeneratedCounter;
    private readonly Counter<long> _loginFailedCounter;

    public IdentityMetrics()
    {
        _meter = new Meter("FSH.Modules.Identity");

        _tokenGeneratedCounter = _meter.CreateCounter<long>(
            "identity.token_generated",
            description: "Number of tokens generated");

        _loginFailedCounter = _meter.CreateCounter<long>(
            "identity.login_failed",
            description: "Number of failed login attempts");
    }

    public void TokenGenerated(string userEmail)
    {
        _tokenGeneratedCounter.Add(1,
            new KeyValuePair<string, object?>("user.email", userEmail));
    }

    public void LoginFailed(string reason)
    {
        _loginFailedCounter.Add(1,
            new KeyValuePair<string, object?>("failure.reason", reason));
    }
}
```
Register the metrics class:
```csharp
services.AddSingleton<IdentityMetrics>();
```
Use it in your handlers:
```csharp
public class TokenService
{
    private readonly IdentityMetrics _metrics;

    public async Task<TokenResponse> IssueAsync(...)
    {
        // Issue token logic
        _metrics.TokenGenerated(userEmail);
        return response;
    }
}
```
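Custom counters like the ones above can be observed in-process with a MeterListener, which is useful for unit-testing metrics before wiring up an exporter. A self-contained sketch using only System.Diagnostics.Metrics; the meter and instrument names match the IdentityMetrics example:

```csharp
using System.Diagnostics.Metrics;

var meter = new Meter("FSH.Modules.Identity");
var counter = meter.CreateCounter<long>("identity.token_generated");

long observed = 0;
using var listener = new MeterListener();

// Subscribe only to instruments published by the module's meter.
listener.InstrumentPublished = (instrument, l) =>
{
    if (instrument.Meter.Name == "FSH.Modules.Identity")
        l.EnableMeasurementEvents(instrument);
};

// Measurement callbacks fire synchronously when Add(...) is called.
listener.SetMeasurementEventCallback<long>(
    (instrument, value, tags, state) => observed += value);
listener.Start();

counter.Add(1);
counter.Add(1);
// observed is now 2
```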
Structured Logging
FullStackHero uses Serilog with OpenTelemetry integration:
```json
{
  "Serilog": {
    "Using": [
      "Serilog.Sinks.Console",
      "Serilog.Sinks.OpenTelemetry"
    ],
    "Enrich": [
      "FromLogContext",
      "WithMachineName",
      "WithThreadId",
      "WithCorrelationId",
      "WithProcessId",
      "WithProcessName"
    ],
    "MinimumLevel": {
      "Default": "Debug"
    },
    "WriteTo": [
      {
        "Name": "Console",
        "Args": {
          "restrictedToMinimumLevel": "Information"
        }
      },
      {
        "Name": "OpenTelemetry",
        "Args": {
          "endpoint": "http://localhost:4317",
          "protocol": "grpc",
          "resourceAttributes": {
            "service.name": "Playground.Api"
          }
        }
      }
    ]
  }
}
```
Correlation IDs
Logs are automatically enriched with correlation IDs for tracing:
```csharp
_logger.LogInformation(
    "Processing order {OrderId} for user {UserId}",
    orderId,
    userId);
```
Output:
```json
{
  "@t": "2026-03-06T22:30:00.000Z",
  "@mt": "Processing order {OrderId} for user {UserId}",
  "OrderId": "123e4567-e89b-12d3-a456-426614174000",
  "UserId": "987fcdeb-51a2-43f1-b456-789012345678",
  "CorrelationId": "abc-123-def-456",
  "TraceId": "4bf92f3577b34da6a3ce929d0e0e4736",
  "SpanId": "00f067aa0ba902b7"
}
```
Observability Backends
Jaeger
Run Jaeger locally:
```shell
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  jaegertracing/all-in-one:latest
```
Access the Jaeger UI at http://localhost:16686.
Grafana + Tempo + Loki
Run the full observability stack:
```yaml
version: '3.8'
services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    ports:
      - "4317:4317"  # OTLP gRPC
      - "3200:3200"  # Tempo HTTP
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
```
Access Grafana at http://localhost:3000.
Viewing Telemetry
Traces
View distributed traces in Jaeger or Grafana:
1. Navigate to the tracing UI
2. Search for traces by service name, operation, or tags
3. Inspect individual spans to see timing and attributes
4. Follow traces across services and modules
Metrics
Query metrics in Grafana:
```promql
# HTTP request rate
rate(http_server_request_count[5m])

# Average HTTP response time
avg(http_server_duration_milliseconds)

# Token generation count
identity_token_generated_total
```
Logs
Search and filter logs in Grafana Loki:
```logql
{service_name="Playground.Api"} |= "error"
{service_name="Playground.Api"} | json | UserId = "abc-123"
```
Best Practices
Always include correlation IDs in logs to trace requests across the system.
Tag spans with contextual information: add tags like tenant.id, user.id, and order.id for easier filtering and debugging.
Exclude health checks and other noisy endpoints from tracing to reduce overhead.
Track critical metrics like request rate, error rate, and response time (RED metrics).
Configure alerts for abnormal metrics (e.g., high error rate, slow queries).
Health Checks: Monitor application health and readiness
Background Jobs: Trace and monitor background job execution
Rate Limiting: Monitor rate limit rejections and usage
Authentication: Trace authentication flows and token generation