Time-series metrics including counters, distributions, sets, and gauges with multi-granularity aggregation
The metrics dataset stores pre-aggregated time-series data from Sentry, supporting counters, distributions, sets, and gauges. It is designed for high-volume, low-latency aggregation queries with configurable time granularities. Unlike events and transactions, which store individual occurrences, metrics store pre-aggregated summaries optimized for time-series analysis.
```
org_id: UInt64          # Organization ID
project_id: UInt64      # Project ID (required)
metric_id: UInt64       # Hashed metric name
timestamp: DateTime     # Time bucket (required)
bucketed_time: DateTime # Alias for timestamp
granularity: UInt32     # Time granularity in seconds
use_case_id: String     # Use case identifier
tags: Nested(
    key: UInt64,        # Tag key hash
    value: UInt64       # Tag value hash
)
value: AggregateFunction(sum, Float64)  # Aggregated counter value
```
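Rows are keyed by time bucket: a raw timestamp is floored to the start of its granularity window before aggregation, so a 60-second granularity stores one row per metric per minute. A minimal sketch of that bucketing (the helper name is illustrative, not part of Snuba):

```python
from datetime import datetime, timezone

def bucket_timestamp(ts: datetime, granularity: int) -> datetime:
    """Floor a timestamp to the start of its granularity window (seconds)."""
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp(epoch - epoch % granularity, tz=timezone.utc)

ts = datetime(2024, 1, 1, 12, 34, 56, tzinfo=timezone.utc)
minute_bucket = bucket_timestamp(ts, 60)    # floors to 12:34:00
hour_bucket = bucket_timestamp(ts, 3600)    # floors to 12:00:00
```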
```
MATCH (metrics_distributions)
SELECT metric_id,
    quantile(0.5)(percentiles) as p50,
    quantile(0.95)(percentiles) as p95,
    quantile(0.99)(percentiles) as p99
WHERE project_id = 1
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
GROUP BY metric_id
```
```
MATCH (metrics_sets)
SELECT metric_id,
    uniqMerge(value) as unique_count
WHERE project_id = 1
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
GROUP BY metric_id
```
```
MATCH (metrics_counters)
SELECT sumMerge(value) as total
WHERE project_id = 1
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
    AND tags[12345] = 67890  -- environment = production (hashed)
```
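Tag keys and values appear as integer hashes (`tags[12345] = 67890`) rather than strings, so a query must first resolve strings like `environment` and `production` to their integer ids. A toy interning table models this resolution step; the class and ids here are hypothetical (Sentry's real indexer is a separate stateful service):

```python
class ToyIndexer:
    """Toy string-to-integer indexer, for illustration only."""

    def __init__(self):
        self._ids = {}

    def resolve(self, s: str) -> int:
        # Assign the next integer id on first sight, like an interning table.
        return self._ids.setdefault(s, len(self._ids) + 1)

indexer = ToyIndexer()
key_id = indexer.resolve("environment")
val_id = indexer.resolve("production")
condition = f"tags[{key_id}] = {val_id}"  # -> "tags[1] = 2" with this toy mapping
```

Resolving the same string twice returns the same id, which is what makes the hashed tag columns queryable at all.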
```
MATCH (metrics_counters)
SELECT toStartOfHour(timestamp) as hour,
    sumMerge(value) as total_requests
WHERE org_id = 1
    AND project_id = 2
    AND metric_id = 12345  -- requests.count
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
    AND granularity = 60
GROUP BY hour
ORDER BY hour
```
```
MATCH (metrics_distributions)
SELECT quantile(0.5)(percentiles) as p50,
    quantile(0.95)(percentiles) as p95,
    quantile(0.99)(percentiles) as p99,
    avg(avg) as average
WHERE org_id = 1
    AND project_id = 2
    AND metric_id = 67890  -- transaction.duration
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
    AND granularity = 60
```
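To make the percentile semantics concrete: `quantile(q)` picks the value below which roughly a fraction `q` of the distribution's observations fall. A simplified nearest-rank sketch (ClickHouse's `quantile` actually uses reservoir sampling with interpolation, so this only models the idea):

```python
def percentile(sorted_values, q):
    # Nearest-rank percentile over a sorted list; simplified model of
    # ClickHouse's quantile(), which interpolates over a sample.
    idx = min(int(q * len(sorted_values)), len(sorted_values) - 1)
    return sorted_values[idx]

# Hypothetical transaction durations in milliseconds.
durations = sorted([120, 95, 230, 310, 150, 180, 90, 400, 105, 140])
p50 = percentile(durations, 0.5)    # 150
p95 = percentile(durations, 0.95)   # 400
```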
```
MATCH (metrics_sets)
SELECT toStartOfDay(timestamp) as day,
    uniqMerge(value) as unique_users
WHERE org_id = 1
    AND project_id = 2
    AND metric_id = 11111  -- users.unique
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-08 00:00:00')
    AND granularity = 3600
GROUP BY day
ORDER BY day
```
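`uniqMerge` combines the pre-aggregated unique-count states of the matched buckets before counting, so users seen in several hourly buckets are only counted once per day. ClickHouse stores a compact sketch rather than a literal set, but the semantics can be modeled with plain set union:

```python
# Conceptual model of uniqMerge: each time bucket holds a set-like state,
# and merging buckets unions those states before counting.
hourly_buckets = [
    {"user_a", "user_b"},   # hour 1
    {"user_b", "user_c"},   # hour 2
    {"user_a", "user_d"},   # hour 3
]

merged = set().union(*hourly_buckets)
unique_users = len(merged)  # 4, not 6: duplicates across buckets collapse
```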
```
MATCH (metrics_counters)
SELECT sumMerge(value) as error_count
WHERE org_id = 1
    AND project_id = 2
    AND metric_id = 22222  -- errors.count
    AND timestamp >= toDateTime('2024-01-01 00:00:00')
    AND timestamp < toDateTime('2024-01-02 00:00:00')
    AND tags[33333] = 44444  -- environment = production
    AND granularity = 60
```
```
MATCH (metrics_counters)
SELECT toStartOfMinute(timestamp) as minute,
    sumMerge(value) / 60 as rate_per_second
WHERE org_id = 1
    AND project_id = 2
    AND metric_id = 55555
    AND timestamp >= toDateTime('2024-01-01 12:00:00')
    AND timestamp < toDateTime('2024-01-01 13:00:00')
    AND granularity = 10
GROUP BY minute
ORDER BY minute
```
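The rate conversion is plain arithmetic: a counter summed over a 60-second window divided by 60 yields events per second. A sketch with hypothetical per-minute totals:

```python
# Hypothetical sumMerge(value) results, one entry per minute bucket.
per_minute_totals = {"12:00": 600, "12:01": 900, "12:02": 300}

# Divide each minute's total by the window length to get a per-second rate.
rates = {minute: total / 60 for minute, total in per_minute_totals.items()}
# {'12:00': 10.0, '12:01': 15.0, '12:02': 5.0}
```

The same pattern generalizes: dividing by the bucket width in seconds turns any windowed counter sum into a rate.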
Custom Dashboards

Build custom metrics dashboards with business KPIs using all metric types.
```
-- Business metrics dashboard
SELECT
    sumMerge(revenue) as total_revenue,
    uniqMerge(customers) as unique_customers,
    quantile(0.95)(checkout_time) as p95_checkout
```
Performance Monitoring
Track application performance with distributions for latency percentiles.
```
-- API endpoint performance
SELECT
    endpoint,
    quantile(0.5)(duration) as median,
    quantile(0.99)(duration) as p99
GROUP BY endpoint
```
Capacity Planning
Use gauges to track resource utilization over time.
```
-- Memory usage trends
SELECT
    toStartOfHour(timestamp) as hour,
    avg(memory_usage) as avg_memory,
    max(memory_usage) as peak_memory
```
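Gauges differ from counters in that each sample is a point-in-time reading, so the natural rollups are average and peak per window rather than a running sum. A sketch of that aggregation over hypothetical memory samples:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical gauge samples: (hour, memory_usage in MB).
samples = [(12, 512), (12, 640), (12, 576), (13, 700), (13, 820)]

by_hour = defaultdict(list)
for hour, value in samples:
    by_hour[hour].append(value)

# avg tracks typical load; max catches short spikes the average would hide.
trends = {h: {"avg": mean(v), "peak": max(v)} for h, v in by_hour.items()}
# {12: {'avg': 576, 'peak': 640}, 13: {'avg': 760, 'peak': 820}}
```

Keeping both aggregates matters for capacity planning: a host can look healthy on average while its hourly peak is already at the limit.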