The Query API allows you to execute queries against Snuba datasets using SnQL or MQL.

Execute Query

curl -X POST http://localhost:1218/query \
  -H "Content-Type: application/json" \
  -d '{
    "dataset": "events",
    "query": "MATCH (events) SELECT event_id, group_id, project_id, timestamp WHERE timestamp >= toDateTime('"'"'2024-01-01T00:00:00'"'"') AND timestamp < toDateTime('"'"'2024-01-02T00:00:00'"'"') AND project_id = 1 LIMIT 100",
    "tenant_ids": {
      "organization_id": 1,
      "referrer": "my-service"
    }
  }'
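The curl call above can also be made from Python. This is a minimal sketch using only the standard library, assuming a Snuba instance at localhost:1218; `build_request` and `execute` are illustrative helpers, not part of any Snuba client library.

```python
import json
import urllib.request

SNUBA_URL = "http://localhost:1218/query"  # assumed local Snuba instance


def build_request(query: str, organization_id: int, referrer: str,
                  dataset: str = "events", **settings) -> dict:
    """Assemble a Query API request body. Extra keyword arguments become
    query settings such as consistent, debug, or dry_run."""
    body = {
        "dataset": dataset,
        "query": query,
        "tenant_ids": {
            "organization_id": organization_id,
            "referrer": referrer,
        },
    }
    body.update(settings)
    return body


def execute(body: dict) -> dict:
    """POST the request body to Snuba and decode the JSON response."""
    req = urllib.request.Request(
        SNUBA_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


body = build_request(
    "MATCH (events) SELECT count() WHERE project_id = 1",
    organization_id=1,
    referrer="my-service",
    dry_run=True,  # validate only; drop this to actually run the query
)
# result = execute(body)  # requires a running Snuba instance
```

Building the body separately from sending it makes it easy to set `dry_run` first and only execute once the query validates.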

Request Body

dataset (string, optional): The dataset to query. Optional when querying a storage directly. Examples: events, transactions, metrics.

query (string, required): The SnQL query string.

tenant_ids (object, required): Tenant identification for attribution and rate limiting.

tenant_ids.organization_id (integer, required): The organization ID.

tenant_ids.referrer (string, required): Service identifier (e.g. "api", "issues", "discover").

consistent (boolean, default: false): Force consistent reads from the first replica.

debug (boolean, default: false): Include detailed execution stats in the response.

dry_run (boolean, default: false): Validate the query without executing it.

Response

data (array): Array of result rows.

meta (array): Column metadata with names and types.

timing (object): Query timing information.

quota_allowance (object): Resource quota and allocation details.

stats (object): Execution statistics; included when debug is true or the STATS_IN_RESPONSE setting is enabled.

sql (string): The generated ClickHouse SQL; included in debug mode.

Success Response Example

{
  "data": [
    {
      "event_id": "abc123",
      "group_id": 456,
      "project_id": 1,
      "timestamp": "2024-01-01T12:34:56"
    },
    {
      "event_id": "def789",
      "group_id": 789,
      "project_id": 1,
      "timestamp": "2024-01-01T12:35:12"
    }
  ],
  "meta": [
    {"name": "event_id", "type": "String"},
    {"name": "group_id", "type": "UInt64"},
    {"name": "project_id", "type": "UInt64"},
    {"name": "timestamp", "type": "DateTime"}
  ],
  "timing": {
    "timestamp": 1704110096,
    "duration_ms": 245
  },
  "quota_allowance": {
    "summary": {
      "is_successful": true,
      "threads_used": 8
    }
  }
}
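The `meta` array can drive post-processing of `data`. As a hedged sketch, this hypothetical helper uses the column types to convert DateTime values from strings into Python datetime objects:

```python
from datetime import datetime


def parse_datetime_columns(data: list, meta: list) -> list:
    """Convert DateTime-typed columns in a Snuba response from ISO strings
    to datetime objects, using the `meta` column descriptions."""
    dt_cols = {c["name"] for c in meta if c["type"].startswith("DateTime")}
    parsed = []
    for row in data:
        row = dict(row)  # copy; leave the original response untouched
        for col in dt_cols & row.keys():
            row[col] = datetime.fromisoformat(row[col])
        parsed.append(row)
    return parsed


response = {
    "data": [{"event_id": "abc123", "timestamp": "2024-01-01T12:34:56"}],
    "meta": [
        {"name": "event_id", "type": "String"},
        {"name": "timestamp", "type": "DateTime"},
    ],
}
rows = parse_datetime_columns(response["data"], response["meta"])
print(rows[0]["timestamp"].isoformat())  # 2024-01-01T12:34:56
```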

Dataset-Specific Query

Query a specific dataset using dedicated endpoints:
curl -X POST http://localhost:1218/events/snql \
  -H "Content-Type: application/json" \
  -d '{
    "query": "MATCH (events) SELECT count() WHERE project_id = 1",
    "tenant_ids": {
      "organization_id": 1,
      "referrer": "analytics"
    }
  }'

MQL Query

Execute Metrics Query Language queries:
curl -X POST http://localhost:1218/metrics/mql \
  -H "Content-Type: application/json" \
  -d '{
    "query": "sum(d:transactions/duration@millisecond){status_code:200} by (transaction)",
    "tenant_ids": {
      "organization_id": 1,
      "referrer": "metrics-explorer"
    }
  }'

Request Body

Same as SnQL queries, but with MQL syntax in the query field.
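The two dedicated endpoints shown above appear to follow the pattern `/{dataset}/snql` and `/{dataset}/mql`. A small hypothetical helper makes the pattern explicit:

```python
# Hypothetical URL helper for the dedicated per-dataset endpoints.
# The pattern is inferred from the /events/snql and /metrics/mql examples.
BASE = "http://localhost:1218"  # assumed local Snuba instance


def endpoint(dataset: str, language: str = "snql") -> str:
    """Build the dedicated query endpoint URL for a dataset."""
    assert language in ("snql", "mql")
    return f"{BASE}/{dataset}/{language}"


print(endpoint("events"))          # http://localhost:1218/events/snql
print(endpoint("metrics", "mql"))  # http://localhost:1218/metrics/mql
```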

Query Settings

Advanced query configuration:
{
  "query": "...",
  "tenant_ids": {...},
  "turbo": false,
  "consistent": false,
  "debug": false,
  "dry_run": false
}
turbo (boolean, default: false): Enable turbo mode for faster execution (may reduce accuracy).

Query Pipeline

Every query goes through a processing pipeline:

Pipeline Stages

  1. Parse Request - Validate JSON and schema (snuba/web/views.py:234)
  2. Build Request - Parse SnQL/MQL into query AST (snuba/request/validation.py)
  3. Entity Processing - Apply entity-specific processors (snuba/pipeline/stages/query_processing.py:14)
  4. Storage Processing - Apply storage-specific processors (snuba/pipeline/stages/query_processing.py:16)
  5. Allocation Policy - Check quota and rate limits (snuba/web/db_query.py:801)
  6. Execute Query - Run on ClickHouse with caching (snuba/web/db_query.py:160)
  7. Record Metadata - Log query metadata (snuba/querylog/query_metadata.py)
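The stages above can be sketched as a simple function chain. Every function here is an illustrative stub; the real logic lives in the Snuba source files referenced in the list.

```python
def parse_and_build(raw_body: dict) -> dict:
    # Stages 1-2: validate the JSON schema and parse SnQL/MQL into an AST.
    assert "query" in raw_body and "tenant_ids" in raw_body
    return {"ast": raw_body["query"], "tenant_ids": raw_body["tenant_ids"]}


def process(request: dict) -> dict:
    # Stages 3-4: entity- and storage-specific processors rewrite the query.
    return {**request, "sql": f"-- translated from: {request['ast']}"}


def check_allocation_policy(plan: dict) -> None:
    # Stage 5: quota / rate-limit checks; raises when the query is rejected.
    if not plan["tenant_ids"].get("referrer"):
        raise RuntimeError("rate-limited")


def execute_and_log(plan: dict) -> dict:
    # Stages 6-7: run on ClickHouse (with caching) and record query metadata.
    return {"data": [], "sql": plan["sql"]}


def run_pipeline(raw_body: dict) -> dict:
    request = parse_and_build(raw_body)
    plan = process(request)
    check_allocation_policy(plan)
    return execute_and_log(plan)


result = run_pipeline({
    "query": "MATCH (events) SELECT count() WHERE project_id = 1",
    "tenant_ids": {"organization_id": 1, "referrer": "my-service"},
})
```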

Error Examples

Invalid Query

{
  "error": {
    "type": "invalid_query",
    "message": "Column 'invalid_column' does not exist"
  }
}

Rate Limited

{
  "error": {
    "type": "rate-limited",
    "message": "Query cannot be run due to allocation policies"
  },
  "timing": {
    "timestamp": 1704110096,
    "duration_ms": 12
  },
  "quota_allowance": {
    "summary": {
      "is_rejected": true,
      "rejection_storage_key": "events_storage",
      "quota_used": 95000,
      "quota_unit": "bytes"
    }
  }
}

ClickHouse Error

{
  "error": {
    "type": "clickhouse",
    "message": "Memory limit exceeded",
    "code": 241
  },
  "timing": {...},
  "stats": {...},
  "sql": "SELECT ..."
}
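Clients can dispatch on `error.type` to decide how to react. As a hedged sketch (the set of retryable types is an assumption, not documented behavior): rate-limited queries may succeed later, while invalid_query and clickhouse errors will not succeed on retry as-is.

```python
# Assumption: only allocation-policy rejections are worth retrying.
RETRYABLE = {"rate-limited"}


def handle_response(resp: dict) -> str:
    """Classify a Snuba response as ok, retry, or fail."""
    err = resp.get("error")
    if err is None:
        return "ok"
    kind = err.get("type", "unknown")
    if kind in RETRYABLE:
        return "retry"
    # invalid_query, clickhouse, and unknown errors: surface to the caller
    return "fail"


print(handle_response({"data": []}))                                    # ok
print(handle_response({"error": {"type": "rate-limited"}}))             # retry
print(handle_response({"error": {"type": "invalid_query"}}))            # fail
```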

Performance Tips

  1. Always filter by timestamp to leverage time-based partitioning:
     WHERE timestamp >= toDateTime('2024-01-01')
       AND timestamp < toDateTime('2024-01-02')
  2. Filter by project ID; it is highly optimized:
     WHERE project_id IN (1, 2, 3)
  3. Use LIMIT to reduce data transfer:
     SELECT ... LIMIT 1000
  4. Identical queries are cached; reuse queries when possible.
  5. Use descriptive, consistent referrer strings for better observability:
     {"referrer": "issues.list_view"}

Related Pages

  Python Query Builder: build queries programmatically
  Subscriptions: create recurring queries
