
Overview

This quickstart guide will walk you through setting up Snuba locally and executing your first queries. You’ll learn how to start the services, run migrations, and query data using SnQL.
This guide assumes you have Docker and Docker Compose installed. For other installation methods, see the Installation page.

Quick Start with Devservices

The fastest way to get started with Snuba is using the devservices setup, which automatically manages all dependencies.

1. Clone the Repository

First, clone the Snuba repository:
git clone https://github.com/getsentry/snuba.git
cd snuba

2. Install Dependencies

Install devservices and Python dependencies:
# Install devservices (requires Python 3.13+)
pip install devservices

# Install Snuba dependencies
pip install -e .
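If you prefer to keep these installs isolated, you can create a virtualenv first and run the installs above inside it (a standard-library sketch; the `.venv` path is just a convention):

```shell
# Create and activate a virtualenv for Snuba development
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints the .venv path when active
# then re-run: pip install devservices && pip install -e .
```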

3. Start Services

Use devservices to start ClickHouse, Kafka, and Redis:
devservices up clickhouse kafka redis
This will start all required services in Docker containers:
  • ClickHouse on ports 9000 (native) and 8123 (HTTP)
  • Kafka on port 9093
  • Redis on port 6379
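Before moving on, you can spot-check each service; each line below prints either a healthy response or a hint if the service is not reachable yet (note that `redis-cli` must be installed on the host):

```shell
# ClickHouse HTTP interface answers "Ok." when healthy
curl -s http://localhost:8123/ping || echo "ClickHouse not reachable on 8123"

# Redis answers "PONG"
redis-cli -p 6379 ping 2>/dev/null || echo "Redis not reachable on 6379"

# List the containers devservices started
docker ps --format '{{.Names}}' 2>/dev/null || echo "docker not available"
```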

4. Bootstrap and Migrate

Initialize the database and run migrations:
# Bootstrap creates the required Kafka topics
snuba bootstrap --force

# Run database migrations to create ClickHouse tables
snuba migrations migrate --force
The --force flag skips confirmation prompts. Remove it if you want to review changes before applying them.

5. Start the Devserver

Start all Snuba processes (API, admin, consumers):
snuba devserver
This starts:
  • API server on http://localhost:1218
  • Admin UI on http://localhost:1219
  • Multiple consumers for different data types (errors, transactions, metrics, etc.)
  • Subscription schedulers and executors
The devserver starts many consumer processes. If you only need the API, use:
snuba devserver --no-workers

Your First Query

Now that Snuba is running, let’s execute some queries using SnQL (Snuba Query Language).

Using the API Directly

Snuba exposes an HTTP API for executing queries. Here’s how to query the events dataset:
curl -X POST http://localhost:1218/events/snql \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "MATCH (events) SELECT count() AS event_count WHERE timestamp >= toDateTime('"'2024-01-01T00:00:00'"') AND timestamp < toDateTime('"'2024-01-02T00:00:00'"') AND project_id = 1",
    "dataset": "events"
  }'
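The inline quoting above is easy to get wrong; writing the body to a file first sidesteps the shell-escaping entirely (same query; the `/tmp/snql_query.json` path is just illustrative):

```shell
# Write the request body to a file so the shell never sees the inner quotes
cat > /tmp/snql_query.json <<'EOF'
{
  "query": "MATCH (events) SELECT count() AS event_count WHERE timestamp >= toDateTime('2024-01-01T00:00:00') AND timestamp < toDateTime('2024-01-02T00:00:00') AND project_id = 1",
  "dataset": "events"
}
EOF

# Validate the payload before sending it
python3 -m json.tool /tmp/snql_query.json > /dev/null && echo "payload OK"

# Send it to the devserver started earlier
curl -s -X POST http://localhost:1218/events/snql \
  -H 'Content-Type: application/json' \
  -d @/tmp/snql_query.json || echo "request failed (is the devserver running?)"
```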

Understanding the Query

Let’s break down the SnQL query structure:
MATCH (events)              -- Specify the entity to query
SELECT                      -- Select clause with expressions
  count() AS event_count,   -- Aggregation function
  platform                  -- Column to select
BY platform                 -- Group by clause
WHERE                       -- Filter conditions
  timestamp >= toDateTime('2024-01-01T00:00:00')
  AND timestamp < toDateTime('2024-01-02T00:00:00')
  AND project_id = 1
ORDER BY event_count DESC   -- Sort results
LIMIT 10                    -- Limit number of results
Key Components:
  • MATCH (entity) - Specifies which entity to query (events, transactions, metrics_counters, etc.)
  • SELECT - Columns and aggregations to return
  • BY - Group by clause (similar to SQL’s GROUP BY)
  • WHERE - Filter conditions (time range and project are typically required)
  • ORDER BY - Sort order
  • LIMIT - Maximum number of rows to return

Sample Query Results

A successful query returns a JSON response with data and metadata:
{
  "data": [
    {
      "event_count": 1523,
      "platform": "python"
    },
    {
      "event_count": 892,
      "platform": "javascript"
    },
    {
      "event_count": 445,
      "platform": "java"
    }
  ],
  "meta": [
    {"name": "event_count", "type": "UInt64"},
    {"name": "platform", "type": "String"}
  ],
  "timing": {
    "timestamp": 1704096000,
    "duration_ms": 42
  }
}
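Downstream tooling usually wants just the result rows; a short script can pull them out of the response shape shown above (the saved-response path is illustrative):

```shell
# Save a response like the one above, then extract per-platform counts
cat > /tmp/response.json <<'EOF'
{"data": [{"event_count": 1523, "platform": "python"},
          {"event_count": 892, "platform": "javascript"}],
 "meta": [{"name": "event_count", "type": "UInt64"},
          {"name": "platform", "type": "String"}],
 "timing": {"duration_ms": 42}}
EOF

python3 - <<'EOF'
import json

with open("/tmp/response.json") as f:
    resp = json.load(f)

# "data" holds one dict per result row, keyed by the SELECT aliases
for row in resp["data"]:
    print(f"{row['platform']}: {row['event_count']}")
EOF
```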

Common Query Patterns

Aggregations

SnQL supports various aggregation functions:
MATCH (events)
SELECT
  count() AS total_events,
  uniq(user) AS unique_users,
  avg(duration) AS avg_duration,
  max(timestamp) AS latest_event
WHERE timestamp >= toDateTime('2024-01-01T00:00:00')
  AND timestamp < toDateTime('2024-01-02T00:00:00')
  AND project_id = 1

Time Series with Granularity

Group data by time buckets:
MATCH (events)
SELECT
  toStartOfHour(timestamp) AS time,
  count() AS event_count
BY time
WHERE timestamp >= toDateTime('2024-01-01T00:00:00')
  AND timestamp < toDateTime('2024-01-02T00:00:00')
  AND project_id = 1
ORDER BY time ASC
GRANULARITY 3600
GRANULARITY specifies the time bucket size in seconds. Common values:
  • 60 - 1 minute
  • 3600 - 1 hour
  • 86400 - 1 day
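As a quick sanity check, the number of rows a time-series query returns is roughly the window length divided by the granularity:

```shell
window_seconds=86400   # one day, matching the WHERE clause above
granularity=3600       # GRANULARITY 3600 (hourly buckets)
echo $(( window_seconds / granularity ))   # 24 buckets
```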

Tag Filtering

Query tags using subscript notation:
MATCH (events)
SELECT count() AS error_count
WHERE timestamp >= toDateTime('2024-01-01T00:00:00')
  AND timestamp < toDateTime('2024-01-02T00:00:00')
  AND project_id = 1
  AND tags[environment] = 'production'
  AND tags[level] IN tuple('error', 'fatal')
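Tag conditions carry their own single-quoted strings, so when sending this over the HTTP API a heredoc again keeps the JSON payload readable (the path and tag values are illustrative):

```shell
cat > /tmp/tag_query.json <<'EOF'
{
  "query": "MATCH (events) SELECT count() AS error_count WHERE timestamp >= toDateTime('2024-01-01T00:00:00') AND timestamp < toDateTime('2024-01-02T00:00:00') AND project_id = 1 AND tags[environment] = 'production' AND tags[level] IN tuple('error', 'fatal')",
  "dataset": "events"
}
EOF

# Validate before POSTing to http://localhost:1218/events/snql
python3 -m json.tool /tmp/tag_query.json > /dev/null && echo "payload OK"
```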

Complex Conditions

Use functions and complex expressions:
MATCH (transactions)
SELECT
  transaction_name,
  quantile(0.95)(duration) AS p95_duration,
  count() AS transaction_count
BY transaction_name
WHERE timestamp >= toDateTime('2024-01-01T00:00:00')
  AND timestamp < toDateTime('2024-01-02T00:00:00')
  AND project_id = 1
  AND duration > 1000
HAVING transaction_count > 100
ORDER BY p95_duration DESC
LIMIT 20

Using the Admin UI

Snuba includes a web-based admin interface for managing the system:

1. Access Admin UI

Open your browser to http://localhost:1219

2. View System Health

Check the health of ClickHouse connections and Kafka consumers

3. Run Migrations

View migration status and run pending migrations from the UI

4. Execute Test Queries

Use the query editor to test SnQL queries interactively

Available Entities

Snuba provides multiple entities you can query. Common ones include:
Entity                          Dataset                      Description
events                          events                       Error and issue events
transactions                    transactions                 Performance transaction data
metrics_counters                metrics                      Counter metrics
metrics_sets                    metrics                      Set metrics
generic_metrics_distributions   metrics                      Distribution metrics
profiles                        profiles                     Profiling data
replays                         replays                      Session replay data
search_issues                   search_issues                Issue search data
eap_items                       events_analytics_platform    EAP items

Troubleshooting

Services Won’t Start

If ports 1218, 1219, 9000, 8123, or 6379 are already in use:
# Find process using port
lsof -i :1218

# Kill the process or stop conflicting services
docker ps
docker stop <container-id>
Ensure ClickHouse is running and healthy:
# Check ClickHouse status
curl http://localhost:8123/ping

# Should return "Ok."
If migrations fail, you may need to reset:
# Reverse previously applied migrations (destructive!)
snuba migrations migrate --force --reverse

# Re-run migrations
snuba migrations migrate --force

Query Errors

Ensure you’re querying an entity that exists and using the correct dataset endpoint. Available endpoints:
  • /events/snql - Events dataset
  • /transactions/snql - Transactions dataset
  • /metrics/snql - Metrics dataset
Most queries require a time range filter:
WHERE timestamp >= toDateTime('...')
  AND timestamp < toDateTime('...')
Most entities require filtering by project_id:
WHERE project_id = 1

Next Steps

Now that you have Snuba running:

Learn More About Installation

Explore different installation methods including Docker Compose and production deployments

Query Language Reference

Deep dive into SnQL syntax, functions, and advanced query patterns

Configuration

Learn about configuring datasets, entities, and storages

API Reference

Complete API documentation for all endpoints and parameters

Development Workflow

For development, you can customize the devserver behavior:
# Start only the API (no consumers)
snuba devserver --no-workers

# Skip bootstrap on start
snuba devserver --no-bootstrap

# Set custom log level
snuba devserver --log-level=debug
You can also start individual components:
# Start only the API server
snuba api --bind 127.0.0.1:1218

# Start only the admin UI
snuba admin

# Start a specific consumer
snuba rust-consumer \
  --storage=errors \
  --consumer-group=errors_group \
  --auto-offset-reset=latest
For more details on the CLI commands, run snuba --help or snuba <command> --help.
