Sinks are the destinations where Vector sends your observability data after collection and transformation. Vector supports a wide range of sinks for logs, metrics, and traces.

What are Sinks?

Sinks are the final component in Vector’s data pipeline. They receive processed events from sources and transforms, then write them to external systems. Each sink is optimized for its specific destination, handling authentication, batching, compression, and retries automatically.
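As a sketch, a minimal pipeline wires a source through a transform into a sink. The component names here (`demo_logs`, `parse`, `out`) are illustrative, not required:

```toml
# Hypothetical minimal pipeline: generate demo logs, parse them, print to stdout.
[sources.demo_logs]
type = "demo_logs"
format = "json"

[transforms.parse]
type = "remap"
inputs = ["demo_logs"]
source = '. = parse_json!(.message)'

[sinks.out]
type = "console"       # the Console sink described below
inputs = ["parse"]     # sinks read from sources or transforms
encoding.codec = "json"
```

Swapping `type = "console"` for any sink on this page changes the destination without touching the rest of the pipeline.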

Available Sinks

Cloud Storage

AWS S3

Store observability events in AWS S3 object storage with automatic partitioning and compression.

Azure Blob

Write data to Azure Blob Storage with flexible encoding options.

GCS

Send events to Google Cloud Storage buckets.

Observability Platforms

Datadog

Publish logs, metrics, and traces to Datadog’s observability platform.

Elasticsearch

Index observability events in Elasticsearch with bulk operations and data streams.

Splunk HEC

Send events to Splunk via the HTTP Event Collector (HEC).

New Relic

Forward logs and metrics to New Relic’s telemetry platform.

Message Queues & Streaming

Kafka

Publish observability events to Apache Kafka topics with high throughput.

AWS Kinesis

Stream data to AWS Kinesis Data Streams or Firehose.

Pulsar

Send events to Apache Pulsar topics.

NATS

Publish messages to the NATS messaging system.

Redis

Write events to Redis using lists or pub/sub.

MQTT

Publish messages to MQTT brokers.

AMQP

Send events to AMQP 0.9.1 brokers like RabbitMQ.

Metrics & Monitoring

Prometheus Exporter

Expose metrics on a Prometheus-compatible endpoint for scraping.

InfluxDB

Write metrics to the InfluxDB time-series database.

StatsD

Send metrics to StatsD-compatible services.

AWS CloudWatch Metrics

Publish metrics to AWS CloudWatch.

Logs & Analytics

AWS CloudWatch Logs

Send log events to AWS CloudWatch Logs.

Loki

Ship logs to Grafana Loki for log aggregation.

Azure Monitor Logs

Forward logs to Azure Monitor.

Honeycomb

Send structured events to Honeycomb for observability.

Databases

ClickHouse

Write events to the ClickHouse columnar database.

PostgreSQL

Insert logs into PostgreSQL tables.

Databend

Send data to the Databend cloud data warehouse.

GreptimeDB

Write metrics and logs to GreptimeDB.

HTTP & Generic

HTTP

Send events to any HTTP endpoint with custom encoding.

WebSocket

Stream events over WebSocket connections.

WebHDFS

Write files to Hadoop HDFS via the WebHDFS API.

Development & Debugging

Console

Print events to stdout for debugging.

Blackhole

Discard events (useful for testing and benchmarking).

File

Write events to local files.

Other Platforms

Axiom

Send logs and events to Axiom.

AppSignal

Forward metrics to AppSignal.

GCP Chronicle

Send security telemetry to Google Chronicle.

OpenTelemetry

Export traces and metrics in OpenTelemetry format.

Vector

Send events to another Vector instance.

Common Features

All Vector sinks share common capabilities:

Batching

Sinks automatically batch events to optimize throughput and reduce overhead. Configure batch size and timeout to balance latency and efficiency.
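As an illustration, batching is tuned per sink through its `batch` table; the values below are examples, not recommendations:

```toml
# Flush whichever limit is hit first.
[sinks.my_sink.batch]
max_events = 500       # flush once 500 events are buffered...
max_bytes = 1_000_000  # ...or the batch reaches ~1 MB...
timeout_secs = 2       # ...or 2 seconds elapse
```

Lower `timeout_secs` reduces delivery latency at the cost of smaller, more frequent requests.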

Compression

Most sinks support compression (gzip, zstd, snappy) to reduce network bandwidth and storage costs.
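Where supported, compression is typically a single sink-level option (which algorithms are available varies by sink; the endpoint URI here is a placeholder):

```toml
[sinks.my_sink]
type = "http"
inputs = ["my_transform"]
uri = "https://example.com/ingest"   # placeholder endpoint
compression = "zstd"                 # or "gzip" / "snappy", depending on the sink
encoding.codec = "json"
```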

Retries

Automatic retry logic with exponential backoff handles transient failures gracefully.
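Retry behavior generally lives under the sink's `request` table. The keys below are the common ones, but check the specific sink's reference; values are illustrative:

```toml
[sinks.my_sink.request]
retry_attempts = 5               # give up after 5 retries
retry_initial_backoff_secs = 1   # first backoff; grows exponentially on each retry
retry_max_duration_secs = 30     # cap the backoff at 30 seconds
```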

Acknowledgements

When enabled, end-to-end acknowledgements hold events at the source until the sink confirms delivery, providing at-least-once delivery guarantees.
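Acknowledgements are opted into per sink (the upstream source must also support them):

```toml
[sinks.my_sink.acknowledgements]
enabled = true   # sources wait for this sink to confirm delivery before acking upstream
```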

Health Checks

Built-in health checks verify sink connectivity at startup.
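The startup check can be disabled per sink, which is sometimes useful when the destination is expected to come up after Vector does:

```toml
[sinks.my_sink.healthcheck]
enabled = false   # skip the startup connectivity check for this sink
```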

TLS/SSL

Secure connections with TLS support, including custom CA certificates and client authentication.
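TLS options follow a common `tls` table shape across sinks; the file paths below are placeholders:

```toml
[sinks.my_sink.tls]
enabled = true
ca_file = "/etc/vector/ca.pem"        # custom CA to trust (placeholder path)
crt_file = "/etc/vector/client.pem"   # client certificate for mutual TLS
key_file = "/etc/vector/client.key"   # matching private key
verify_certificate = true             # reject unverifiable server certificates
```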

Configuration Example

Here’s a basic sink configuration pattern:
[sinks.my_sink]
type = "elasticsearch"     # Sink type
inputs = ["my_transform"]  # IDs of the sources or transforms to read from

# Sink-specific configuration
endpoints = ["http://localhost:9200"]

# Common options; note that sink-level keys like compression must appear
# before any sub-tables, or TOML assigns them to the preceding table
compression = "gzip"

[sinks.my_sink.batch]
max_events = 1000
timeout_secs = 5

[sinks.my_sink.encoding]
codec = "json"

Choosing a Sink

When selecting a sink, consider:
  • Data Type: Does the sink support your data type (logs, metrics, traces)?
  • Performance: What throughput and latency do you need?
  • Cost: What are the egress, storage, and API pricing implications?
  • Integration: Does it integrate with your existing tools?
  • Reliability: What delivery guarantees do you need?

Next Steps

Elasticsearch Sink

Learn how to configure the Elasticsearch sink

S3 Sink

Store data in AWS S3 with automatic partitioning

Kafka Sink

Publish events to Kafka topics

Datadog Sink

Send observability data to Datadog
