Sinks are the output components in Vector. They deliver processed events to destinations such as databases, cloud services, and files.

Sink Configuration

Sinks are configured in the sinks section:
sinks:
  <sink_id>:
    type: <sink_type>
    inputs: [<source_or_transform_ids>]
    # Sink-specific options

Common Sink Parameters

All sinks support these base configuration options:
type
string
required
The type of the sink component.
inputs
array
required
Array of source or transform IDs to receive events from.
healthcheck
object
Healthcheck configuration for the sink.
healthcheck:
  enabled: true
  uri: "http://localhost:9200/_cluster/health"
buffer
object
Buffering configuration for the sink.
buffer:
  type: memory
  max_events: 500
  when_full: block
acknowledgements
boolean
default:"false"
Enable end-to-end acknowledgements for this sink.
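
For example, to have upstream sources hold events until this sink confirms delivery (the sink and input IDs are illustrative):
sinks:
  elasticsearch:
    type: elasticsearch
    inputs: [parse_logs]
    endpoint: "http://localhost:9200"
    acknowledgements: true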

Console Sink

Output events to stdout/stderr:
sinks:
  console_output:
    type: console
    inputs: [parse_logs]
    encoding:
      codec: json
      json:
        pretty: true
target
string
default:"stdout"
Output target: stdout or stderr.
encoding
object
required
How to encode events for output.
encoding.codec
string
required
Encoding format: json, text, logfmt, or csv.
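
The target option is not shown in the example above; directing debug output to stderr keeps it separate from data written to stdout (the sink ID is illustrative):
sinks:
  debug_output:
    type: console
    inputs: [parse_logs]
    target: stderr
    encoding:
      codec: text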

File Sink

Write events to files:
sinks:
  file_output:
    type: file
    inputs: [parse_logs]
    path: "/var/log/vector/output-%Y-%m-%d.log"
    encoding:
      codec: json
    compression: gzip
path
string
required
File path template. Supports strftime specifiers for time-based rotation.
compression
string
default:"none"
Compression algorithm: none, gzip, or zstd.
idle_timeout_secs
integer
default:"30"
Close idle files after this many seconds.

Elasticsearch Sink

Send events to Elasticsearch:
sinks:
  elasticsearch:
    type: elasticsearch
    inputs: [parse_logs]
    endpoint: "http://localhost:9200"
    bulk:
      index: "vector-%Y-%m-%d"
      action: index
    auth:
      strategy: basic
      user: "elastic"
      password: "${ELASTICSEARCH_PASSWORD}"
endpoint
string
required
Elasticsearch endpoint URL.
bulk.index
string
required
Index name template. Supports strftime specifiers.
bulk.action
string
default:"index"
Bulk operation type: index, create, or update.
auth
object
Authentication configuration for Elasticsearch.
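
Elasticsearch data streams are append-only and reject the default index action, so writes to a data stream need bulk.action set to create (the endpoint and index name are illustrative):
sinks:
  elasticsearch_ds:
    type: elasticsearch
    inputs: [parse_logs]
    endpoint: "http://localhost:9200"
    bulk:
      index: "logs-app-default"
      action: create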

AWS S3 Sink

Upload events to Amazon S3:
sinks:
  s3_archive:
    type: aws_s3
    inputs: [parse_logs]
    region: us-east-1
    bucket: log-archives
    key_prefix: "date=%Y-%m-%d/"
    compression: gzip
    encoding:
      codec: json
    batch:
      max_bytes: 10000000  # 10MB
      timeout_secs: 300
region
string
required
AWS region where the bucket is located.
bucket
string
required
S3 bucket name.
key_prefix
string
Object key prefix. Supports strftime specifiers for partitioning.
compression
string
default:"gzip"
Compression algorithm: none, gzip, or zstd.
batch
object
Batching configuration for S3 uploads.

HTTP Sink

Send events to HTTP endpoints:
sinks:
  http_endpoint:
    type: http
    inputs: [parse_logs]
    uri: "https://api.example.com/logs"
    method: post
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: "${API_TOKEN}"
uri
string
required
HTTP endpoint URL.
method
string
default:"post"
HTTP method: post, put, or patch.
headers
object
Custom HTTP headers to include.
headers:
  X-Custom-Header: "value"
  Content-Type: "application/json"

Kafka Sink

Produce events to Apache Kafka:
sinks:
  kafka:
    type: kafka
    inputs: [parse_logs]
    bootstrap_servers: "localhost:9092"
    topic: logs
    key_field: host
    encoding:
      codec: json
bootstrap_servers
string
required
Comma-separated list of Kafka bootstrap servers.
topic
string
required
Kafka topic to produce messages to.
key_field
string
Event field to use as the Kafka message key.

Prometheus Exporter Sink

Expose metrics for Prometheus scraping:
sinks:
  prometheus_exporter:
    type: prometheus_exporter
    inputs: [app_metrics]
    address: "0.0.0.0:9598"
    default_namespace: vector
address
string
required
Socket address to listen on for Prometheus scrapes.
default_namespace
string
default:"vector"
Default namespace for metrics without one.

Loki Sink

Send logs to Grafana Loki:
sinks:
  loki:
    type: loki
    inputs: [parse_logs]
    endpoint: "http://localhost:3100"
    labels:
      environment: production
      host: "{{ host }}"
    encoding:
      codec: json
endpoint
string
required
Loki endpoint URL.
labels
object
required
Label set to apply to all logs. Supports template syntax.

Datadog Logs Sink

Send logs to Datadog:
sinks:
  datadog_logs:
    type: datadog_logs
    inputs: [parse_logs]
    default_api_key: "${DATADOG_API_KEY}"
    site: datadoghq.com
default_api_key
string
required
Datadog API key for authentication.
site
string
default:"datadoghq.com"
Datadog site: datadoghq.com, datadoghq.eu, etc.
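
For an account hosted in the EU region, point the sink at the EU site (the API key is read from the environment as above):
sinks:
  datadog_logs_eu:
    type: datadog_logs
    inputs: [parse_logs]
    default_api_key: "${DATADOG_API_KEY}"
    site: datadoghq.eu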

Multiple Sinks Example

Send data to multiple destinations:
sources:
  app_logs:
    type: file
    include: ["/var/log/app/*.log"]

transforms:
  parse:
    type: remap
    inputs: [app_logs]
    source: |
      . = parse_json!(.message)
  
  route:
    type: route
    inputs: [parse]
    route:
      errors: '.level == "error"'
      metrics: '.type == "metric"'
      other: '.level != "error" && .type != "metric"'

sinks:
  # Error logs to Elasticsearch for analysis
  elasticsearch_errors:
    type: elasticsearch
    inputs: [route.errors]
    endpoint: "http://localhost:9200"
    bulk:
      index: "errors-%Y-%m-%d"
  
  # All logs to S3 for long-term storage
  s3_archive:
    type: aws_s3
    inputs: [parse]
    region: us-east-1
    bucket: log-archives
    key_prefix: "date=%Y-%m-%d/"
    compression: gzip
  
  # Metrics to Prometheus
  prometheus:
    type: prometheus_exporter
    inputs: [route.metrics]
    address: "0.0.0.0:9598"
  
  # Debug output to console
  console:
    type: console
    inputs: [route.other]
    encoding:
      codec: json

Buffering

Configure how sinks buffer events:
sinks:
  elasticsearch:
    type: elasticsearch
    inputs: [logs]
    endpoint: "http://localhost:9200"
    buffer:
      type: disk
      max_size: 268435488  # ~256 MB
      when_full: block
buffer.type
string
default:"memory"
Buffer type: memory or disk.
buffer.max_events
integer
default:"500"
Maximum number of events to buffer (memory buffer only).
buffer.max_size
integer
Maximum buffer size in bytes (disk buffer only).
buffer.when_full
string
default:"block"
Behavior when buffer is full: block or drop_newest.
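
When losing events under backpressure is preferable to stalling the pipeline, for example on a best-effort debug sink, a memory buffer can drop instead of block (the sink and input IDs are illustrative):
sinks:
  console_debug:
    type: console
    inputs: [logs]
    encoding:
      codec: json
    buffer:
      type: memory
      max_events: 1000
      when_full: drop_newest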

Batching

Configure how sinks batch events:
sinks:
  http_endpoint:
    type: http
    inputs: [logs]
    uri: "https://api.example.com/logs"
    batch:
      max_events: 100
      max_bytes: 1048576  # 1 MB
      timeout_secs: 10
batch.max_events
integer
Maximum number of events per batch.
batch.max_bytes
integer
Maximum batch size in bytes.
batch.timeout_secs
integer
Maximum time to wait before sending a partial batch.
