The Prometheus Exporter sink exposes metrics on an HTTP endpoint that Prometheus can scrape. It aggregates distribution metrics into histograms or summaries and maintains metric state between scrapes.
Configuration

```toml
[sinks.prometheus_exporter]
type = "prometheus_exporter"
inputs = ["my_metrics"]

# HTTP server address
address = "0.0.0.0:9598"

# Optional namespace prefix
default_namespace = "service"

# Flush expired metrics interval
flush_period_secs = 60
```
Core Parameters

address
string
default: "0.0.0.0:9598"

The address to expose for scraping. Metrics are served at /metrics.

```toml
address = "0.0.0.0:9598"      # Listen on all interfaces
address = "127.0.0.1:9598"    # Localhost only
address = "192.168.0.10:9598" # Specific interface
```
default_namespace
string

Default namespace prefix for metrics that don’t have one. The namespace is prepended to metric names with an underscore separator.

```toml
default_namespace = "service" # metrics become: service_metric_name
default_namespace = "app"     # metrics become: app_metric_name
```
flush_period_secs
uint
default: 60

Interval in seconds to flush expired metrics. Metrics not updated since the last flush are considered expired and removed. Set this higher than your Prometheus scrape interval.

```toml
flush_period_secs = 60  # Flush every minute
flush_period_secs = 300 # Flush every 5 minutes
```
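The expiry behavior can be sketched as a last-seen map that drops stale entries on each flush. This is an illustrative model, not Vector's internal code; the class and method names are made up:

```python
import time

class MetricStore:
    """Sketch of the exporter's expiry behavior: metrics not updated
    within flush_period_secs are dropped at the next flush."""

    def __init__(self, flush_period_secs=60):
        self.flush_period = flush_period_secs
        self.last_seen = {}  # metric name -> last update timestamp
        self.values = {}     # metric name -> latest value

    def record(self, name, value, now=None):
        now = time.time() if now is None else now
        self.last_seen[name] = now
        self.values[name] = value

    def flush_expired(self, now=None):
        """Remove and return metrics unseen for longer than the flush period."""
        now = time.time() if now is None else now
        expired = [n for n, t in self.last_seen.items()
                   if now - t > self.flush_period]
        for n in expired:
            del self.last_seen[n], self.values[n]
        return expired

store = MetricStore(flush_period_secs=60)
store.record("requests_total", 10, now=0)
store.record("errors_total", 1, now=0)
store.record("requests_total", 12, now=50)  # refreshed, so it survives
print(store.flush_expired(now=70))          # → ['errors_total']
```

This is why the flush period must exceed the scrape interval: a metric that is still being scraped but not re-emitted between flushes would otherwise vanish.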
suppress_timestamp
bool
default: false

Suppress timestamps in the Prometheus output. Useful when aggregating metrics over long periods or replaying old metrics.

```toml
suppress_timestamp = true
```
Distribution Metrics
Vector’s distribution metrics must be aggregated into Prometheus-compatible formats:
Histograms (Default)

distributions_as_summaries
bool
default: false

When false (the default), distributions are aggregated into histograms.

buckets
array
default: "[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]"

Bucket upper bounds for aggregating distributions into histograms.

```toml
# Default buckets (suitable for response times in seconds)
buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]

# Custom buckets for milliseconds
buckets = [1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000]

# Memory usage in MB
buckets = [10, 50, 100, 250, 500, 1000, 2500, 5000]
```
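When latencies span several orders of magnitude, exponentially spaced buckets often fit better than hand-picked ones. A minimal generator, mirroring the exponential-bucket helpers found in Prometheus client libraries (the function name is illustrative, not part of Vector):

```python
def exponential_buckets(start, factor, count):
    """Generate `count` bucket upper bounds starting at `start`, each
    `factor` times the previous. The resulting list is what you would
    paste into the `buckets` option."""
    bounds, bound = [], start
    for _ in range(count):
        bounds.append(round(bound, 6))
        bound *= factor
    return bounds

print(exponential_buckets(0.005, 2, 11))
# → [0.005, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12]
```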
Summaries

distributions_as_summaries
bool
default: false

When true, distributions are aggregated into summaries.

quantiles
array
default: "[0.5, 0.75, 0.9, 0.95, 0.99]"

Quantiles to calculate when aggregating distributions into summaries.

```toml
distributions_as_summaries = true
quantiles = [0.5, 0.75, 0.9, 0.95, 0.99, 0.999]
```
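What summary aggregation computes per configured quantile can be sketched with a simple nearest-rank estimator. This is illustrative only; Vector's actual estimator may differ:

```python
def quantile(samples, q):
    """Nearest-rank q-th quantile over a batch of distribution samples."""
    ordered = sorted(samples)
    # Clamp the rank index so q close to 1.0 maps to the largest sample.
    idx = min(len(ordered) - 1, int(q * len(ordered)))
    return ordered[idx]

samples = [12, 5, 7, 30, 9, 41, 3, 8, 15, 22]
print([quantile(samples, q) for q in (0.5, 0.9, 0.99)])  # → [12, 41, 41]
```

Note that each quantile is computed and stored separately, which is part of why summaries cost more than histograms as the quantile list grows.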
```toml
# Use histograms (default, recommended)
[sinks.prom]
type = "prometheus_exporter"
inputs = ["metrics"]
buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]

# Or use summaries
[sinks.prom_summary]
type = "prometheus_exporter"
inputs = ["metrics"]
distributions_as_summaries = true
quantiles = [0.5, 0.9, 0.95, 0.99]
```
Authentication

- username: HTTP Basic authentication username.
- password: HTTP Basic authentication password.

```toml
[sinks.prometheus.auth]
username = "prometheus"
password = "${PROM_PASSWORD}"
```
TLS Configuration

- enabled: Enable TLS/SSL for the HTTP endpoint.
- crt_file: Path to the TLS certificate file.
- key_file: Path to the TLS private key file.

```toml
[sinks.prometheus.tls]
enabled = true
crt_file = "/etc/vector/certs/server.crt"
key_file = "/etc/vector/certs/server.key"
```
Metric Types
The Prometheus exporter handles Vector’s metric types:
| Vector Type | Prometheus Type | Description |
|---|---|---|
| Counter | Counter | Monotonically increasing value |
| Gauge | Gauge | Value that can increase or decrease |
| Distribution | Histogram or Summary | Statistical distribution of values |
| Set | Gauge | Count of unique values |
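The Distribution → Histogram conversion in the table amounts to counting samples into cumulative buckets (each bucket counts samples at or below its upper bound, plus an implicit +Inf bucket). A sketch of the conversion, not Vector's internal code:

```python
def to_histogram(samples, bounds):
    """Aggregate raw distribution samples into cumulative Prometheus
    histogram bucket counts, with a trailing +Inf bucket."""
    counts = []
    for bound in bounds:
        counts.append(sum(1 for s in samples if s <= bound))
    counts.append(len(samples))  # +Inf bucket: every sample falls in it
    return counts

samples = [0.03, 0.2, 0.7, 1.5, 4.0, 12.0]
bounds = [0.05, 0.5, 1.0, 5.0]
print(to_histogram(samples, bounds))  # → [1, 2, 3, 5, 6]
```

Because the output size is fixed by the number of buckets, histograms bound memory regardless of how many raw samples arrive, which is why they are the recommended default.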
Complete Examples
Basic Configuration
```toml
[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["host_metrics"]
address = "0.0.0.0:9598"
default_namespace = "vector"
```
Then configure Prometheus to scrape:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'vector'
    static_configs:
      - targets: ['localhost:9598']
```
With Authentication
```toml
[sinks.prometheus_secure]
type = "prometheus_exporter"
inputs = ["metrics"]
address = "0.0.0.0:9598"

[sinks.prometheus_secure.auth]
username = "prometheus"
password = "${PROMETHEUS_PASSWORD}"
```
Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'vector'
    static_configs:
      - targets: ['vector:9598']
    basic_auth:
      username: prometheus
      password: ${PROMETHEUS_PASSWORD}
```
With TLS
```toml
[sinks.prometheus_tls]
type = "prometheus_exporter"
inputs = ["metrics"]
address = "0.0.0.0:9598"

[sinks.prometheus_tls.tls]
enabled = true
crt_file = "/etc/vector/tls/server.crt"
key_file = "/etc/vector/tls/server.key"
```
Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'vector'
    scheme: https
    static_configs:
      - targets: ['vector:9598']
    tls_config:
      ca_file: /etc/prometheus/ca.crt
      insecure_skip_verify: false
```
Custom Histogram Buckets
```toml
[sinks.prometheus_custom]
type = "prometheus_exporter"
inputs = ["response_times"]
address = "0.0.0.0:9598"
default_namespace = "http"

# Buckets optimized for response times in milliseconds
buckets = [1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000]
```
Using Summaries
```toml
[sinks.prometheus_summary]
type = "prometheus_exporter"
inputs = ["request_durations"]
address = "0.0.0.0:9598"

# Use summaries instead of histograms
distributions_as_summaries = true
quantiles = [0.5, 0.9, 0.95, 0.99, 0.999]
```
Multiple Namespaces
```toml
# API metrics
[sinks.prom_api]
type = "prometheus_exporter"
inputs = ["api_metrics"]
address = "0.0.0.0:9598"
default_namespace = "api"

# Database metrics
[sinks.prom_db]
type = "prometheus_exporter"
inputs = ["db_metrics"]
address = "0.0.0.0:9599"
default_namespace = "database"
```
High-Cardinality Handling
```toml
[sinks.prometheus_limited]
type = "prometheus_exporter"
inputs = ["filtered_metrics"]
address = "0.0.0.0:9598"

# Flush more frequently to limit memory usage
flush_period_secs = 30

# Use histograms to bound cardinality
buckets = [0.1, 0.5, 1.0, 5.0, 10.0]
```
Kubernetes Deployment
Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vector-metrics
  labels:
    app: vector
spec:
  type: ClusterIP
  ports:
    - port: 9598
      targetPort: 9598
      name: metrics
  selector:
    app: vector
```
ServiceMonitor (Prometheus Operator)
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vector
  labels:
    app: vector
spec:
  selector:
    matchLabels:
      app: vector
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
```
PodMonitor
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: vector
spec:
  selector:
    matchLabels:
      app: vector
  podMetricsEndpoints:
    - port: metrics
      interval: 30s
      path: /metrics
```
Prometheus Queries
Example PromQL queries for Vector metrics:
```promql
# Rate of events processed
rate(vector_events_processed_total[5m])

# P95 processing latency (histogram)
histogram_quantile(0.95, rate(vector_processing_duration_seconds_bucket[5m]))

# Memory usage by component
sum by (component) (vector_memory_bytes)

# Error rate
rate(vector_errors_total[5m])

# Events throughput by sink
sum by (sink) (rate(vector_sink_events_total[5m]))
```
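`histogram_quantile` estimates a quantile by linear interpolation within the bucket where the target rank falls. The idea in miniature, assuming cumulative `(upper_bound, count)` pairs ending in a `+Inf` bucket:

```python
def histogram_quantile(q, buckets):
    """Estimate the q-th quantile from cumulative histogram buckets
    [(upper_bound, cumulative_count), ...], PromQL-style."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # cannot interpolate into +Inf
            # Linear interpolation within the bucket containing the rank
            return prev_bound + (bound - prev_bound) * (
                (rank - prev_count) / (count - prev_count))
        prev_bound, prev_count = bound, count

# 20 samples <= 0.1s, 60 <= 0.5s, 90 <= 1.0s, 100 total
buckets = [(0.1, 20), (0.5, 60), (1.0, 90), (float("inf"), 100)]
print(round(histogram_quantile(0.5, buckets), 4))  # → 0.4
```

This is also why bucket boundaries matter: the estimate can never be more precise than the bucket the quantile lands in, so choose boundaries that bracket the values you care about.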
Troubleshooting
Endpoint Not Accessible
If you can’t reach the /metrics endpoint:
- Verify the address binding (use 0.0.0.0 to listen on all interfaces)
- Check firewall rules and network policies
- Ensure the port is not already in use
- Review Vector logs for binding errors
- Test with curl http://localhost:9598/metrics
No Metrics Appearing
If /metrics is empty:
- Verify metrics are flowing to the sink (check inputs)
- Check metric types are compatible
- Ensure metrics have been generated since Vector started
- Review Vector logs for errors
- Wait for the first scrape interval
High Memory Usage
For memory issues:
- Reduce flush_period_secs to expire metrics faster
- Limit metric cardinality (reduce label combinations)
- Use histograms instead of summaries
- Filter high-cardinality labels before the exporter
- Reduce the size of the buckets or quantiles arrays
Metrics Disappearing
If metrics vanish between scrapes:
- Increase flush_period_secs (it must exceed the scrape interval)
- Ensure Prometheus scrape interval aligns with flush period
- Check if metrics are being continuously generated
- Review Vector logs for errors
Best Practices
- Set flush period > scrape interval to prevent metric expiration
- Use histograms over summaries for better aggregation
- Choose appropriate buckets based on your metric ranges
- Add a default namespace to avoid name collisions
- Enable authentication in production environments
- Use TLS for sensitive metrics
- Limit cardinality to prevent memory issues
- Monitor the exporter with Prometheus queries
- Use ServiceMonitor in Kubernetes with Prometheus Operator
- Document custom metrics for your team
Metric Cardinality
High cardinality can cause memory issues. Tips to manage:
- Limit label values: Avoid unbounded labels (user IDs, UUIDs)
- Aggregate before export: Use transforms to group metrics
- Drop high-cardinality labels: Remove unnecessary labels
- Use histograms: Bound distribution metrics
- Increase flush frequency: Expire metrics sooner
- Sample if needed: Reduce metric volume
```toml
# Drop high-cardinality labels
[transforms.reduce_cardinality]
type = "remap"
inputs = ["metrics"]
source = '''
del(.tags.user_id) # Remove unbounded label
.tags.endpoint = replace(string!(.tags.endpoint), r'/\d+/', "/id/") # Normalize IDs
'''

[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["reduce_cardinality"]
address = "0.0.0.0:9598"
```
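Why unbounded labels matter: the exporter holds one time series per metric per unique label-value combination, so series count is the product of each label's distinct values. A back-of-envelope calculator (the metric names and numbers below are made up for illustration):

```python
def series_count(metric_names, label_values):
    """Estimate exporter cardinality: one series per metric per unique
    combination of label values."""
    combos = 1
    for values in label_values.values():
        combos *= len(values)
    return len(metric_names) * combos

labels = {
    "endpoint": ["/users/id", "/orders/id"],  # normalized paths
    "method": ["GET", "POST"],
    "status": ["200", "404", "500"],
}
print(series_count(["http_requests_total"], labels))  # → 12

# The same metric keyed by a raw user_id label (say 10,000 users):
labels["user_id"] = [str(i) for i in range(10_000)]
print(series_count(["http_requests_total"], labels))  # → 120000
```

Normalizing or dropping a single unbounded label, as the transform above does, is usually the single biggest memory win.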
Performance Considerations
- Scrape interval: Balance freshness with overhead (15-60s typical)
- Flush period: Set to 2-3x the scrape interval
- Histogram buckets: More buckets = more memory, choose wisely
- Summaries: More expensive than histograms, use sparingly
- Authentication: Minimal overhead, enable freely
- TLS: Small overhead, worth it for security
See Also