Query Exporter can be deployed on Kubernetes clusters using Helm charts or manual YAML manifests.

Helm Chart

A community-maintained Helm chart is available for deploying Query Exporter.
1. Add the Helm repository

helm repo add makezbs https://makezbs.github.io/helm-charts/
helm repo update
2. Install the chart

helm install query-exporter makezbs/query-exporter \
  --namespace monitoring \
  --create-namespace
3. Verify the deployment

kubectl get pods -n monitoring
kubectl get svc -n monitoring

Helm Chart Repository

For more information and configuration options, see the Helm chart repository.
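Chart values differ between chart versions, so it is worth inspecting the configurable options before installing. The commands below are standard Helm usage; `my-values.yaml` is a hypothetical override file you would create yourself:

```shell
# List the values the chart exposes for customization
helm show values makezbs/query-exporter

# Install (or upgrade in place) with your overrides kept in a values file
helm upgrade --install query-exporter makezbs/query-exporter \
  --namespace monitoring \
  --create-namespace \
  -f my-values.yaml
```

Keeping overrides in a file rather than `--set` flags makes the release reproducible and easy to version-control.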

Manual Deployment with YAML

You can also deploy Query Exporter using Kubernetes manifests.

ConfigMap for Configuration

Create a ConfigMap to store your config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: query-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    databases:
      postgres:
        dsn: postgresql://user:password@postgres-service:5432/mydb
        labels:
          env: production

    metrics:
      user_count:
        type: gauge
        description: Number of users in the database

    queries:
      users:
        interval: 60
        databases: [postgres]
        metrics: [user_count]
        sql: SELECT COUNT(*) as user_count FROM users
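If you keep `config.yaml` as a standalone file, you can generate the ConfigMap manifest from it instead of embedding the YAML by hand. This is a sketch using kubectl's client-side dry run; the local file path is an example:

```shell
# Render a ConfigMap manifest from a local config.yaml without touching the cluster
kubectl create configmap query-exporter-config \
  --namespace monitoring \
  --from-file=config.yaml \
  --dry-run=client -o yaml > configmap.yaml
```

This avoids indentation mistakes when pasting the config into the `data` block manually.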

Deployment

Create a Deployment for Query Exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: query-exporter
  namespace: monitoring
  labels:
    app: query-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: query-exporter
  template:
    metadata:
      labels:
        app: query-exporter
    spec:
      containers:
      - name: query-exporter
        image: adonato/query-exporter:latest
        ports:
        - name: metrics
          containerPort: 9560
          protocol: TCP
        env:
        - name: QE_LOG_LEVEL
          value: "info"
        - name: QE_PROCESS_STATS
          value: "true"
        volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true
        livenessProbe:
          httpGet:
            path: /metrics
            port: metrics
          initialDelaySeconds: 10
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /metrics
            port: metrics
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
      volumes:
      - name: config
        configMap:
          name: query-exporter-config
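Query Exporter reads its configuration at startup, so editing the ConfigMap alone does not reconfigure running pods. After a config change, trigger a restart and wait for the new pods to become ready:

```shell
# Restart pods so they pick up the updated ConfigMap
kubectl rollout restart deployment/query-exporter -n monitoring

# Block until the new pods pass their readiness probes
kubectl rollout status deployment/query-exporter -n monitoring
```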

Service

Create a Service to expose the metrics endpoint:
apiVersion: v1
kind: Service
metadata:
  name: query-exporter
  namespace: monitoring
  labels:
    app: query-exporter
spec:
  type: ClusterIP
  ports:
  - port: 9560
    targetPort: metrics
    protocol: TCP
    name: metrics
  selector:
    app: query-exporter

ServiceMonitor for Prometheus Operator

If using Prometheus Operator, create a ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: query-exporter
  namespace: monitoring
  labels:
    app: query-exporter
spec:
  selector:
    matchLabels:
      app: query-exporter
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
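Note that a Prometheus Operator instance only picks up ServiceMonitors matched by its `serviceMonitorSelector`; many installations (e.g. kube-prometheus-stack) require an extra label such as `release: <release-name>` on the ServiceMonitor. A quick sanity check:

```shell
# Confirm the ServiceMonitor exists and inspect its labels
kubectl get servicemonitor -n monitoring query-exporter --show-labels
```

If the target does not appear in the Prometheus UI, a label/selector mismatch is the usual culprit.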

Complete Deployment Example

1. Create namespace

kubectl create namespace monitoring
2. Apply ConfigMap

Save the ConfigMap YAML to configmap.yaml and apply:
kubectl apply -f configmap.yaml
3. Apply Deployment

Save the Deployment YAML to deployment.yaml and apply:
kubectl apply -f deployment.yaml
4. Apply Service

Save the Service YAML to service.yaml and apply:
kubectl apply -f service.yaml
5. Verify deployment

Check that pods are running:
kubectl get pods -n monitoring -l app=query-exporter
Check the service:
kubectl get svc -n monitoring query-exporter
6. Test metrics endpoint

Port-forward to test locally:
kubectl port-forward -n monitoring svc/query-exporter 9560:9560
Access metrics at http://localhost:9560/metrics
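With the port-forward running in one terminal, you can fetch the endpoint from another to confirm the exporter is serving metrics:

```shell
# Fetch the first few lines of Prometheus exposition output
curl -s http://localhost:9560/metrics | head -n 20
```

If the query-defined metrics (e.g. user_count) are missing, check the pod logs for database connection or SQL errors.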

Using Secrets for Sensitive Data

Store database credentials in Kubernetes Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
  namespace: monitoring
type: Opaque
stringData:
  postgres-dsn: postgresql://user:password@postgres-service:5432/mydb
Reference the secret in your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: query-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    databases:
      postgres:
        dsn: env:POSTGRES_DSN
        labels:
          env: production
    # ... rest of config
Add environment variable to Deployment:
env:
- name: POSTGRES_DSN
  valueFrom:
    secretKeyRef:
      name: database-credentials
      key: postgres-dsn
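Instead of committing the Secret manifest (which contains the plaintext DSN under stringData), you can create the Secret directly from the command line:

```shell
# Create the Secret without writing the DSN to a manifest file
kubectl create secret generic database-credentials \
  --namespace monitoring \
  --from-literal=postgres-dsn='postgresql://user:password@postgres-service:5432/mydb'
```

This keeps credentials out of version control; pair it with a secrets manager or sealed-secrets tooling for production use.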

Multiple Configuration Files

You can split configuration across multiple ConfigMaps:
apiVersion: v1
kind: ConfigMap
metadata:
  name: query-exporter-databases
  namespace: monitoring
data:
  databases.yaml: |
    databases:
      postgres:
        dsn: postgresql://user:password@postgres-service:5432/mydb
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: query-exporter-queries
  namespace: monitoring
data:
  queries.yaml: |
    metrics:
      user_count:
        type: gauge
        description: Number of users
    queries:
      users:
        interval: 60
        databases: [postgres]
        metrics: [user_count]
        sql: SELECT COUNT(*) as user_count FROM users
Mount both ConfigMaps and use QE_CONFIG environment variable:
env:
- name: QE_CONFIG
  value: "/config/databases.yaml,/config/queries.yaml"
volumeMounts:
- name: databases
  mountPath: /config/databases.yaml
  subPath: databases.yaml
- name: queries
  mountPath: /config/queries.yaml
  subPath: queries.yaml
volumes:
- name: databases
  configMap:
    name: query-exporter-databases
- name: queries
  configMap:
    name: query-exporter-queries
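After deploying, it is worth confirming that both files landed where QE_CONFIG expects them:

```shell
# List the mounted config files inside a running pod
kubectl exec -n monitoring deploy/query-exporter -- ls -l /config
```

You should see both databases.yaml and queries.yaml; a missing file usually means a subPath or volume name typo.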

Scaling Considerations

  • Query Exporter is typically deployed as a single replica: each replica runs the configured queries independently, so multiple replicas duplicate database load and produce duplicate metric series
  • For high availability, consider deploying multiple instances with different sets of queries
  • Use resource limits to prevent excessive resource consumption
  • Monitor the built-in metrics to track query performance and errors
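As an example of watching the built-in metrics, you can filter the exposition output for the exporter's own per-query counters via a port-forward. The metric and label names below (`queries`, `status="error"`) reflect query-exporter's defaults; verify them against the version you run:

```shell
# Count query executions that ended in error (assumes the built-in "queries" counter)
curl -s http://localhost:9560/metrics | grep '^queries{' | grep 'status="error"'
```

A steadily increasing error count is a signal to check pod logs and database connectivity.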
