
Prerequisites

Before starting, ensure you have:
  • A running PostgreSQL server (versions 13-18 recommended)
  • Docker installed, or access to download the binary
  • Network access from the exporter to PostgreSQL (default port 5432)
  • PostgreSQL user credentials with monitoring permissions
This guide uses Docker for the fastest setup. See Installation for other methods.

Launch in 3 Steps

Step 1: Start a Test PostgreSQL Database (Optional)

If you don’t have a PostgreSQL instance available, start one for testing:
docker run --name postgres-test \
  --net=host \
  -e POSTGRES_PASSWORD=password \
  -d postgres:16
This creates a PostgreSQL 16 instance on localhost:5432 with:
  • Username: postgres
  • Password: password
  • Database: postgres
This configuration is for testing only. Never use default credentials in production.
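Before wiring in the exporter, you can confirm the test database is actually accepting connections. A quick sketch using the container started above (adjust the container name if yours differs):

```shell
# Check that the test database accepts connections
docker exec postgres-test pg_isready -U postgres

# Run a quick query to confirm the server version
docker exec postgres-test psql -U postgres -c 'SELECT version();'
```

If `pg_isready` reports "accepting connections", the exporter should be able to reach the database over the host network.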
Step 2: Run the PostgreSQL Exporter

Start the exporter container with connection parameters:
docker run -d \
  --name postgres-exporter \
  --net=host \
  -e DATA_SOURCE_URI="localhost:5432/postgres?sslmode=disable" \
  -e DATA_SOURCE_USER="postgres" \
  -e DATA_SOURCE_PASS="password" \
  quay.io/prometheuscommunity/postgres-exporter
  • DATA_SOURCE_URI: PostgreSQL connection string in format host:port/database?options
  • DATA_SOURCE_USER: PostgreSQL username with monitoring access
  • DATA_SOURCE_PASS: User password
  • --net=host: Shares the host network namespace (simplifies local testing)
  • Port 9187: Default metrics endpoint; with --net=host it is reachable directly on the host

Secure Production Configuration

For production deployments, use file-based secrets instead of environment variables:
# Create a password file
echo "your-secure-password" > /secure/postgres-password.txt
chmod 400 /secure/postgres-password.txt

docker run -d \
  --name postgres-exporter \
  -p 9187:9187 \
  -v /secure/postgres-password.txt:/secrets/password:ro \
  -e DATA_SOURCE_URI="postgres-server:5432/postgres?sslmode=require" \
  -e DATA_SOURCE_USER="postgres_exporter" \
  -e DATA_SOURCE_PASS_FILE="/secrets/password" \
  quay.io/prometheuscommunity/postgres-exporter
The container runs as uid/gid 65534 (nobody). Ensure mounted password files are readable by this user.
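One way to satisfy that requirement is to hand the file to uid 65534 on the host before starting the container (paths here are the illustrative ones from the example above):

```shell
# Make the password file owned and readable only by uid 65534 (nobody),
# the user the exporter container runs as
sudo chown 65534:65534 /secure/postgres-password.txt
sudo chmod 400 /secure/postgres-password.txt
```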
Step 3: Verify Metrics Collection

Test the metrics endpoint to confirm the exporter is working:
curl http://localhost:9187/metrics
You should see Prometheus-formatted metrics including:
# HELP pg_up Whether the last scrape of metrics from PostgreSQL was able to connect to the server (1 for yes, 0 for no).
# TYPE pg_up gauge
pg_up 1

# HELP pg_stat_database_numbackends Number of backends currently connected to this database
# TYPE pg_stat_database_numbackends gauge
pg_stat_database_numbackends{datname="postgres"} 3

# HELP pg_stat_database_xact_commit Number of transactions in this database that have been committed
# TYPE pg_stat_database_xact_commit counter
pg_stat_database_xact_commit{datname="postgres"} 12847
  • pg_up 1: Exporter successfully connected to PostgreSQL
  • pg_static{version=...}: PostgreSQL version information
  • pg_stat_database_*: Database-level statistics (enabled by default)
  • pg_exporter_last_scrape_error 0: No collection errors

Configure Prometheus Scraping

Add the exporter to your Prometheus configuration:
scrape_configs:
  - job_name: 'postgres'
    static_configs:
      - targets: ['localhost:9187']
        labels:
          instance: 'postgres-prod-01'
          environment: 'production'
Replace localhost:9187 with the exporter’s hostname or IP if it runs on a different host than Prometheus.
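If you have promtool available (it ships with Prometheus), it can validate the edited configuration before you reload; the config path below is illustrative:

```shell
# Validate the Prometheus configuration after adding the scrape job
promtool check config /etc/prometheus/prometheus.yml
```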

Enable Optional Collectors

By default, the exporter enables essential collectors. Enable additional collectors using command-line flags:
docker run -d \
  --name postgres-exporter \
  --net=host \
  -e DATA_SOURCE_URI="localhost:5432/postgres?sslmode=disable" \
  -e DATA_SOURCE_USER="postgres" \
  -e DATA_SOURCE_PASS="password" \
  quay.io/prometheuscommunity/postgres-exporter \
  --collector.stat_statements \
  --collector.long_running_transactions \
  --collector.database_wraparound
  • --collector.stat_statements: Top queries by execution time (requires pg_stat_statements extension)
  • --collector.long_running_transactions: Detect queries exceeding thresholds
  • --collector.database_wraparound: Monitor transaction ID wraparound risk
  • --collector.stat_checkpointer: Checkpoint performance metrics
  • --collector.postmaster: PostgreSQL server process information
See the Configuration Reference for the complete collector list.
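The stat_statements collector only returns data if the pg_stat_statements extension is loaded and created in the database the exporter connects to. A minimal sketch (the setting name is standard PostgreSQL; your config file location may differ):

```shell
# pg_stat_statements must be preloaded at server start
# (in postgresql.conf: shared_preload_libraries = 'pg_stat_statements',
# followed by a server restart), then created in the target database:
psql -h localhost -U postgres -d postgres \
  -c 'CREATE EXTENSION IF NOT EXISTS pg_stat_statements;'
```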

PostgreSQL User Setup

Create a dedicated monitoring user with appropriate permissions:
-- Create monitoring user
CREATE USER postgres_exporter WITH PASSWORD 'secure-password';

-- Grant monitoring role (PostgreSQL 10+)
GRANT pg_monitor TO postgres_exporter;

-- Grant database connection
GRANT CONNECT ON DATABASE postgres TO postgres_exporter;
AWS RDS/Aurora users: you do not get superuser access, so helper functions that require it cannot be created. Use the master user for monitoring, or grant the monitoring role to the master user:
GRANT postgres_exporter TO <RDS_MASTER_USER>;
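To verify the new user has the access the exporter needs, you can connect as that user and read one of the statistics views; a row count (rather than a permission error) confirms the grants worked:

```shell
# Connect as the monitoring user created above and read a stats view
psql -h localhost -U postgres_exporter -d postgres \
  -c 'SELECT count(*) FROM pg_stat_activity;'
```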

Verify Full Setup

Run these checks to confirm everything is working:
Step 1: Check Exporter Logs

docker logs postgres-exporter
Look for:
level=info msg="Established new database connection" server="localhost:5432"
level=info msg="Semantic version changed" from="0.0.0" to="16.4.0"
Step 2: Test Metrics Endpoint

curl -s http://localhost:9187/metrics | grep pg_up
Expected output:
pg_up 1
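For an automated health check, the same value can be parsed out of the metrics text, for example with awk. A sketch, shown against a captured sample rather than a live endpoint:

```shell
# Sample of the exporter's /metrics output (in practice, pipe from:
#   curl -s http://localhost:9187/metrics)
metrics='pg_up 1
pg_exporter_last_scrape_error 0'

# Extract the pg_up value; 1 means the last scrape reached PostgreSQL
pg_up=$(printf '%s\n' "$metrics" | awk '$1 == "pg_up" {print $2}')

if [ "$pg_up" = "1" ]; then
  echo "exporter healthy"
else
  echo "exporter cannot reach PostgreSQL" >&2
fi
```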
Step 3: Query from Prometheus

Open Prometheus UI at http://localhost:9090 and run:
pg_stat_database_numbackends{datname="postgres"}
You should see current connection counts.
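Counters such as pg_stat_database_xact_commit are usually queried as rates rather than raw values; an illustrative PromQL example:

```promql
# Commits per second over the last 5 minutes, per database
rate(pg_stat_database_xact_commit{datname="postgres"}[5m])
```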

Common Issues

Connection to PostgreSQL failed. Check:
  • PostgreSQL is running and accessible: psql -h localhost -U postgres
  • Credentials are correct in DATA_SOURCE_USER and DATA_SOURCE_PASS
  • Network connectivity: telnet postgres-host 5432
  • pg_hba.conf allows connections from the exporter’s IP
  • Firewall rules allow port 5432

Metrics are missing. Check:
  • The collector is enabled: docker exec postgres-exporter ps aux (enabled collector flags appear in the process arguments)
  • PostgreSQL version compatibility (some collectors require PostgreSQL 10+)

Permission errors. Check:
  • User permissions: SELECT * FROM pg_stat_activity should return results
  • For PostgreSQL 10+: run GRANT pg_monitor TO postgres_exporter;
  • For PostgreSQL 9.x: create helper functions (see PostgreSQL User Setup above)
  • SEARCH_PATH includes the monitoring schema

Scrapes are slow or timing out. Try:
  • Disable expensive collectors you don’t need
  • Increase PG_EXPORTER_COLLECTION_TIMEOUT if queries are timing out
  • For the stat_statements collector, reduce --collector.stat_statements.limit
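A quick triage pass over the checks above might look like this (a sketch; the hostnames and container names are the ones used in the examples in this guide):

```shell
# 1. Is the exporter process healthy?
docker logs --tail 20 postgres-exporter

# 2. Can this host reach PostgreSQL?
pg_isready -h localhost -p 5432 || echo "PostgreSQL not reachable"

# 3. Did the last scrape succeed?
curl -s http://localhost:9187/metrics | \
  grep -E '^(pg_up|pg_exporter_last_scrape_error) '
```

pg_up 1 together with pg_exporter_last_scrape_error 0 indicates a fully working scrape path.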

Next Steps

Installation Methods

Binary installation, Kubernetes, and from-source builds

Configuration Reference

Complete guide to all flags, collectors, and environment variables

Multi-Target Setup

Monitor multiple PostgreSQL instances from one exporter

Prometheus Queries

Example PromQL queries for common monitoring scenarios
