Collection Timeout
The collection timeout controls how long the exporter waits for database queries to complete during a scrape.

Setting the Timeout
- Environment Variable
- Command Line Flag
- Docker
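The three methods above can be sketched as follows. The `--collection-timeout` flag name is taken from the troubleshooting section of this page; the environment variable name and Docker image name are assumptions, so verify them against your exporter's `--help` output and documentation:

```shell
# Environment variable (name is an assumption for your build):
export PG_EXPORTER_COLLECTION_TIMEOUT=2m

# Command line flag (flag name as used elsewhere on this page):
postgres_exporter --collection-timeout=2m

# Docker: pass the flag as a container argument (image name is an example)
docker run -e DATA_SOURCE_NAME="postgresql://user:pass@host:5432/postgres" \
  postgres-exporter --collection-timeout=2m
```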
Default Value
The default timeout is 1 minute (1m). This is suitable for most deployments.
When to Adjust
Slow Databases
If you see timeout errors in the logs, increase the timeout. Common causes:
- High database load
- Complex queries in custom collectors
- Slow storage subsystem
- Large tables without proper indexes
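For example, doubling the default timeout to give slow queries more headroom (flag name as used in the troubleshooting section below):

```shell
# Raise the collection timeout from the 1m default to 2m
postgres_exporter --collection-timeout=2m
```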
Fast Scrapes Required
For frequently scraped instances with fast databases, decrease the timeout. This prevents connections from stacking up if the database becomes temporarily slow.
Connection Pool Exhaustion
Lower timeouts help prevent exhausting the database connection pool. When queries take too long, connections stack up; the timeout drops those connections to free up resources.
How Timeout Works

When the collection timeout is reached:
- All running collector queries are canceled
- The database connection is closed
- Partial metrics collected before timeout are still exported
- Collectors that didn’t complete will report failures
Collector Selection
The exporter includes many collectors, but you may not need all of them. Disable unnecessary collectors to improve performance.

Default Collectors
These collectors are enabled by default:

| Collector | Metrics | Impact |
|---|---|---|
| database | Database-level statistics | Low |
| locks | Lock information | Medium |
| replication | Replication status | Low |
| replication_slot | Replication slot details | Low |
| stat_bgwriter | Background writer stats | Low |
| stat_database | Database statistics | Low |
| stat_progress_vacuum | Vacuum progress | Low |
| stat_user_tables | User table statistics | Medium-High |
| statio_user_tables | Table I/O statistics | Medium-High |
| wal | WAL statistics | Low |
Disabled by Default
These collectors are disabled by default due to performance impact or specialized use:

| Collector | Purpose | When to Enable |
|---|---|---|
| database_wraparound | Transaction ID wraparound | If you need wraparound monitoring |
| long_running_transactions | Transactions exceeding threshold | To monitor slow queries |
| postmaster | Postmaster process info | For detailed server monitoring |
| process_idle | Idle connection info | To monitor connection pool usage |
| stat_activity_autovacuum | Autovacuum activity | For vacuum tuning |
| stat_checkpointer | Checkpoint statistics | For write performance tuning |
| stat_statements | Query-level statistics | For query performance analysis (HIGH IMPACT) |
| stat_wal_receiver | WAL receiver stats (standby) | On replica servers |
| statio_user_indexes | Index I/O statistics | For index performance analysis |
| xlog_location | Transaction log position | Advanced replication monitoring |
Enabling/Disabling Collectors
- Command Line
- Docker
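Collector toggling in this exporter family typically follows a `--[no-]collector.<name>` flag convention; a sketch (verify the exact flag names with `--help`):

```shell
# Enable a collector that is disabled by default
postgres_exporter --collector.stat_statements

# Disable a default collector with higher impact
postgres_exporter --no-collector.stat_user_tables
```

The same flags can be passed as container arguments when running under Docker.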
Performance-Focused Configuration
For high-frequency scraping or large databases, disable the higher-impact collectors.

Deep Monitoring Configuration
For detailed performance analysis, enable additional collectors.

Query Performance (stat_statements)
The pg_stat_statements collector provides query-level metrics but requires additional configuration.
Configuring stat_statements Collector
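The `stat_statements.limit` and `query_length` settings mentioned in the troubleshooting section below suggest flags along these lines; the exact flag names are assumptions, so check `postgres_exporter --help` for your build:

```shell
# Hypothetical flag names (assumptions based on settings named on this page):
#   limit        -- how many top statements to export
#   query_length -- where to truncate exported query text
postgres_exporter \
  --collector.stat_statements \
  --collector.stat_statements.limit=100 \
  --collector.stat_statements.query_length=120
```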
Excluding Databases or Users
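postgres_exporter builds commonly support an `--exclude-databases` flag for skipping databases; whether a matching user-exclusion flag exists varies by build, so treat this as a sketch and confirm with `--help`:

```shell
# Skip template databases during collection (comma-separated list)
postgres_exporter --exclude-databases="template0,template1"
```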
PostgreSQL Configuration
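Enabling pg_stat_statements requires both a server-level preload setting and the extension itself; a sketch (the target database name is an example):

```shell
# 1. Preload the library in postgresql.conf, then restart PostgreSQL:
#      shared_preload_libraries = 'pg_stat_statements'

# 2. Create the extension in the database the exporter connects to:
psql -d postgres -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
```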
Enable the pg_stat_statements extension in PostgreSQL before starting the exporter.

Optimizing Scrape Interval
Match your Prometheus scrape interval to your monitoring needs.

Recommended Intervals
Disabling Default Metrics

For minimal resource usage or specialized monitoring, you can disable the default metrics entirely. Doing so:

- Removes all built-in collectors
- Uses only custom queries from queries.yaml
- Useful for Greenplum, legacy PostgreSQL versions, or custom monitoring
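With the `--disable-default-metrics` flag (common in postgres_exporter builds; verify with `--help`), the exporter serves only your custom queries. The queries file path below is an example:

```shell
# Built-in collectors off; only queries.yaml results are exported
postgres_exporter \
  --disable-default-metrics \
  --extend.query-path=/etc/postgres_exporter/queries.yaml
```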
Monitoring Exporter Performance
The exporter exposes its own metrics, such as pg_scrape_collector_duration_seconds and pg_scrape_collector_success.

Resource Limits
Set CPU and memory limits for containerized deployments.

Performance Troubleshooting
High Scrape Duration
Symptoms: Scrapes taking longer than expected

Solutions:

- Check pg_scrape_collector_duration_seconds to identify slow collectors
- Disable expensive collectors like stat_user_tables
- Increase collection-timeout if queries are timing out
- Reduce stat_statements.limit if using that collector
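The first step above can be done directly against the metrics endpoint; the default port 9187 is an assumption for your build:

```shell
# Show the five slowest collectors from the exporter's own metrics
curl -s http://localhost:9187/metrics \
  | grep '^pg_scrape_collector_duration_seconds' \
  | sort -t' ' -k2 -rn \
  | head -n 5
```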
High Memory Usage
Symptoms: Exporter consuming excessive memory

Solutions:

- Disable the stat_statements collector
- Reduce stat_statements.limit and query_length
- Disable table-level collectors for databases with many tables
- Increase scrape interval to reduce collection frequency
Database Connection Pool Exhaustion
Symptoms: "too many connections" errors in PostgreSQL

Solutions:

- Lower collection-timeout to drop slow connections faster
- Increase PostgreSQL max_connections
- Reduce Prometheus scrape frequency
- Use multi-target mode carefully (each scrape opens a new connection)
Collector Failures
Symptoms: pg_scrape_collector_success{collector="..."} is 0

Solutions:

- Check exporter logs for error messages
- Verify database user has necessary permissions
- Ensure required extensions (e.g., pg_stat_statements) are installed
- Check PostgreSQL version compatibility
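Failed collectors can be listed from the metrics endpoint (again assuming the default port 9187):

```shell
# List collectors currently reporting failure (success value == 0)
curl -s http://localhost:9187/metrics \
  | grep '^pg_scrape_collector_success' \
  | awk '$2 == 0'
```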
Next Steps

- Troubleshooting: resolve common issues and errors
- Security Best Practices: secure your monitoring setup