Overview
Three primary parameters impact Tekton controller performance:

- ThreadsPerController - Number of goroutines for processing the work queue
- QPS (Queries Per Second) - Maximum queries to the Kubernetes API server
- Burst - Maximum burst for API throttling
Default Values
Out-of-the-box configuration:

| Parameter | Default Value | Source |
|---|---|---|
| ThreadsPerController | 2 | knative/pkg |
| QPS | 5.0 | client-go |
| Burst | 10 | client-go |
QPS and Burst values are multiplied by 2 internally, so the actual values are double the configured values.
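The doubling can be sanity-checked with a quick calculation (the multiplier of 2 comes from the text above; the sample values are illustrative):

```python
# Tekton doubles the configured QPS and Burst internally (multiplier = 2).
def effective_rate_limits(qps: float, burst: int, multiplier: int = 2):
    """Return the actual (QPS, Burst) the client ends up using."""
    return qps * multiplier, burst * multiplier

# Configured QPS=50 and Burst=50 give actual values of 100 each.
print(effective_rate_limits(50.0, 50))  # (100.0, 100)

# The defaults (QPS=5.0, Burst=10) become 10.0 and 20.
print(effective_rate_limits(5.0, 10))  # (10.0, 20)
```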
Configuration Methods
Performance parameters can be configured using:

- Command-line flags in the controller deployment
- Environment variables
Command-line flags take precedence over environment variables.
Configuring via Command-Line Flags
Modify the controller deployment in `config/controller.yaml`:
Available Flags
`threads-per-controller`: Number of threads (goroutines) to create per controller for processing the work queue. Higher values increase parallelism but consume more memory.
`kube-api-qps`: Maximum queries per second to the Kubernetes API server from this client. Note: the actual QPS is multiplied by 2 internally, so a configured value of 50 yields an actual QPS of 100.
`kube-api-burst`: Maximum burst for throttling API requests. Note: the actual burst is multiplied by 2 internally, so a configured value of 50 yields an actual burst of 100.
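Putting the three flags together, the container args in `config/controller.yaml` might look like the following sketch (the values are illustrative; with QPS and Burst configured at 50, the actual values are 100 each):

```yaml
# Excerpt from the tekton-pipelines-controller Deployment (illustrative values)
spec:
  template:
    spec:
      containers:
        - name: tekton-pipelines-controller
          args:
            - "-threads-per-controller=32"  # goroutines per controller
            - "-kube-api-qps=50"            # actual QPS = 100 (doubled internally)
            - "-kube-api-burst=50"          # actual burst = 100 (doubled internally)
```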
Configuring via Environment Variables
Alternatively, use environment variables:

Environment variable for threads per controller.
Environment variable for API queries per second.
Environment variable for API burst throttling.
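As a sketch of the environment-variable alternative, assuming the variable names `THREADS_PER_CONTROLLER`, `KUBE_API_QPS`, and `KUBE_API_BURST` (the names are not stated in this section):

```yaml
# env stanza on the controller container (illustrative values)
env:
  - name: THREADS_PER_CONTROLLER
    value: "32"
  - name: KUBE_API_QPS
    value: "50"   # actual QPS = 100 (doubled internally)
  - name: KUBE_API_BURST
    value: "50"   # actual burst = 100 (doubled internally)
```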
Performance Tuning Guidelines
Small Deployments (< 100 PipelineRuns/day)
Use the default values shown above.

Medium Deployments (100-1000 PipelineRuns/day)
Increase concurrency, for example `threads-per-controller=16` with QPS and Burst raised proportionally.

Large Deployments (> 1000 PipelineRuns/day)
Maximize throughput, for example `threads-per-controller=32` with correspondingly higher QPS and Burst.

High-Concurrency Scenarios
For clusters with many simultaneous PipelineRuns, push all three parameters higher still (for example 64 threads) and scale controller resources to match.

Resource Requirements
Adjust controller resource limits based on performance configuration:

Resource Scaling Guidelines
| Threads | CPU Request | Memory Request | CPU Limit | Memory Limit |
|---|---|---|---|---|
| 2 (default) | 100m | 256Mi | 500m | 512Mi |
| 16 | 250m | 512Mi | 1000m | 1Gi |
| 32 | 500m | 512Mi | 2000m | 2Gi |
| 64 | 1000m | 1Gi | 4000m | 4Gi |
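For example, the 32-thread row of the table translates to the following `resources` stanza on the controller container:

```yaml
# Matches the 32-thread row of the scaling table above
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2000m
    memory: 2Gi
```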
Monitoring Performance
Key Metrics
Monitor these metrics to assess controller performance: work queue depth, reconciliation latency, API client latency, and throttling errors.

Performance Indicators
Good Performance:

- Work queue depth remains low (< 10)
- Reconciliation latency < 1s (p95)
- API client latency < 100ms (p95)
- No throttling errors in logs

Poor Performance:

- Work queue depth grows unbounded
- Reconciliation latency > 5s (p95)
- API client latency > 500ms (p95)
- Frequent “rate limit exceeded” errors
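The thresholds above can be encoded as a quick health check (a sketch; collecting the metrics themselves is out of scope here, and the function name is ours):

```python
def controller_healthy(queue_depth: int,
                       reconcile_p95_s: float,
                       api_p95_ms: float,
                       throttled: bool) -> bool:
    """Apply the 'good performance' thresholds listed above."""
    return (queue_depth < 10
            and reconcile_p95_s < 1.0
            and api_p95_ms < 100.0
            and not throttled)

print(controller_healthy(3, 0.4, 45.0, False))   # True
print(controller_healthy(500, 6.0, 800.0, True))  # False
```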
Troubleshooting
High Work Queue Depth
Symptoms: Work queue depth metric increases continuously

Solutions:
- Increase `threads-per-controller`
- Verify API server health
- Check for slow reconciliation (enable debug logging)
API Rate Limiting
Symptoms: Logs show “rate limit exceeded” or “client rate limiter Wait”

Solutions:
- Increase `kube-api-qps` and `kube-api-burst`
- Verify API server capacity
- Consider cluster API server scaling
High Memory Usage
Symptoms: Controller pods are OOMKilled or approaching memory limits

Solutions:
- Increase memory limits
- Reduce `threads-per-controller` if excessively high
- Check for memory leaks (file an issue if found)
Slow Reconciliation
Symptoms: PipelineRuns take long to start or complete

Solutions:
- Enable debug logging to identify bottlenecks
- Increase `threads-per-controller`
- Verify webhook performance
- Check node and pod resource availability
Complete High-Performance Configuration
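The pieces above can be combined into one Deployment sketch (illustrative values matching the 64-thread row of the scaling table; the 3 replicas follow the HA recommendation below):

```yaml
# Sketch of a high-throughput controller configuration (values are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
spec:
  replicas: 3                        # HA for production
  template:
    spec:
      containers:
        - name: tekton-pipelines-controller
          args:
            - "-threads-per-controller=64"
            - "-kube-api-qps=50"     # actual QPS = 100 (doubled internally)
            - "-kube-api-burst=50"   # actual burst = 100 (doubled internally)
          resources:
            requests:
              cpu: 1000m
              memory: 1Gi
            limits:
              cpu: 4000m
              memory: 4Gi
```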
Best Practices
- Start with defaults and increase gradually based on metrics
- Match thread count to workload - Don’t over-provision
- Monitor API server impact when increasing QPS/Burst
- Adjust resource limits proportionally with thread count
- Enable HA for production deployments (3+ replicas)
- Use HPA for webhook to handle variable load
- Test changes in non-production environment first
- Monitor continuously after configuration changes
- Document tuning decisions for future reference
- Review quarterly and adjust based on workload changes