Testing is critical to maintaining the quality and reliability of Pipelines as Code. This guide covers everything you need to know about testing PAC.
## Testing Philosophy

All code contributions must include appropriate tests. We maintain high test coverage to ensure reliability across multiple Git providers and Kubernetes environments.

### Test Requirements

- **Use gotest.tools/v3**: never use testify for assertions
- **Test all paths**: cover both success and error scenarios
- **Test provider variations**: ensure features work across GitHub, GitLab, Bitbucket, and Forgejo
- **Keep tests fast**: unit tests should run quickly; use mocks for external dependencies
- **Update golden files**: when changing output formats, regenerate the golden files
## Test Types

PAC has three main types of tests:

### Unit Tests

Test individual functions and packages in isolation.

**Location**: alongside the code in `pkg/` directories
**Run unit tests**, with or without the test cache (disabling the cache forces every test to re-run):
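Both invocations go through the project's Makefile targets, which are described again under Running Tests below:

```bash
# Run with default flags (-race -failfast)
make test

# Run without the test cache (forces all tests to re-run)
make test-no-cache
```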
### E2E Tests

End-to-end tests that validate complete workflows against real Git providers.

**Location**: `test/` directory
**Run E2E tests**:
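The E2E suite is invoked through the project's Makefile, using the same target shown with `TEST_NOCLEANUP` in the debugging section below:

```bash
make test-e2e
```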
E2E tests require specific setup with Git provider credentials. Always ask the user to run E2E tests and provide output rather than running them yourself.
### Gitea/Forgejo Tests

Self-contained E2E tests that run against a local Forgejo instance.

**Why Gitea/Forgejo?**

- Most comprehensive test suite
- Self-contained (no external dependencies)
- Easier to debug than cloud provider tests
- Perfect for local development
## Writing Unit Tests

### Basic Test Structure

Here's a template for writing unit tests:
```go
package mypackage

import (
	"testing"

	"gotest.tools/v3/assert"
)

func TestMyFunction(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		expected string
		wantErr  bool
	}{
		{
			name:     "success case",
			input:    "test input",
			expected: "test output",
			wantErr:  false,
		},
		{
			name:    "error case",
			input:   "bad input",
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := MyFunction(tt.input)
			if tt.wantErr {
				assert.Assert(t, err != nil)
				return
			}
			assert.NilError(t, err)
			assert.Equal(t, result, tt.expected)
		})
	}
}
```
### Common Assertions
```go
import (
	"strings"

	"gotest.tools/v3/assert"
)

// Check for nil errors
assert.NilError(t, err)

// Check that an error occurred
assert.Assert(t, err != nil)

// Check equality
assert.Equal(t, actual, expected)

// Check deep equality (for structs, slices, maps)
assert.DeepEqual(t, actual, expected)

// Custom assertions
assert.Assert(t, len(results) > 0, "expected results to be non-empty")
assert.Assert(t, strings.Contains(output, "expected"))
```
### Using Golden Files

Golden files store expected test output for comparison:
```go
import "gotest.tools/v3/golden"

func TestCLIOutput(t *testing.T) {
	output := runCLICommand()

	// Compare the output with the golden file; the filename is
	// resolved relative to the package's testdata/ directory.
	golden.Assert(t, output, "expected-output.golden")
}
```
Update golden files when you intentionally change output:
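Golden files are regenerated through the project's Makefile, using the same target referenced in the troubleshooting section below:

```bash
make update-golden
```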
## Running Tests

### Run All Unit Tests

```bash
# Run with default flags (-race -failfast)
make test

# Run without the test cache
make test-no-cache
```
### Run Specific Packages

```bash
# Run tests for a specific package
go test ./pkg/matcher/...

# Run tests matching a pattern
go test ./pkg/... -run TestMyFunction

# Run with verbose output
go test -v ./pkg/provider/github/...
```
### Test Timeout

The default timeout for unit tests is **20 minutes**; for E2E tests it is **45 minutes**.

```bash
# Run with a custom timeout
go test -timeout 10m ./pkg/...
```
## Test Coverage

### Generate a coverage report

This creates an HTML report at `tmp/c.out` and opens it in your browser.

### View coverage for specific packages

```bash
go test -coverprofile=coverage.out ./pkg/matcher/...
go tool cover -html=coverage.out
```
## E2E Testing

### Prerequisites

E2E tests require:

- A running Kubernetes cluster (kind, minikube, etc.)
- PAC installed on the cluster
- Git provider credentials set as environment variables
### Forgejo E2E Tests (Recommended)

Forgejo tests are the easiest to run locally because they're self-contained.

**Set up Forgejo with startpaac**:

```bash
cd startpaac
./startpaac -f  # Install Forgejo
```

Default Forgejo settings:
**Create a webhook forwarding URL**:
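One way to do this is with gosmee, the same tool used below for replaying webhook events. `YOUR_ID` is a placeholder for your hook ID, and `localhost:8080` assumes the controller is listening locally on that port:

```bash
# Forward webhook deliveries from the public hook URL to the local controller
gosmee client https://hook.pipelinesascode.com/YOUR_ID http://localhost:8080
```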
### Provider-Specific E2E Tests

For GitHub, GitLab, and Bitbucket tests, you need to set provider-specific environment variables. See the E2E on kind workflow for the complete list.
### Debugging E2E Tests

#### Keep Test Resources

By default, E2E tests clean up after themselves. To keep the test namespace and resources:

```bash
export TEST_NOCLEANUP=true
make test-e2e
```
#### Watch Test Execution

```bash
# Watch PipelineRuns
watch kubectl get pipelineruns -A

# Follow controller logs
kubectl logs -n pipelines-as-code -l app.kubernetes.io/name=controller -f

# Check the test namespace
kubectl get all -n <test-namespace>
```
#### Replay Webhook Events

Save webhook events for debugging:

```bash
gosmee client --saveDir /tmp/webhooks https://hook.pipelinesascode.com/YOUR_ID http://localhost:8080
```

Replayed events are saved as shell scripts in `/tmp/webhooks`.
## Test Naming Conventions

We enforce E2E test naming conventions to maintain consistency.

### Check Test Naming

This verifies that E2E test names follow the project's conventions.
### Test Naming Rules

- Use descriptive names that explain what's being tested
- Include the provider if the test is provider-specific (e.g., `TestGitHubPullRequest`)
- Use table-driven tests with named test cases
- Avoid generic names like `TestRun` or `TestProcess`
## Golden Files

Golden files store expected output for comparison in tests.

### When to Use Golden Files

- CLI command output
- Generated YAML/JSON
- Formatted text output
- Complex struct comparisons
### Updating Golden Files

1. **Make your code changes**: modify the code that affects test output.
2. **Run the update command**: `make update-golden` regenerates all golden files with the new output.
3. **Review the changes**: verify that the changes are intentional and correct.
4. **Commit the updated golden files**:

```bash
git add testdata/
git commit -m "test: update golden files for new output format"
```
### E2E Golden Files

For E2E test golden files, see `test/README.md`.
## Mocking

### When to Mock

- External API calls (Git providers, Kubernetes API)
- File system operations
- Time-dependent behavior
- Network requests
### Example Mock

```go
type mockGitProvider struct {
	createCommentFunc func(string, string, int, string) error
}

func (m *mockGitProvider) CreateComment(owner, repo string, number int, comment string) error {
	if m.createCommentFunc != nil {
		return m.createCommentFunc(owner, repo, number, comment)
	}
	return nil
}

func TestWithMock(t *testing.T) {
	mock := &mockGitProvider{
		createCommentFunc: func(owner, repo string, number int, comment string) error {
			assert.Equal(t, owner, "openshift-pipelines")
			assert.Equal(t, repo, "pipelines-as-code")
			return nil
		},
	}

	// Use the mock in your test
	err := someFunction(mock)
	assert.NilError(t, err)
}
```
## CI Integration

Tests run automatically in CI for:

- Every pull request
- Every push to main
- Release tags

### CI Test Jobs

- **Unit tests**: run on every PR
- **Lint checks**: run on every PR
- **E2E tests**: run on PRs (GitHub, GitLab, Bitbucket, Forgejo)
- **Integration tests**: run on specific workflows
### Pre-merge Requirements

All tests must pass before a PR can be merged. This includes:

- Unit tests
- Linting checks
- E2E tests for all supported providers
## Test Cleanup

### Cleanup E2E Test Namespaces

If E2E tests leave behind namespaces or resources:

This removes leftover test namespaces and resources.
## Common Testing Patterns

### Table-Driven Tests

```go
func TestMyFunction(t *testing.T) {
	tests := []struct {
		name     string
		input    Input
		expected Output
		wantErr  bool
	}{
		{
			name:     "basic case",
			input:    Input{Value: "test"},
			expected: Output{Result: "TEST"},
			wantErr:  false,
		},
		{
			name:    "error case",
			input:   Input{Value: ""},
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := MyFunction(tt.input)
			if tt.wantErr {
				assert.Assert(t, err != nil)
				return
			}
			assert.NilError(t, err)
			assert.DeepEqual(t, result, tt.expected)
		})
	}
}
```
### Testing Error Cases

```go
func TestErrorHandling(t *testing.T) {
	_, err := FunctionThatFails()

	// Check that an error occurred
	assert.Assert(t, err != nil)

	// Check the error message
	assert.ErrorContains(t, err, "expected error text")

	// Check the error type
	assert.Assert(t, errors.Is(err, ErrExpectedType))
}
```
### Testing Kubernetes Resources

```go
import "gotest.tools/v3/assert"

func TestPipelineRunCreation(t *testing.T) {
	pr := createPipelineRun()

	assert.Equal(t, pr.Name, "test-pipeline-run")
	assert.Equal(t, pr.Namespace, "test-namespace")
	assert.Assert(t, pr.Annotations != nil)
	assert.Equal(t, pr.Annotations["pipelinesascode.tekton.dev/on-event"], "[pull_request]")
}
```
## Troubleshooting Tests

### Tests Failing Locally but Passing in CI

**Possible causes**:

- Outdated dependencies: run `make vendor`
- Stale test cache: run `make test-no-cache`
- Different Go version: check that your `go version` matches CI
### Golden File Mismatches

**Problem**: tests fail with golden file differences

**Solution**:

1. Review the diff to ensure the changes are intentional
2. Run `make update-golden`
3. Commit the updated golden files
### E2E Tests Timing Out

**Possible causes**:

- Cluster resources exhausted
- Network connectivity issues
- Webhook forwarding not working

**Solutions**:

- Check cluster resources: `kubectl top nodes`
- Verify webhook forwarding with gosmee
- Increase the timeout: `go test -timeout 60m`
## Best Practices

- Write tests as you write code (TDD approach)
- Keep tests isolated and independent
- Use descriptive test names
- Test both success and failure paths
- Mock external dependencies
- Keep tests fast (< 1 second for unit tests)
- Update golden files when changing output formats
- Clean up resources in E2E tests
## Next Steps

- **Architecture**: understand the PAC architecture
- **Event Flows**: see how events flow through the system
- **Development Setup**: set up your development environment
- **Contributing**: learn about contributing to PAC