This guide covers running unit tests, integration tests, and smoke tests.
## Prerequisites

**Assets must be built first.** All tests require frontend assets to be built before running; tests will fail if assets are missing or stale. The test commands below run `npm run assets` before the tests for exactly this reason.
## Unit Tests

Run all unit tests:

```sh
npm run assets && SKIP_INTEGRATION=1 go test ./...
```
The `SKIP_INTEGRATION=1` environment variable skips the integration tests, which require Docker or Podman.
### Run Tests Manually

You can also invoke `go test` directly:

```sh
# All tests (after building assets)
go test ./...

# Run a specific test by name
go test -run TestClampIP ./internal/

# Run all tests in a specific package
go test ./lib/config/

# Verbose output
go test -v -run TestBotValid ./lib/config/
```
### Test Coverage

Generate an HTML coverage report:

```sh
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```
## Integration Tests

Integration tests use Playwright to exercise Anubis in realistic browser scenarios.

### Run All Integration Tests

Build assets and run the integration test package:

```sh
npm run assets && go test -v ./internal/test
```
### Run with a Specific Container Runtime

Use Podman:

```sh
npm run test:integration:podman
```

Use Docker:

```sh
npm run test:integration:docker
```

Integration tests require Docker or Podman to be installed and running.
## Smoke Tests

Smoke tests validate Anubis against production-adjacent infrastructure configurations.

### What Are Smoke Tests?

The `test/` directory contains the smoke tests. Each one is a folder with a `test.sh` script that:

- Sets up infrastructure (Caddy, nginx, Docker registry, etc.)
- Runs Anubis in that configuration
- Validates expected behavior
- Tears down the infrastructure
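The lifecycle above can be sketched as a minimal `test.sh` skeleton. This is a hypothetical outline, not one of the actual scripts; real scripts start containers and reverse proxies in the setup step:

```sh
#!/usr/bin/env sh
# Hypothetical smoke-test skeleton: setup, run, validate, teardown.
set -eu

teardown() {
    echo "tearing down infrastructure"
}
# Tear down even if a step above fails.
trap teardown EXIT

echo "setting up infrastructure"
echo "running anubis in this configuration"
echo "validating expected behavior"
```

The `trap teardown EXIT` line is the important pattern: it guarantees cleanup runs whether the validation step passes or fails.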
### Available Smoke Tests

```text
test/
├── caddy/                 # Caddy reverse proxy integration
├── default-config-macro/  # Default configuration validation
├── docker-registry/       # Docker registry protection
├── double_slash/          # URL path handling
├── forced-language/       # i18n language forcing
├── git-clone/             # Git clone over HTTP
├── git-push/              # Git push over HTTP
├── healthcheck/           # Health check endpoint
├── i18n/                  # Internationalization
├── log-file/              # File logging
├── nginx/                 # nginx reverse proxy
├── nginx-external-auth/   # nginx external auth module
├── palemoon/              # Pale Moon browser compatibility
├── robots_txt/            # robots.txt handling
└── unix-socket-xff/       # Unix socket with X-Forwarded-For
```
### Run Smoke Tests

Smoke tests run automatically in GitHub Actions via `.github/workflows/smoke-tests.yaml`.

To run a specific smoke test manually, execute its `test.sh` script (for example, `./test/caddy/test.sh`).

Smoke tests modify system state (starting containers, binding ports, etc.), so run them in a clean environment or in CI.
### Pale Moon Tests

The Pale Moon browser has historically exposed bugs in Anubis's JavaScript challenge code. The Pale Moon smoke tests ensure compatibility:

```sh
test/palemoon/amd64/test.sh
test/palemoon/i386/test.sh
```

These tests run a VNC-based Pale Moon instance and validate challenge solving.
## Test Conventions

From `CONTRIBUTING.md`, tests should follow these patterns.

### Table-Driven Tests

Use table-driven test patterns:
```go
func TestExample(t *testing.T) {
	tests := []struct {
		name  string
		input string
		want  string
	}{
		{"case 1", "input1", "output1"},
		{"case 2", "input2", "output2"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := ExampleFunc(tt.input)
			if got != tt.want {
				t.Errorf("got %v, want %v", got, tt.want)
			}
		})
	}
}
```
### Helper Functions

Use `t.Helper()` in test helpers so failures are reported at the caller's line:

```go
func setupTest(t *testing.T) *TestContext {
	t.Helper()
	ctx := &TestContext{} // setup code
	return ctx
}
```
### Cleanup

Use `t.Cleanup()` for teardown:

```go
func TestWithResources(t *testing.T) {
	db := setupDatabase(t)
	t.Cleanup(func() {
		db.Close()
	})
	// test code
}
```
### Error Assertions

Use `errors.Is` for sentinel error validation:

```go
err := SomeFunc()
if !errors.Is(err, ErrExpected) {
	t.Errorf("expected ErrExpected, got %v", err)
}
```
## Linting

Run the static analysis tools:

```sh
go vet ./...
go tool staticcheck ./...
go tool govulncheck ./...
```
## Continuous Integration

All tests run automatically in GitHub Actions on:

- Every push
- Every pull request
- Scheduled nightly builds

Smoke tests run via `.github/workflows/smoke-tests.yaml`.