Overview

Adapt follows a pragmatic testing approach: add tests where they provide value for preventing regressions, especially during refactoring or bug fixes. We use table-driven tests, sqlmock for database operations, and interface-based mocking for dependencies.

Running Tests

Quick Commands

# Run all tests
./run-tests.sh

CI/CD Pipeline

GitHub Actions runs comprehensive checks on every push/PR:
  • Linting - golangci-lint v2.9.0 with Go 1.26 support
  • Unit Tests - All package tests
  • Integration Tests - Database and API tests
  • Coverage Reporting - Reports to Codecov

Testing Patterns

Table-Driven Tests

The recommended pattern for testing multiple scenarios:
func TestValidateInput(t *testing.T) {
    tests := []struct {
        name        string
        input       string
        expectError bool
    }{
        {"valid_input", "test", false},
        {"invalid_input", "", true},
        {"special_chars", "test@123", false},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := validateInput(tt.input)
            if tt.expectError {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)
            }
        })
    }
}
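The table above assumes a `validateInput` that rejects empty input and accepts everything else, including special characters. A minimal hypothetical implementation consistent with those three cases (the real function lives in the package under test):

```go
package main

import (
	"errors"
	"fmt"
)

// validateInput is a hypothetical sketch matching the test table:
// empty input fails, any non-empty input passes.
func validateInput(input string) error {
	if input == "" {
		return errors.New("input must not be empty")
	}
	return nil
}

func main() {
	fmt.Println(validateInput("test"))     // <nil>
	fmt.Println(validateInput(""))         // input must not be empty
	fmt.Println(validateInput("test@123")) // <nil>
}
```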

Interface-Based Testing

For API endpoints using dependency injection:
mockDB := new(MockDBClient)
mockJobsManager := new(MockJobManager)
handler := NewHandler(mockDB, mockJobsManager)

mockDB.On("GetOrCreateUser", userID, email, nil).Return(user, nil)
mockJobsManager.On("CreateJob", ctx, opts).Return(job, nil)

// Execute test
response := handler.HandleRequest(request)

// Verify expectations
mockDB.AssertExpectations(t)
mockJobsManager.AssertExpectations(t)

Sqlmock Testing

For functions with direct SQL queries:
mockSQL, mock, err := sqlmock.New()
require.NoError(t, err)
defer mockSQL.Close()

mock.ExpectQuery(`SELECT.*FROM jobs WHERE id = \$1`).
    WithArgs(jobID).
    WillReturnRows(sqlmock.NewRows([]string{"id", "status"}).
        AddRow(jobID, "completed"))

result, err := getJob(mockSQL, jobID)
assert.NoError(t, err)
assert.Equal(t, "completed", result.Status)
assert.NoError(t, mock.ExpectationsWereMet())

Database Testing

For testing database operations:
func TestDatabaseOperation(t *testing.T) {
    db, mock, err := sqlmock.New()
    require.NoError(t, err)
    defer db.Close()

    mock.ExpectExec("CREATE TABLE").WillReturnResult(sqlmock.NewResult(0, 0))

    err = createTable(db)
    assert.NoError(t, err)
    assert.NoError(t, mock.ExpectationsWereMet())
}

Test File Structure

Organise tests alongside implementation:
internal/
├── api/
│   ├── handlers.go
│   ├── handlers_test.go       # Tests for handlers
│   ├── jobs.go
│   ├── jobs_test.go           # Tests for job endpoints
│   └── test_mocks.go          # Shared mocks for api package
├── db/
│   ├── db.go
│   ├── db_test.go
│   ├── queue.go
│   ├── queue_test.go
│   └── test_mocks.go          # Shared mocks for db package
└── jobs/
    ├── manager.go
    ├── manager_test.go
    ├── worker.go
    ├── worker_test.go
    └── test_mocks.go          # Shared mocks for jobs package
Guidelines:
  • test_mocks.go - Shared mocks and utilities per package
  • *_test.go - Focused tests per file or endpoint group
  • Use sqlmock for direct SQL query testing
  • Use interface mocks for dependency injection testing

Coverage Approach

We follow a pragmatic coverage strategy:

When to Add Tests

New Functions

Add tests where they provide value for preventing regressions:
  • Complex business logic
  • Critical user workflows
  • Edge cases and error handling
  • Functions likely to change

Existing Functions

Add tests opportunistically:
  • During refactoring
  • When fixing bugs
  • When adding new features
  • When reducing complexity

What to Test

High Priority:
  • Job creation and lifecycle management
  • Task claiming and processing logic
  • Database operations (CRUD, transactions)
  • Authentication and authorization
  • Critical business logic
  • Error handling and recovery
Lower Priority:
  • Simple getter/setter functions
  • Trivial utility functions
  • UI rendering code
  • Third-party library wrappers

Extract + Test + Commit Pattern

When refactoring large functions, follow this proven methodology:

Step 1: Analyse Structure

Map distinct responsibilities within the function:
  • Identify logical sections
  • Find reusable patterns
  • Spot potential extractions

Step 2: Extract Functions

Pull out focused, single-responsibility functions:
// Before: 200-line monolith
func CreateJob(opts JobOptions) (*Job, error) {
    // ... 200 lines of mixed concerns
}

// After: focused functions
func validateJobOptions(opts JobOptions) error { ... }
func setupJobDatabase(job *Job) error { ... }
func setupJobURLDiscovery(job *Job) error { ... }

Step 3: Create Tests

Write comprehensive tests with table-driven patterns:
func TestValidateJobOptions(t *testing.T) {
    tests := []struct {
        name    string
        opts    JobOptions
        wantErr bool
    }{
        {"valid_options", validOpts, false},
        {"missing_domain", invalidOpts, true},
    }
    // ...
}

Step 4: Commit Steps

Commit each extraction separately:
  • feat: extract validateJobOptions function
  • test: add tests for validateJobOptions
  • refactor: use validateJobOptions in CreateJob

Step 5: Verify Integration

Ensure no regressions:
  • Run full test suite
  • Test manually if needed
  • Check coverage reports

Recent Refactoring Success

5 monster functions eliminated:
  • getJobTasks: 216 → 56 lines (74% reduction)
  • CreateJob: 232 → 42 lines (82% reduction)
  • setupJobURLDiscovery: 108 → 17 lines (84% reduction)
  • setupSchema: 216 → 27 lines (87% reduction)
  • WarmURL: 377 → 68 lines (82% reduction)
Results: 80% complexity reduction, 350+ tests created during refactoring

Manual Testing

API Testing

Use HTTP clients to test endpoints:
# Install httpie or use curl
pip install httpie

# Test health endpoint
http GET localhost:8847/health

# Test job creation (requires auth token)
http POST localhost:8847/v1/jobs \
  Authorization:"Bearer your-jwt-token" \
  domain=example.com \
  use_sitemap:=true
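The same checks with plain curl, assuming the endpoint accepts a JSON body (the token and port mirror the httpie calls above):

```shell
# Test health endpoint
curl localhost:8847/health

# Test job creation (requires auth token)
curl -X POST localhost:8847/v1/jobs \
  -H "Authorization: Bearer your-jwt-token" \
  -H "Content-Type: application/json" \
  -d '{"domain": "example.com", "use_sitemap": true}'
```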

Job Queue Testing

Test the job queue system:
# Run job queue test utility
go run ./cmd/test_jobs/main.go

Integration Tests

Integration tests require a test database:
# Create test database
createdb adaptappgoodnative_test

# Set environment variables
export DATABASE_URL="postgres://localhost/adaptappgoodnative_test"
export RUN_INTEGRATION_TESTS=true

# Run integration tests
go test ./...

Debugging Tests

Verbose Output

# Run with verbose output
go test -v ./...

# Run specific test with verbose output
go test -v -run TestJobCreation ./internal/jobs/

Race Detection

# Run with race detection
go test -race ./...

Memory Profiling

# Profile memory usage
go test -memprofile=mem.prof ./...
go tool pprof mem.prof

Best Practices

Test Isolation

  • Each test should be independent
  • Use table-driven tests for multiple scenarios
  • Clean up resources in defer statements
  • Don’t rely on test execution order

Mock Management

  • Share common mocks in test_mocks.go
  • Use interface-based mocking for flexibility
  • Verify all mock expectations are met
  • Keep mocks simple and focused

Error Testing

  • Test both success and failure paths
  • Verify error messages and types
  • Test edge cases and boundary conditions
  • Include timeout and cancellation scenarios

Coverage Mindset

  • Don’t chase 100% coverage
  • Focus on high-value tests
  • Add tests during refactoring
  • Test critical paths thoroughly

Troubleshooting

Tests Hanging

Check for:
  • Unclosed database connections
  • Missing context cancellation
  • Goroutine leaks
  • Deadlocks in concurrent code

Flaky Tests

Common causes:
  • Race conditions (use -race flag)
  • Timing dependencies (use proper synchronisation)
  • Shared state between tests
  • External dependencies (mock them)

Mock Errors

If mock expectations fail:
  • Verify exact arguments match
  • Check call order for ordered expectations
  • Ensure all expected calls are made
  • Review mock setup in test
