Kosh uses a comprehensive testing strategy to ensure reliability, performance, and correctness.

Testing Strategy

Kosh employs multiple testing approaches:
  1. Unit Tests - Test individual functions and methods
  2. Integration Tests - Test service interactions
  3. Performance Tests - Benchmark before/after changes
  4. Race Detection - Detect concurrent access issues
  5. End-to-End - Full build pipeline validation

Running Tests

All Tests

# Run all tests
go test ./...

# Run with verbose output
go test -v ./...

# Run with coverage
go test -cover ./...

# Generate coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

Specific Tests

# Run tests in a specific package
go test ./builder/search -v

# Run a specific test function
go test ./builder/search -run TestStemmer -v

# Run tests matching a pattern
go test ./... -run TestCache

Race Detection

Always run race detection before committing changes that involve concurrency.
# Run all tests with race detector
go test -race ./...

# Run specific package with race detector
go test -race ./builder/services
The race detector helps find:
  • Concurrent map access
  • Unsynchronized shared variable access
  • Data races in worker pools

Benchmarks

# Run all benchmarks
go test -bench=. -benchmem ./builder/benchmarks/

# Run specific benchmark
go test -bench=BenchmarkSearch -benchmem ./builder/search/

# Compare before/after (benchstat, from golang.org/x/perf/cmd/benchstat,
# replaces the deprecated benchcmp; -count=10 gathers enough samples
# for a statistically meaningful comparison)
go test -bench=. -benchmem -count=10 ./... > old.txt
# Make changes...
go test -bench=. -benchmem -count=10 ./... > new.txt
benchstat old.txt new.txt
Available benchmarks:
  • Search performance (BM25, fuzzy, phrase)
  • Hash computation (BLAKE3)
  • Sorting algorithms
  • Tokenization
  • Snippet extraction

Writing Unit Tests

Test Structure

Follow the table-driven test pattern:
builder/search/search_test.go
func TestStemmer(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        expected string
    }{
        {"present participle", "running", "run"},
        {"adverb", "easily", "easili"},
        {"gerund", "processing", "process"},
        {"past tense", "agreed", "agre"},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result := Stem(tt.input)
            if result != tt.expected {
                t.Errorf("Stem(%q) = %q, want %q", 
                         tt.input, result, tt.expected)
            }
        })
    }
}

Test Helpers

Create reusable test utilities in testutil package:
builder/testutil/helpers.go
// CreateTestConfig creates a minimal config for testing
func CreateTestConfig() *config.Config {
    return &config.Config{
        BaseURL:    "http://localhost:2604",
        ContentDir: "content",
        OutputDir:  "public",
    }
}

// CreateTempDir creates a temporary directory for testing.
// (testing.T's built-in t.TempDir, available since Go 1.15, provides the
// same create-and-clean-up behavior in a single call.)
func CreateTempDir(t *testing.T) string {
    t.Helper()
    dir, err := os.MkdirTemp("", "kosh-test-*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    t.Cleanup(func() {
        os.RemoveAll(dir)
    })
    return dir
}

Testing with Context

Always test context cancellation:
func TestPostService_ProcessWithCancellation(t *testing.T) {
    // Setup
    ctx, cancel := context.WithCancel(context.Background())
    service := setupTestService(t)
    
    // Cancel immediately
    cancel()
    
    // Test
    _, err := service.Process(ctx, false, false, false)
    
    // Assert
    if err == nil {
        t.Error("Expected error due to cancelled context")
    }
    if !errors.Is(err, context.Canceled) {
        t.Errorf("Expected context.Canceled, got %v", err)
    }
}

Testing Error Handling

func TestCacheService_GetPost_NotFound(t *testing.T) {
    // Setup
    cache := setupTestCache(t)
    
    // Test
    post, err := cache.GetPost("nonexistent-id")
    
    // Assert
    if err == nil {
        t.Error("Expected error for nonexistent post")
    }
    if post != nil {
        t.Error("Expected nil post for nonexistent ID")
    }
}

Integration Tests

Testing Service Interactions

builder/services/post_service_test.go
func TestPostService_EndToEnd(t *testing.T) {
    // Setup all dependencies
    cfg := testutil.CreateTestConfig()
    logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
    
    // Create mock cache
    mockCache := &mocks.MockCacheService{
        Posts: make(map[string]*cache.PostMeta),
    }
    
    // Create mock renderer
    mockRenderer := &mocks.MockRenderService{
        Assets: make(map[string]string),
    }
    
    // Create service with dependencies
    // (metrics, md, renderer, sourceFs, destFs, and adapter are assumed
    // to be constructed earlier in the test; elided here for brevity)
    service := NewPostService(cfg, mockCache, mockRenderer,
                              logger, metrics, md, renderer,
                              sourceFs, destFs, adapter)
    
    // Test
    result, err := service.Process(context.Background(), false, false, false)
    
    // Assert
    if err != nil {
        t.Fatalf("Process() error = %v", err)
    }
    if len(result.AllPosts) == 0 {
        t.Error("Expected posts to be processed")
    }
}

Using Mocks

Create mock implementations for testing:
builder/services/mocks/cache_service_mock.go
type MockCacheService struct {
    Posts   map[string]*cache.PostMeta
    Records map[string]*cache.SearchRecord
    mu      sync.RWMutex
}

func (m *MockCacheService) GetPost(id string) (*cache.PostMeta, error) {
    m.mu.RLock()
    defer m.mu.RUnlock()
    
    post, ok := m.Posts[id]
    if !ok {
        return nil, fmt.Errorf("post not found: %s", id)
    }
    return post, nil
}

func (m *MockCacheService) StorePost(post *cache.PostMeta) error {
    m.mu.Lock()
    defer m.mu.Unlock()
    
    m.Posts[post.ID] = post
    return nil
}

Performance Testing

Writing Benchmarks

func BenchmarkStemmer(b *testing.B) {
    words := []string{"running", "processing", "easily", "agreed"}
    
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for _, word := range words {
            Stem(word)
        }
    }
}

func BenchmarkSearchBM25(b *testing.B) {
    // Setup
    index := createTestIndex(b)
    query := "machine learning transformer"
    
    b.ResetTimer()
    b.ReportAllocs()
    
    for i := 0; i < b.N; i++ {
        _, _ = index.Search(query, 10)
    }
}

Memory Profiling

# Profile memory allocations
go test -bench=BenchmarkSearch -benchmem -memprofile=mem.prof
go tool pprof mem.prof

# Commands in pprof:
(pprof) top10          # Top 10 memory consumers
(pprof) list Search    # Show allocations in Search function
(pprof) web            # Visual graph (requires graphviz)

CPU Profiling

# Profile CPU usage
go test -bench=BenchmarkBuild -cpuprofile=cpu.prof
go tool pprof cpu.prof

# Commands in pprof:
(pprof) top10          # Top 10 CPU consumers
(pprof) list Build     # Show CPU time in Build function
Always profile before and after performance optimizations to verify improvements; go tool pprof -diff_base=old.prof new.prof compares two profiles directly and shows only the delta.

Testing Concurrency

Worker Pool Tests

builder/utils/worker_pool_test.go
func TestWorkerPool_Concurrent(t *testing.T) {
    ctx := context.Background()
    var processed int32
    
    pool := NewWorkerPool(ctx, 4, func(task int) {
        atomic.AddInt32(&processed, 1)
        time.Sleep(10 * time.Millisecond)
    })
    
    pool.Start()
    
    // Submit 100 tasks
    for i := 0; i < 100; i++ {
        pool.Submit(i)
    }
    
    pool.Stop()
    
    if got := atomic.LoadInt32(&processed); got != 100 {
        t.Errorf("Expected 100 tasks processed, got %d", got)
    }
}

Race Detection Example

func TestCacheService_ConcurrentAccess(t *testing.T) {
    cache := setupTestCache(t)
    
    var wg sync.WaitGroup
    
    // Concurrent writes
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            post := &cache.PostMeta{
                ID:    fmt.Sprintf("post-%d", id),
                Title: fmt.Sprintf("Post %d", id),
            }
            if err := cache.StorePost(post); err != nil {
                t.Errorf("StorePost() error = %v", err)
            }
        }(i)
    }
    
    // Concurrent reads
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            _, _ = cache.GetPost(fmt.Sprintf("post-%d", id))
        }(i)
    }
    
    wg.Wait()
}

End-to-End Testing

Full Build Pipeline

func TestFullBuildPipeline(t *testing.T) {
    // Setup
    tempDir := testutil.CreateTempDir(t)
    cfg := &config.Config{
        ContentDir: filepath.Join(tempDir, "content"),
        OutputDir:  filepath.Join(tempDir, "public"),
        BaseURL:    "http://localhost:2604",
    }
    
    // Create test content
    createTestMarkdownFiles(t, cfg.ContentDir)
    
    // Build
    builder := NewBuilder(cfg)
    ctx := context.Background()
    
    err := builder.Build(ctx, false)
    if err != nil {
        t.Fatalf("Build() error = %v", err)
    }
    
    // Verify output
    verifyOutputFiles(t, cfg.OutputDir)
    verifySearchIndex(t, cfg.OutputDir)
    verifyStaticAssets(t, cfg.OutputDir)
}

Test Coverage Goals

  • Core packages (services, cache, search): >80% coverage
  • Utilities: >70% coverage
  • CLI commands: Integration tests for all commands
  • Critical paths: 100% coverage (error handling, security)

Checking Coverage

# Generate coverage report
go test -coverprofile=coverage.out ./...

# View in terminal
go tool cover -func=coverage.out

# View in browser
go tool cover -html=coverage.out

# Check coverage percentage
go test -cover ./... | grep coverage

Test Organization

File Naming

service.go          # Implementation
service_test.go     # Unit tests
service_bench_test.go # Benchmarks (optional separate file)

Package Organization

builder/
├── services/
│   ├── post_service.go
│   ├── post_service_test.go
│   ├── cache_service.go
│   ├── cache_service_test.go
│   └── mocks/              # Mock implementations
│       ├── cache_service_mock.go
│       └── render_service_mock.go
├── testutil/                # Shared test utilities
│   ├── helpers.go
│   └── fixtures.go
└── benchmarks/              # Performance benchmarks
    └── benchmarks_test.go

CI/CD Testing

GitHub Actions Workflow

.github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.23'
      
      - name: Run tests
        run: go test -v -race -coverprofile=coverage.out ./...
      
      - name: Run linter
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
      
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage.out

Best Practices

1. Write tests first

Consider TDD (Test-Driven Development) for new features:
  1. Write failing test
  2. Implement minimal code to pass
  3. Refactor while keeping tests green

2. Test behavior, not implementation

Focus on what the code does, not how it does it:
// Good - tests behavior
post, _ := cache.GetPost("id")
if post.Title != "Expected Title" {
    t.Errorf("Title = %q, want %q", post.Title, "Expected Title")
}

// Bad - tests an implementation detail
if !cache.usedBoltDB {
    t.Error("expected BoltDB backend")
}

3. Use table-driven tests

Makes adding test cases easy:
tests := []struct {
    name     string
    input    string
    expected string
}{
    // Add cases here
}

4. Clean up resources

Use t.Cleanup() for automatic cleanup:
dir := createTempDir()
t.Cleanup(func() {
    os.RemoveAll(dir)
})

5. Test error cases

Don’t just test the happy path:
_, err := service.Process(ctx, true)
if err == nil {
    t.Error("Expected error for invalid input")
}
