Aceplay uses Go’s built-in testing framework along with Testify for assertions. This guide covers running tests, generating coverage reports, and maintaining code quality.
Running Tests
All Tests
Run the complete test suite with race detection:
make test
This executes:
go test -v -race ./...
Flags:
-v - Verbose output showing each test
-race - Enables race detector to catch concurrency issues
./... - Runs tests in all packages recursively
Short Tests
Run only quick tests, skipping slow integration tests:
go test -short ./...
Use this for rapid feedback during development.
Specific Package
Test a single package:
go test -v ./internal/config
Or recursively, including any subpackages:
go test -v ./internal/config/...
Specific Test Function
Run a single test by name:
go test -v -run TestFunctionName ./internal/acestream
Example:
go test -v -run TestParseURL ./pkg/acestream
Use regex patterns:
# Run all tests starting with TestConfig
go test -v -run "^TestConfig" ./internal/config
Test Coverage
Generate Coverage Report
Run tests with coverage analysis:
make test-coverage
This:
Runs tests with race detection
Generates coverage.out profile
Creates coverage.html for browser viewing
Commands executed:
go test -race -coverprofile=coverage.out -covermode=atomic ./...
go tool cover -html=coverage.out -o coverage.html
View Coverage in Browser
After running make test-coverage:
xdg-open coverage.html # Linux
open coverage.html # macOS
Coverage for Specific Package
go test -race -coverprofile=coverage.out ./internal/config
go tool cover -html=coverage.out
Terminal Coverage Summary
go test -cover ./...
Shows the coverage percentage for each package.
Code Quality Checks
Format Code
Automatically format all Go code:
make fmt
Runs:
go fmt ./...
This ensures consistent formatting across the codebase.
Static Analysis
Run Go’s built-in static analyzer:
make vet
Executes:
go vet ./...
Detects:
Suspicious constructs
Common mistakes
Potential bugs
Unreachable code
Lint Code
Run the comprehensive linter:
make lint
Uses golangci-lint to check:
Code style violations
Best practice violations
Performance issues
Security concerns
Install golangci-lint:
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin
Run All Checks
Run all code quality checks before committing:
make check
This runs in sequence:
make fmt - Format code
make vet - Static analysis
make lint - Linting
make test - Full test suite
Always run make check before creating a pull request to ensure your changes meet quality standards.
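The exact Makefile is project-specific, but a check target of this shape is the common convention the targets above imply (a sketch under those assumptions, not Aceplay's actual Makefile):

```make
.PHONY: fmt vet lint test check

fmt:
	go fmt ./...

vet:
	go vet ./...

lint:
	golangci-lint run

test:
	go test -v -race ./...

check: fmt vet lint test
```

Chaining the targets this way means `make check` stops at the first failing step, so formatting and vet errors surface before the slower test run.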
Writing Tests
Test File Naming
Follow Go conventions:
Test file: module_test.go
Same package as code being tested
Place alongside source files
Example:
internal/config/
├── config.go
└── config_test.go
Using Testify
Aceplay uses Testify for assertions:
import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestConfigLoad(t *testing.T) {
	cfg, err := LoadConfig("testdata/config.yaml")
	require.NoError(t, err) // Fail immediately if error
	assert.Equal(t, "mpv", cfg.Player)
	assert.NotNil(t, cfg.Engine)
}
When to use:
require.* - Test should fail immediately if assertion fails
assert.* - Continue running other assertions
Table-Driven Tests
Preferred pattern for testing multiple cases:
func TestParseURL(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		wantID  string
		wantErr bool
	}{
		{
			name:    "valid acestream URL",
			input:   "acestream://abcd1234",
			wantID:  "abcd1234",
			wantErr: false,
		},
		{
			name:    "invalid protocol",
			input:   "http://example.com",
			wantID:  "",
			wantErr: true,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			id, err := ParseURL(tt.input)
			if tt.wantErr {
				assert.Error(t, err)
			} else {
				assert.NoError(t, err)
				assert.Equal(t, tt.wantID, id)
			}
		})
	}
}
Test Helpers
Create helper functions for common test setup:
func setupTestConfig(t *testing.T) *Config {
	t.Helper() // Mark as helper so failures report the caller's line
	cfg := NewConfig()
	cfg.Player = "mpv"
	return cfg
}

func TestSomething(t *testing.T) {
	cfg := setupTestConfig(t)
	// Use cfg in test
}
Testing Guidelines from AGENTS.md
Key principles from the development guide:
Test Structure
Use table-driven tests for multiple test cases
Test file naming: module_test.go
Use testify assertions (assert, require)
Naming Conventions
func TestFunctionName(t *testing.T) { // Test function
	tests := []struct { // Table-driven
		name  string // Test case name
		input string
		want  string
	}{
		{"case 1", "input1", "expected1"},
		{"case 2", "input2", "expected2"},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := MyFunction(tt.input)
			assert.Equal(t, tt.want, result)
		})
	}
}
Error Handling in Tests
// Good - explicit error checking
result, err := Function()
if err != nil {
	t.Fatalf("unexpected error: %v", err)
}

// Better - with testify
result, err := Function()
require.NoError(t, err)
assert.Equal(t, expected, result)
Continuous Integration
Run the CI preparation steps before pushing. This ensures:
Dependencies are downloaded
Code is formatted
Static analysis passes
All tests pass
Troubleshooting
Race Detector Failures
If you see race condition warnings, fix the underlying concurrency issue. Never ignore race warnings.
Test Timeout
For slow tests, increase timeout:
go test -timeout 5m ./...
Flaky Tests
If tests fail intermittently:
Check for race conditions (-race flag)
Look for timing dependencies
Ensure proper cleanup in defer or t.Cleanup()
Coverage Too Low
Aim for reasonable coverage:
Critical paths: 80%+ coverage
Utilities: 70%+ coverage
UI components: Best effort
Add tests for:
Error conditions
Edge cases
Common usage patterns
Next Steps
Contributing Guidelines: Learn the full contribution workflow, including code style and the pull request process.