Deno has a comprehensive test suite including unit tests, integration tests, and web platform tests.
## Test Organization

### Test Types

| Type | Description | Location |
| --- | --- | --- |
| Spec tests | Integration tests using `__test__.jsonc` files | `tests/specs/` |
| Unit tests | Inline Rust tests alongside the source code | Inline in `*.rs` files |
| Integration tests | Additional integration tests | `cli/tests/` |
| Web platform tests | Standards compliance tests | `tests/wpt/` |
## Running Tests

### All Tests

Running the full suite (a bare `cargo test` from the repository root) takes a significant amount of time (30+ minutes), so prefer the filtered invocations below during development.
### Filtered Tests

```shell
# Filter tests by name (substring match)
cargo test http_server

# cargo's filter is a substring match, not a glob,
# so this runs every test whose name contains "fetch"
cargo test fetch
```
### Package-Specific Tests

```shell
# Run tests in a specific package
cargo test -p deno_core
cargo test -p deno_runtime
cargo test -p deno_fetch

# Run CLI integration tests only
cargo test --bin deno
```
### Spec Tests

```shell
# Run all spec tests
cargo test specs

# Run a specific spec test
cargo test spec::run::basic

# Run spec tests for a specific command
cargo test spec::lint
cargo test spec::fmt
```
### Single Test

```shell
# Run a specific test by exact name
cargo test test_name_here -- --exact

# Run with output shown
cargo test test_name_here -- --nocapture
```
## Spec Tests

Spec tests are the primary integration test format for Deno. They exercise CLI commands end-to-end.
### How Spec Tests Work

1. Create a directory in `tests/specs/`
2. Add a `__test__.jsonc` file describing the test steps
3. Add input files and expected output files
4. Run the tests with `cargo test specs`
### Directory Structure

```
tests/specs/my_feature/
├── __test__.jsonc   # Test definitions
├── main.ts          # Input file
├── expected.out     # Expected output
└── other_file.ts    # Additional files
```
### Creating a Spec Test

1. Create the test directory:

   ```shell
   mkdir -p tests/specs/my_feature
   cd tests/specs/my_feature
   ```

2. Create `__test__.jsonc`:

   ```jsonc
   {
     "tests": {
       "basic_case": {
         "args": "run main.ts",
         "output": "expected.out"
       }
     }
   }
   ```

3. Create the input file, `main.ts`:

   ```ts
   console.log("Hello, World!");
   ```

4. Create the expected output, `expected.out` (the file contains only the literal output to match):

   ```
   Hello, World!
   ```

5. Run the test:

   ```shell
   cargo test spec::my_feature
   ```
### Test Schema

The `__test__.jsonc` schema is defined in `tests/specs/schema.json`.

A single-step test:

```jsonc
{
  "tests": {
    "test_name": {
      "args": "run main.ts",
      "output": "expected.out"
    }
  }
}
```

A multi-step test:

```jsonc
{
  "tests": {
    "multi_step": {
      "steps": [
        {
          "args": "cache main.ts",
          "output": "[WILDCARD]Download[WILDCARD]"
        },
        {
          "args": "run main.ts",
          "output": "main.out"
        }
      ]
    }
  }
}
```
With environment variables:

```jsonc
{
  "tests": {
    "with_env": {
      "args": "run main.ts",
      "output": "expected.out",
      "envs": {
        "MY_VAR": "value"
      }
    }
  }
}
```
With an expected exit code:

```jsonc
{
  "tests": {
    "should_fail": {
      "args": "run invalid.ts",
      "output": "error.out",
      "exitCode": 1
    }
  }
}
```
### Output Assertions

Expected output supports pattern matching:

#### Wildcards

- `[WILDCARD]` - Match 0+ characters (can cross newlines)
- `[WILDLINE]` - Match 0+ characters until the end of the line
- `[WILDCHAR]` - Match exactly one character
- `[WILDCHARS(5)]` - Match exactly 5 characters

Example:

```
Check file://[WILDCARD]/main.ts
[WILDCARD]
Successfully compiled [WILDLINE]
```
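To make the wildcard semantics concrete, here is an illustrative sketch of a matcher that compiles a pattern into a regular expression. This is not Deno's actual matcher (which lives in the Rust test runner); the function names `pattern_to_regex` and `output_matches` are hypothetical.

```python
import re

# Hypothetical sketch of spec-test wildcard matching; Deno's real
# implementation may differ in details.
def pattern_to_regex(pattern: str) -> re.Pattern:
    parts = []
    # Split on wildcard markers, keeping them via the capture group;
    # everything in between is literal text and gets escaped.
    tokens = re.split(
        r"(\[WILDCARD\]|\[WILDLINE\]|\[WILDCHAR\]|\[WILDCHARS\(\d+\)\])",
        pattern,
    )
    for token in tokens:
        if token == "[WILDCARD]":
            parts.append(r"(?s:.*)")      # 0+ chars, may cross newlines
        elif token == "[WILDLINE]":
            parts.append(r"[^\n]*")       # 0+ chars until end of line
        elif token == "[WILDCHAR]":
            parts.append(r"(?s:.)")       # exactly one char
        elif token.startswith("[WILDCHARS("):
            n = int(token[len("[WILDCHARS("):-2])
            parts.append(r"(?s:.{%d})" % n)  # exactly n chars
        else:
            parts.append(re.escape(token))
    return re.compile("^" + "".join(parts) + "$")

def output_matches(expected: str, actual: str) -> bool:
    return pattern_to_regex(expected).match(actual) is not None
```

For example, `output_matches("Check file://[WILDCARD]/main.ts", "Check file:///tmp/main.ts")` holds, while `[WILDLINE]` refuses to cross a newline.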
#### Unordered Output

For output whose line order is non-deterministic:

```
[UNORDERED_START]
line 1
line 2
line 3
[UNORDERED_END]
```

These lines will match in any order.

#### Comments

Lines of the form `[# ...]` are ignored by the matcher:

```
[# This is a comment and will be ignored]
Actual output to match
[# Another comment]
```
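The unordered-block and comment rules could be implemented along these lines. This is an illustrative Python sketch, not Deno's actual runner; `matches` is a hypothetical name.

```python
import re

# Hypothetical sketch: skip [# ...] comment lines, and let lines between
# [UNORDERED_START] and [UNORDERED_END] match in any order. Assumes every
# [UNORDERED_START] has a matching [UNORDERED_END].
def matches(expected: str, actual: str) -> bool:
    exp = [line for line in expected.splitlines()
           if not re.fullmatch(r"\[#.*\]", line)]  # drop comment lines
    act = actual.splitlines()
    i = 0  # position in actual output
    j = 0  # position in expected lines
    while j < len(exp):
        if exp[j] == "[UNORDERED_START]":
            end = exp.index("[UNORDERED_END]", j)
            block = exp[j + 1:end]
            # The next len(block) actual lines must equal the block
            # as a multiset, i.e. match in any order.
            if sorted(act[i:i + len(block)]) != sorted(block):
                return False
            i += len(block)
            j = end + 1
        else:
            if i >= len(act) or act[i] != exp[j]:
                return False
            i += 1
            j += 1
    return i == len(act)  # no unmatched trailing output
```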
### Inline Output

You can specify output inline instead of in a separate file:

```jsonc
{
  "tests": {
    "inline_output": {
      "args": "run main.ts",
      "output": "Hello, World!\n"
    }
  }
}
```
### Real-World Examples

A basic run test:

```jsonc
// tests/specs/run/basic/__test__.jsonc
{
  "tests": {
    "simple_script": {
      "args": "run --allow-read main.ts",
      "output": "main.out"
    }
  }
}
```
## Unit Tests

Unit tests are written inline with the Rust code using `#[test]` and `#[cfg(test)]`.

### Writing Unit Tests

```rust
// In your .rs file
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_my_function() {
        let result = my_function("input");
        assert_eq!(result, "expected");
    }

    #[test]
    fn test_error_case() {
        let result = fallible_function();
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn test_async_function() {
        let result = async_function().await;
        assert!(result.is_ok());
    }
}
```
### Running Unit Tests

```shell
# Run library unit tests only (no integration tests)
cargo test --lib

# Run unit tests in a specific package
cargo test -p deno_core --lib

# Run a specific test
cargo test tests::test_my_function -- --exact
```
## Integration Tests

Additional integration tests live in `cli/tests/` and `tests/integration/`.

```shell
# Run all integration tests
cargo test --test integration

# Run a specific integration test
cargo test --test integration test_name
```
## Web Platform Tests

WPT tests verify compliance with web standards.

### Running WPT Tests

```shell
# Run all WPT tests
cargo test wpt

# Run a specific WPT suite
cargo test wpt_fetch
cargo test wpt_url
```
### Location

```
tests/wpt/
├── runner/      # Test runner
├── suite/       # WPT test files (git submodule)
└── README.md    # Documentation
```
## Test Best Practices

### Use spec tests for CLI commands

Spec tests are ideal for testing command-line behavior:

```jsonc
{
  "tests": {
    "my_command": {
      "args": "my-command --flag input.ts",
      "output": "expected.out"
    }
  }
}
```
### Use unit tests for logic

Test individual functions with unit tests:

```rust
#[test]
fn test_parse_config() {
    let config = parse_config("{}");
    assert!(config.is_ok());
}
```
### Test both success and failure cases

```rust
#[test]
fn test_success_case() { /* ... */ }

#[test]
fn test_error_case() { /* ... */ }
```
### Use wildcards for non-deterministic output

```
Downloaded [WILDCARD] packages
Time: [WILDCARD]ms
```
### Keep tests focused and isolated

Each test should verify one specific behavior.
## Debugging Test Failures

### Show Test Output

```shell
# Show stdout/stderr from tests
cargo test -- --nocapture

# Show output for a specific test
cargo test test_name -- --nocapture
```
### Run Single Test

```shell
# Run only one test
cargo test test_name -- --exact

# Run in release mode (faster execution, slower build)
cargo test --release test_name
```
### Update Spec Test Output

When the output format changes intentionally:

1. Run the test to see the actual output
2. Update the `.out` file with the new expected output
3. Verify the test passes
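Steps 1 and 2 can be automated with a small helper that re-runs the command and overwrites the expected file. This is a hypothetical convenience script (`update_expected` is not part of the Deno repository), shown only as a sketch:

```python
import pathlib
import subprocess

# Hypothetical helper -- not part of the Deno repo. Re-runs a command and
# overwrites the expected-output file with whatever it actually printed.
# Review the resulting diff before committing: this blindly trusts the
# new output.
def update_expected(cmd: list[str], out_file: str) -> None:
    result = subprocess.run(cmd, capture_output=True, text=True)
    pathlib.Path(out_file).write_text(result.stdout)

# Example (hypothetical paths):
# update_expected(["deno", "run", "main.ts"], "expected.out")
```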
### Common Issues

#### Spec test output mismatch

**Problem:** Output doesn't match the expected file.

**Solution:**

- Check the test output carefully
- Use `[WILDCARD]` for variable parts
- Update the `.out` file if the output intentionally changed

#### Flaky tests

**Problem:** The test passes sometimes and fails other times.

**Solution:**

- Use `[UNORDERED_START]`/`[UNORDERED_END]` for output with non-deterministic order
- Check for race conditions
- Add timeouts or retries if needed

#### Permission errors in tests

**Problem:** The test fails with a permission-denied error.

**Solution:**

- Ensure the test grants the necessary permissions
- Check file permissions on test fixtures
- Use `--allow-all` if appropriate for the test
## Benchmarks

Run benchmarks:

```shell
# Run all benchmarks
cargo bench

# Run a specific benchmark
cargo bench bench_name
```
## Profiling

```shell
# Build with profiling symbols
cargo build --release --features=profiling

# Run with the perf profiler (Linux)
perf record -g ./target/release/deno run script.ts
perf report
```
## Continuous Integration

Tests run automatically on:

- Every commit to a PR
- Every merge to `main`
- Nightly builds

### CI Test Matrix

- Linux (Ubuntu)
- macOS
- Windows
- Multiple Rust versions
### Running CI Tests Locally

```shell
# Format check
./tools/format.js

# Lint
./tools/lint.js

# Test
cargo test
```
## Test Coverage

### Generate Coverage Report

```shell
# Install cargo-llvm-cov
cargo install cargo-llvm-cov

# Generate coverage
cargo llvm-cov --html

# Open the report
open target/llvm-cov/html/index.html
```
## Writing Good Tests

### Test Checklist

- Test covers the happy path
- Test covers error cases
- Test is deterministic (no race conditions)
- Test is isolated (doesn't depend on other tests)
- Test has a clear, descriptive name
- Output assertions use wildcards where appropriate
- Test runs quickly (< 1 second if possible)
## Next Steps

- **Debugging** - Learn debugging techniques
- **Code Structure** - Understand the codebase