Deno has a comprehensive test suite including unit tests, integration tests, and web platform tests.

Test Organization

Test Types

Spec Tests

Integration tests using __test__.jsonc files. Location: tests/specs/

Unit Tests

Rust tests written inline with the source code. Location: inline in *.rs files

Integration Tests

Additional integration tests. Location: cli/tests/

Web Platform Tests

Standards compliance tests. Location: tests/wpt/

Running Tests

All Tests

Running all tests takes a significant amount of time (30+ minutes).
cargo test

Filtered Tests

# Filter tests by name
cargo test http_server

# The filter is a substring match, not a glob pattern
cargo test fetch_

Package-Specific Tests

# Run tests in a specific package
cargo test -p deno_core
cargo test -p deno_runtime
cargo test -p deno_fetch

# Run CLI integration tests only
cargo test --bin deno

Spec Tests

# Run all spec tests
cargo test specs

# Run a specific spec test
cargo test spec::run::basic

# Run spec tests for a specific command
cargo test spec::lint
cargo test spec::fmt

Single Test

# Run a specific test by exact name (--exact is a test-harness flag, so it goes after --)
cargo test test_name_here -- --exact

# Run with output
cargo test test_name_here -- --nocapture

Spec Tests

Spec tests are the primary integration test format for Deno. They test CLI commands end-to-end.

How Spec Tests Work

  1. Create a directory in tests/specs/
  2. Add __test__.jsonc file describing test steps
  3. Add input files and expected output files
  4. Run tests with cargo test specs

Directory Structure

tests/specs/my_feature/
├── __test__.jsonc      # Test definitions
├── main.ts             # Input file
├── expected.out        # Expected output
└── other_file.ts       # Additional files

Creating a Spec Test

1. Create the test directory

mkdir -p tests/specs/my_feature
cd tests/specs/my_feature

2. Create __test__.jsonc

{
  "tests": {
    "basic_case": {
      "args": "run main.ts",
      "output": "expected.out"
    }
  }
}

3. Create the input file

// main.ts
console.log("Hello, World!");

4. Create the expected output

// expected.out
Hello, World!

5. Run the test

cargo test spec::my_feature

Test Schema

The __test__.jsonc schema is defined in tests/specs/schema.json.
Basic test:
{
  "tests": {
    "test_name": {
      "args": "run main.ts",
      "output": "expected.out"
    }
  }
}

Multi-step test:
{
  "tests": {
    "multi_step": {
      "steps": [
        {
          "args": "cache main.ts",
          "output": "[WILDCARD]Download[WILDCARD]"
        },
        {
          "args": "run main.ts",
          "output": "main.out"
        }
      ]
    }
  }
}

Environment variables:
{
  "tests": {
    "with_env": {
      "args": "run main.ts",
      "output": "expected.out",
      "envs": {
        "MY_VAR": "value"
      }
    }
  }
}

Expected exit code:
{
  "tests": {
    "should_fail": {
      "args": "run invalid.ts",
      "output": "error.out",
      "exitCode": 1
    }
  }
}

Output Assertions

Expected output supports pattern matching:

Wildcards

[WILDCARD] - Match 0+ characters (can cross newlines)
[WILDLINE] - Match 0+ characters until end of line
[WILDCHAR] - Match exactly one character
[WILDCHARS(5)] - Match exactly 5 characters
Example:
Check file://[WILDCARD]/main.ts
[WILDCARD]
Successfully compiled [WILDLINE]

Unordered Output

For non-deterministic output order:
[UNORDERED_START]
line 1
line 2
line 3
[UNORDERED_END]
These lines will match in any order.

Comments

[# This is a comment and will be ignored]
Actual output to match
[# Another comment]

Inline Output

You can specify output inline instead of in a separate file:
{
  "tests": {
    "inline_output": {
      "args": "run main.ts",
      "output": "Hello, World!\n"
    }
  }
}

Real-World Examples

// tests/specs/run/basic/__test__.jsonc
{
  "tests": {
    "simple_script": {
      "args": "run --allow-read main.ts",
      "output": "main.out"
    }
  }
}

Unit Tests

Unit tests are written inline with Rust code using #[test] and #[cfg(test)].

Writing Unit Tests

// In your .rs file
#[cfg(test)]
mod tests {
  use super::*;
  
  #[test]
  fn test_my_function() {
    let result = my_function("input");
    assert_eq!(result, "expected");
  }
  
  #[test]
  fn test_error_case() {
    let result = fallible_function();
    assert!(result.is_err());
  }
  
  #[tokio::test]
  async fn test_async_function() {
    let result = async_function().await;
    assert!(result.is_ok());
  }
}

Running Unit Tests

# Run all unit tests in a file
cargo test --lib

# Run tests in specific package
cargo test -p deno_core --lib

# Run specific test (--exact is a test-harness flag, so it goes after --)
cargo test tests::test_my_function -- --exact

Integration Tests

Additional integration tests in cli/tests/ and tests/integration/.
# Run all integration tests
cargo test --test integration

# Run specific integration test file
cargo test --test integration test_name

Web Platform Tests

WPT tests verify compliance with web standards.

Running WPT Tests

# Run all WPT tests
cargo test wpt

# Run specific WPT suite
cargo test wpt_fetch
cargo test wpt_url

Location

tests/wpt/
├── runner/            # Test runner
├── suite/             # WPT test files (git submodule)
└── README.md          # Documentation

Test Best Practices

1. Use spec tests for CLI commands

Spec tests are ideal for testing command-line behavior:
{
  "tests": {
    "my_command": {
      "args": "my-command --flag input.ts",
      "output": "expected.out"
    }
  }
}

2. Use unit tests for logic

Test individual functions with unit tests:
#[test]
fn test_parse_config() {
  let config = parse_config("{}");
  assert!(config.is_ok());
}

3. Test both success and failure cases

#[test]
fn test_success_case() { /* ... */ }

#[test]
fn test_error_case() { /* ... */ }

4. Use wildcards for non-deterministic output

Downloaded [WILDCARD] packages
Time: [WILDCARD]ms

5. Keep tests focused and isolated

Each test should test one specific behavior.

Debugging Test Failures

Show Test Output

# Show stdout/stderr from tests
cargo test -- --nocapture

# Show output for specific test
cargo test test_name -- --nocapture

Run Single Test

# Run only one test (--exact is a test-harness flag, so it goes after --)
cargo test test_name -- --exact

# Run in release mode (faster)
cargo test --release test_name

Update Spec Test Output

When output format changes intentionally:
  1. Run the test to see the actual output
  2. Update the .out file with the new expected output
  3. Verify the test passes

Common Issues

Problem: Output doesn't match expected.
Solution:
  1. Check the test output carefully
  2. Use [WILDCARD] for variable parts
  3. Update the .out file if the output intentionally changed

Problem: Test passes sometimes, fails other times.
Solution:
  1. Use [UNORDERED_START]/[UNORDERED_END] for unordered output
  2. Check for race conditions
  3. Add timeouts or retries if needed

Problem: Test fails with permission denied.
Solution:
  1. Ensure the test grants the necessary permissions
  2. Check file permissions on test fixtures
  3. Use --allow-all if appropriate for the test

Performance Testing

Benchmarks

Run benchmarks:
# Run all benchmarks
cargo bench

# Run specific benchmark
cargo bench bench_name

Profiling

# Build with profiling symbols
cargo build --release --features=profiling

# Run with profiler
perf record -g ./target/release/deno run script.ts
perf report

Continuous Integration

Tests run automatically on:
  • Every commit to a PR
  • Every merge to main
  • Nightly builds

CI Test Matrix

  • Linux (Ubuntu)
  • macOS
  • Windows
  • Multiple Rust versions

Running CI Tests Locally

# Format check
./tools/format.js

# Lint
./tools/lint.js

# Test
cargo test

Test Coverage

Generate Coverage Report

# Install cargo-llvm-cov
cargo install cargo-llvm-cov

# Generate coverage
cargo llvm-cov --html

# Open report
open target/llvm-cov/html/index.html

Writing Good Tests

Test Checklist

  • Test covers the happy path
  • Test covers error cases
  • Test is deterministic (no race conditions)
  • Test is isolated (doesn’t depend on other tests)
  • Test has clear, descriptive name
  • Output assertions use wildcards where appropriate
  • Test runs quickly (< 1 second if possible)

Next Steps

Debugging

Learn debugging techniques

Code Structure

Understand the codebase
