Testing is a critical part of the Rust compiler development process. This guide covers the comprehensive testing infrastructure used in the project.
For detailed information about the compiler testing framework, see the rustc-dev-guide testing section.

Test Suite Overview

The Rust compiler project includes multiple test suites, each serving different purposes:
Test Suite    Purpose                                                          Location
ui            Compile-fail and run-pass tests with error output verification   tests/ui/
run-make      Complex tests using Makefiles                                    tests/run-make/
codegen       LLVM IR generation tests                                         tests/codegen-llvm/
assembly      Assembly output tests                                            tests/assembly-llvm/
mir-opt       MIR optimization tests                                           tests/mir-opt/
debuginfo     Debug information tests                                          tests/debuginfo/
rustdoc       Documentation generation tests                                   tests/rustdoc-html/, tests/rustdoc-ui/
incremental   Incremental compilation tests                                    tests/incremental/
coverage      Code coverage tests                                              tests/coverage/

Running Tests

Basic Test Commands

Run the entire test suite (takes several hours):
./x.py test
Running all tests takes significant time and resources. Use more targeted approaches during development.
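In practice you will almost always scope the run. A few common invocations (illustrative; the suite and file paths are examples):

```shell
# Run a single suite
./x.py test tests/ui

# Run one test file
./x.py test tests/ui/issues/issue-12345.rs

# Run every test under a subdirectory
./x.py test tests/ui/borrowck

# Filter tests by name within a suite
./x.py test tests/ui --test-args borrow
```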

Test Flags and Options

# Run with specific number of threads
./x.py test tests/ui --test-args --test-threads=4

Writing UI Tests

UI tests are the most common type of test. They verify compiler output, including error messages.

Basic UI Test Structure

Create a test file in tests/ui/:
// Test for issue #12345: ICE with associated types in trait bounds

trait MyTrait {
    type Assoc;
}

fn foo<T: MyTrait<Assoc = i32>>() {
    // This should compile without ICE
}

fn main() {}

Test Directives

Use special comments at the top of the file to control test behavior. The pass-mode directives are mutually exclusive, and a UI test with no pass directive is expected to fail with exactly the errors annotated in the source:
// compile-flags: -O -C no-prepopulate-passes
// check-pass
// build-pass
// run-pass
fn main() {
    let x: i32 = "string"; //~ ERROR mismatched types
    //~| expected `i32`, found `&str`
}
The //~ markers indicate expected error locations:
  • //~ - Error on the same line
  • //~^ - Error on the previous line
  • //~| - Additional error message
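As a runnable illustration of //~^ placement (a hypothetical test file; a warning is used here instead of a hard error so the snippet also compiles on its own):

```rust
// check-pass

// The annotation sits on the line *after* the offending one,
// so //~^ points one line up.
fn compute() -> i32 {
    let unused = 0;
    //~^ WARN unused variable: `unused`
    42
}

fn main() {
    println!("{}", compute());
}
```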
Platform and edition directives restrict where a test runs and which edition it is compiled under:
// only-x86_64
// only-windows
// ignore-emscripten
// ignore-windows
// edition:2021
// edition:2018

Updating Test Expectations

When error messages change, update expectations with “bless”:
./x.py test tests/ui/issues/issue-12345.rs --bless
This updates the .stderr or .stdout files to match the new output.
Always review blessed changes carefully to ensure they’re intentional improvements, not regressions.
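A blessed .stderr file stores rustc's diagnostics with the test path normalized to $DIR and line-number gutters to LL. For the mismatched-types example above it would look roughly like this (illustrative, not generated output):

```text
error[E0308]: mismatched types
  --> $DIR/issue-12345.rs:10:18
   |
LL |     let x: i32 = "string";
   |            ---   ^^^^^^^^ expected `i32`, found `&str`
   |            |
   |            expected due to this

error: aborting due to 1 previous error

For more information about this error, try `rustc --explain E0308`.
```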

Writing Run-Make Tests

For complex scenarios requiring multiple files or build steps:
1. Create Test Directory

mkdir tests/run-make/my-feature-test
2. Write Makefile

Create tests/run-make/my-feature-test/Makefile:
include ../tools.mk

all:
	$(RUSTC) --crate-type lib lib.rs
	$(RUSTC) --extern mylib=liblib.rlib main.rs
	$(call RUN,main)
3. Add Source Files

Create the necessary Rust source files (here lib.rs and main.rs) in the same directory.
4. Run the Test

./x.py test tests/run-make/my-feature-test
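The two source files referenced by the Makefile might look like this (hypothetical contents; collapsed into one snippet with the library inlined as a module so the sketch is self-contained — in the real test, lib.rs and main.rs are separate files):

```rust
// lib.rs — built first with: $(RUSTC) --crate-type lib lib.rs
// (inlined as a module for this sketch)
mod mylib {
    pub fn greeting() -> String {
        String::from("hello from mylib")
    }
}

// main.rs — linked with: $(RUSTC) --extern mylib=liblib.rlib main.rs
fn main() {
    println!("{}", mylib::greeting());
}
```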

Codegen and Assembly Tests

Codegen Tests

Verify the LLVM IR generated by rustc:
// compile-flags: -O -C no-prepopulate-passes

#![crate_type = "lib"]

// CHECK-LABEL: @add_one
// CHECK: add i32 %{{.*}}, 1
// CHECK-NEXT: ret i32
#[no_mangle]
pub fn add_one(x: i32) -> i32 {
    x + 1
}

Assembly Tests

Verify the generated assembly code:
// assembly-output: emit-asm
// compile-flags: -O
// only-x86_64

#![crate_type = "lib"]

// CHECK-LABEL: multiply:
// CHECK: imul
#[no_mangle]
pub fn multiply(a: i32, b: i32) -> i32 {
    a * b
}

MIR Optimization Tests

Test optimizations on the mid-level intermediate representation (MIR):
// EMIT_MIR const-prop.main.ConstProp.diff

fn main() {
    let x = 2 + 2;
    let y = x * 2;
}
Run and bless:
./x.py test tests/mir-opt/const-prop.rs --bless
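The emitted .diff records the pass replacing both runtime computations with literals. A runnable sketch of the before/after semantics (the assertions are illustrative and not part of the mir-opt test itself):

```rust
fn main() {
    let x = 2 + 2; // const-prop rewrites this to: let x = 4;
    let y = x * 2; // and this to:                let y = 8;
    assert_eq!(x, 4);
    assert_eq!(y, 8);
}
```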

Debugging Test Failures

Understanding Test Output

When a test fails, you’ll see:
failures:

---- [ui] tests/ui/issues/issue-12345.rs stdout ----

error: test compilation failed although it shouldn't!
status: exit code: 101
command: "/path/to/rustc" "tests/ui/issues/issue-12345.rs" ...
--stderr-------------------------------
error[E0308]: mismatched types
  --> tests/ui/issues/issue-12345.rs:10:18
   |
10 |     let x: i32 = "string";
   |            ---   ^^^^^^^^ expected `i32`, found `&str`

Common Debugging Strategies

Run just the failing test:
./x.py test tests/ui/issues/issue-12345.rs --verbose
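Other useful variations while iterating (illustrative; the stage1 path varies by host triple):

```shell
# Re-run even if the test previously passed
./x.py test tests/ui/issues/issue-12345.rs --force-rerun

# Update expected output once the new behavior is confirmed correct
./x.py test tests/ui/issues/issue-12345.rs --bless

# Get a backtrace from an ICE
RUST_BACKTRACE=1 ./x.py test tests/ui/issues/issue-12345.rs

# Bypass the harness and invoke the freshly built compiler directly
./build/host/stage1/bin/rustc tests/ui/issues/issue-12345.rs
```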

Incremental Testing

Test incremental compilation scenarios:
// revisions: rpass1 rpass2

#[cfg(rpass1)]
struct Foo {
    x: i32,
}

#[cfg(rpass2)]
struct Foo {
    x: i32,
    y: i32,  // Added field
}

fn main() {
    let _ = Foo { x: 0 };
}
The test runs in two passes to verify incremental behavior.

Coverage Testing

Test code coverage instrumentation:
// compile-flags: -C instrument-coverage

fn main() {
    covered_function();
}

fn covered_function() {
    println!("This should be covered");
}

fn uncovered_function() {
    println!("This should not be covered");
}

CI Testing Workflow

Local CI Simulation

Before pushing, simulate what CI will run:
1. Tidy Check

./x.py test tidy
Checks formatting, license headers, and project standards
2. UI Test Suite

./x.py test tests/ui
Most likely to catch issues
3. Build Tests

./x.py test --stage 1
Full build with stage 1 compiler

CI Job Matrix

The CI runs tests on multiple platforms defined in .github/workflows/ci.yml:
  • Linux: x86_64-gnu-llvm, x86_64-gnu-tools, i686-gnu
  • Windows: x86_64-msvc, i686-msvc, x86_64-gnu
  • macOS: x86_64-darwin, aarch64-darwin
  • Cross-compilation: Various embedded and tier 2 targets
You can see the job matrix definition in src/ci/github-actions/jobs.yml, which is processed by the citool to calculate which jobs run.

Performance Testing

For changes that might impact performance:

Request Performance Run

On your PR, comment:
@bors try @rust-timer queue
This starts a try build and then queues a performance comparison against the master branch.

Interpret Results

The bot will report:
  • Instruction count changes
  • Wall-time changes
  • Memory usage changes
  • Binary size changes
Regression of more than 1-2% in instruction counts typically needs justification or mitigation.

Test Organization Best Practices

Name Tests Clearly

Use descriptive names like issue-12345.rs or trait-bound-ice.rs

Keep Tests Minimal

Reduce test cases to the smallest reproducing example

Add Comments

Explain what the test is checking and why

Link to Issues

Reference the issue or RFC in test comments

Use Correct Suite

Put tests in the appropriate suite (ui, run-make, etc.)

Update Expectations

Run with --bless when error messages change

Common Test Patterns

Regression Tests

For bug fixes, add a regression test:
// Regression test for issue #12345
// This used to cause an ICE

// check-pass

trait MyTrait {
    type Assoc;
}

fn foo<T: MyTrait<Assoc = i32>>() {}

fn main() {}

Feature Tests

For new features (behind feature gates):
// Test that my_feature requires a feature gate

fn main() {
    my_feature_syntax(); //~ ERROR my_feature is experimental
}
And the positive test:
// Test my_feature works when enabled

// check-pass
#![feature(my_feature)]

fn main() {
    my_feature_syntax();
}

Troubleshooting

Test Won’t Run

Ensure you’re using the right stage:
# Build stage 1 first
./x.py build --stage 1

# Then test with stage 1
./x.py test --stage 1 tests/ui
Some tests only run on specific platforms; compiletest skips them automatically elsewhere. Check the directives:
// only-x86_64
// only-linux
On other platforms these tests are reported as ignored, not failed.
Clean and rebuild:
rm -rf build/
./x.py build

Additional Resources

rustc-dev-guide: Tests

Comprehensive testing documentation

compiletest Documentation

Details on the test harness

Adding New Tests

Step-by-step guide for new tests

CI Configuration

View the CI workflow definition

Summary

1. Choose the Right Test Suite

Use UI tests for most cases, run-make for complex scenarios
2. Write Minimal Tests

Keep tests focused and as small as possible
3. Test Locally First

Run ./x.py test on your changes before pushing
4. Update Expectations

Use --bless when error messages legitimately change
5. Monitor CI

Watch CI results and fix failures promptly

