Testing is a critical part of OpenJDK development. The JDK includes multiple test frameworks and thousands of tests to ensure quality and prevent regressions. This guide covers how to run tests, interpret results, and write new tests.
The easiest way to run tests is using the make test framework:
```shell
# Run tier1 tests (recommended minimum for all changes)
make test-tier1

# Run all tests (takes hours)
make test

# Run specific test group
make test TEST=jdk_lang
```
All contributors should run at least tier1 tests before submitting changes. These tests cover core functionality and run in 15-30 minutes on typical hardware.
```shell
# Set concurrency level
make test TEST=tier1 JTREG="JOBS=8"

# Increase timeout factor for slow machines
make test TEST=tier1 JTREG="TIMEOUT_FACTOR=8"

# Run in verbose mode
make test TEST=tier1 JTREG="VERBOSE=all"

# Pass Java options to tests
make test TEST=tier1 JTREG="JAVA_OPTIONS=-Xmx2g -Xlog:gc"
```
Common JTReg Options
JOBS=<n> - Test concurrency level (default: number of CPUs)
TIMEOUT_FACTOR=<n> - Multiply timeouts by factor (default: 4)
JAVA_OPTIONS=<options> - Java options for test classes
VM_OPTIONS=<options> - Options for compiling and running classes
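Several of these options can be combined in a single `JTREG` string, with the keyword=value pairs separated by semicolons. A sketch (the specific values are illustrative):

```shell
# Combine several JTReg options in one invocation;
# keyword=value pairs are separated by semicolons
make test TEST=tier1 JTREG="JOBS=8;TIMEOUT_FACTOR=8;VERBOSE=summary"
```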
```shell
# Run all GTest tests
make test TEST=gtest

# Run specific test
make test TEST=gtest:LogDecorations

# Run test repeatedly (useful for debugging intermittent failures)
make test TEST=gtest:LogDecorations GTEST="REPEAT=-1"

# Run with specific JVM variant
make test TEST=gtest:all/server
```
GTest requires configuring the build with gtest support. See the Building Guide for details.
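If the build was not configured with gtest, it can be enabled at configure time by pointing at a googletest source tree. A minimal sketch, assuming an illustrative checkout path:

```shell
# Point configure at a googletest source tree (path is illustrative)
bash configure --with-gtest=/path/to/googletest
make test TEST=gtest
```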
```shell
# Run microbenchmarks matching a pattern
make test TEST=micro:java.lang.String

# Run with specific parameters
make test TEST=micro:StringConcat MICRO="FORK=1;WARMUP_ITER=5;ITER=10"

# Run reflection benchmarks
make test TEST=micro:java.lang.reflect
```
Microbenchmark Options
FORK=<n> - Number of benchmark forks
ITER=<n> - Measurement iterations per fork
WARMUP_ITER=<n> - Warmup iterations before measurement
```shell
# Java language tests
make test TEST=jdk_lang

# Utilities tests
make test TEST=jdk_util

# Garbage collection tests
make test TEST=hotspot_gc

# Security tests
make test TEST=jdk_security

# Networking tests
make test TEST=jdk_net
```
```shell
# Test a specific directory
make test TEST=test/jdk/java/util/concurrent

# Test a single file
make test TEST=test/jdk/java/lang/String/StringTest.java

# Multiple tests
make test TEST="test/jdk/java/lang/String/StringTest.java test/jdk/java/lang/Integer/IntegerTest.java"
```
```shell
# Run tests with specific tags
make test TEST=jtreg:test/hotspot:hotspot_gc

# Multiple test roots
make test TEST="jtreg:test/jdk:tier1 jtreg:test/hotspot:tier1"
```
```shell
# Main results directory
build/<config>/test-results/

# Individual test results
build/<config>/test-results/<test-id>/

# Example: JTReg tier1 results
build/linux-x64/test-results/jtreg_test_jdk_tier1/

# Work files and logs
build/<config>/test-support/<test-id>/
```
```shell
# View the test report
cat build/<config>/test-results/<test-id>/text/stats.txt

# Check failed test logs
less build/<config>/test-results/<test-id>/text/newfail.txt
```
2. Examine Work Directory
```shell
# JTReg work directory contains detailed logs
cd build/<config>/test-support/<test-id>/work

# Each test has a .jtr file with details
less <TestName>.jtr
```
3. Rerun Failed Tests
```shell
# Use test-only to skip rebuild
make test-only TEST=<path-to-specific-test>

# Add debugging options
make test TEST=<test> JTREG="JAVA_OPTIONS=-Xlog:all=debug"
```
```shell
# Docker tests may need specific image configuration
make test TEST="jtreg:test/hotspot/jtreg/containers/docker" \
    JTREG="JAVA_OPTIONS=-Djdk.test.docker.image.name=ubuntu \
    -Djdk.test.docker.image.version=latest"
```
```shell
# May require alternative NSS library location
make test TEST="jtreg:sun/security/pkcs11" \
    JTREG="JAVA_OPTIONS=-Djdk.test.lib.artifacts.nsslib-linux_aarch64=/path/to/NSS-libs"
```
```shell
# Set US locale for consistent test behavior
export LANG="en_US"
make test TEST=tier1

# Or use JVM options
make test JTREG="VM_OPTIONS=-Duser.language=en -Duser.country=US" TEST=tier1
```
```shell
# Repeat tests to find intermittent failures (stops at first failure)
make test TEST=<test> JTREG="REPEAT_COUNT=100"

# For GTest
make test TEST=gtest:<test> GTEST="REPEAT=-1"

# Do not retry failed tests
make test TEST=<test> JTREG="RETRY_COUNT=0"
```
Tests with known issues are tracked in problem lists:
```shell
# Each test root has a ProblemList.txt
test/jdk/ProblemList.txt
test/hotspot/jtreg/ProblemList.txt

# Use additional problem lists
make test TEST=tier1 JTREG="EXTRA_PROBLEM_LISTS=/path/to/MyProblemList.txt"

# Run only problem-listed tests
make test TEST=tier1 JTREG="RUN_PROBLEM_LISTS=true"
```
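Each problem-list entry names a test, the bug that tracks the known issue, and the platforms on which the test is excluded. A sketch of the format (the test path and bug ID below are illustrative, not real entries):

```
# <test-path>  <bug-id>  <platforms>
java/lang/invoke/MyFlakyTest.java  8123456  linux-all,windows-x64
```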
```shell
# Maximum verbosity
make test TEST=<test> JTREG="VERBOSE=all"

# Retain all test data
make test TEST=<test> JTREG="RETAIN=all"

# Add Java logging
make test TEST=<test> JTREG="JAVA_OPTIONS=-Xlog:all=debug:file=/tmp/test.log"
```
```shell
# Run test with debugger options
make test TEST=<test> JTREG="JAVA_OPTIONS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"

# Then attach your debugger to port 5005
```
```shell
# Run benchmarks with proper warmup
make test TEST=micro:java.lang.String \
    MICRO="FORK=3;WARMUP_ITER=10;ITER=10"

# Save results for comparison
make test TEST=micro:java.lang.String \
    MICRO="RESULTS_FORMAT=json"

# Results saved to:
cat build/<config>/test-results/micro_*/results.json
```
```shell
# Baseline run
make test TEST=micro:StringConcat MICRO="RESULTS_FORMAT=json"
cp build/<config>/test-results/micro_*/results.json baseline.json

# After changes
make test TEST=micro:StringConcat MICRO="RESULTS_FORMAT=json"

# Compare baseline.json with the new results.json
```
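One way to compare the two result files is with `jq` (assuming it is installed); JMH's JSON output records each benchmark's mean score under `primaryMetric.score`:

```shell
# Print benchmark names and mean scores from each run (jq assumed available)
jq -r '.[] | "\(.benchmark) \(.primaryMetric.score)"' baseline.json
jq -r '.[] | "\(.benchmark) \(.primaryMetric.score)"' results.json
```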