Overview
The output command runs your solution against existing test cases and optionally saves the output to files. This is useful for generating answer files, debugging specific test cases, or preparing submissions that require output files.
Unlike other commands, output does not generate random test cases; it only runs against existing test files matched by a prefix.
Basic Usage
quicktest output --target-file=main.cpp --prefix=test_cases/testcase_ac
Or using the shorter alias:
qt output -t main.cpp -p test_cases/testcase_ac
Required Parameters
--target-file (short form: -t)
Your solution file to run. Can be any supported language (C++, Java, Python, Rust, Go, C, Kotlin).
Example: --target-file main.cpp or -t main.cpp

--prefix (short form: -p)
Path prefix for existing test case files. QuickTest will find all files matching this prefix pattern.
Example: --prefix test_cases/testcase_ac or -p test_cases/testcase_ac
If your test files are named testcase_ac_01.txt, testcase_ac_02.txt, etc., use --prefix=testcase_ac and QuickTest will automatically find all matching files.
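The prefix behaves like a leading glob on the file name. As a rough illustration (the file names and directory below are made up for this sketch), matching a prefix in the shell looks like:

```shell
# Hypothetical illustration of prefix matching: create a few sample
# test files, then count everything whose name starts with the prefix.
mkdir -p demo_prefix/test_cases
cd demo_prefix
touch test_cases/testcase_ac_01.txt test_cases/testcase_ac_02.txt
touch test_cases/edge_case_01.txt
# A prefix of test_cases/testcase_ac matches only the first two files,
# not edge_case_01.txt.
matches=$(ls test_cases/testcase_ac* | wc -l)
echo $matches
```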
Optional Parameters
--timeout (alias: --tout)
Maximum execution time per test case in milliseconds.
Example: --tout 1000

--memory-limit (alias: --ml, default: 1000000000)
Memory limit in bytes; the default is 1 GB.
Example: --ml 512000000
Control Flags
--break-bad (aliases: --break, -b)
Stop execution immediately when a TLE (Time Limit Exceeded) or RTE (Runtime Error) occurs.

--save-out
Save the output of the target file for each test case to a file.
Examples
Run Tests Without Saving Output
Simply execute your solution on all test cases:
qt output -t main.cpp -p test_cases/testcase_ac
Generate Output Files
Create output files for each test case:
qt output -t main.cpp -p test_cases/testcase_ac --save-out
This will create output files alongside your test inputs.
Complete Example
Suppose you have these test files:
test_cases/
  testcase_ac_01.txt
  testcase_ac_02.txt
  testcase_ac_03.txt
main.cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> values(n);
    for (int &a : values) cin >> a;
    int best = 0, sum = 0;
    for (int i = 0; i < n; i++) {
        sum = max(values[i], sum + values[i]);
        best = max(best, sum);
    }
    cout << best << "\n";
    return 0;
}
Run and save outputs:
qt output -t main.cpp -p test_cases/testcase_ac --save-out
This generates:
test_cases/
  testcase_ac_01.txt
  testcase_ac_01.out.txt (contains: 8)
  testcase_ac_02.txt
  testcase_ac_02.out.txt (contains: 10)
  testcase_ac_03.txt
  testcase_ac_03.out.txt (contains: 0)
Using Different Prefixes
You can run different test sets separately:
# Run only accepted test cases
qt output -t main.cpp -p test_cases/testcase_ac
# Run only edge cases
qt output -t main.cpp -p test_cases/edge_case
# Run only large inputs
qt output -t main.cpp -p test_cases/large --tout 3000
With Subdirectories
qt output -t cpp/main.cpp -p test_cases/testcase_ac --save-out
Stop on First Error
Halt immediately if any test case fails:
qt output -t main.cpp -p test_cases/testcase_ac --break-bad --save-out
Custom Time Limits
For problems with strict time constraints:
qt output -t main.cpp -p test_cases/testcase_ac --tout 500 --save-out
Workflow Examples
Generate Reference Outputs
Prepare test inputs
Create your test case files with a consistent naming pattern:
testcase_01.txt
testcase_02.txt
testcase_03.txt
Run output command
Generate output files:
qt output -t correct.cpp -p testcase --save-out
Use outputs for validation
Now you have reference outputs to compare against:
testcase_01.txt → testcase_01.out.txt
testcase_02.txt → testcase_02.out.txt
testcase_03.txt → testcase_03.out.txt
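The comparison step can be scripted with plain diff. A minimal sketch, where the file names and the .myout.txt suffix are invented for this example:

```shell
# Compare a fresh run's output against a saved reference output.
mkdir -p demo_ref
cd demo_ref
echo "8" > testcase_01.out.txt     # reference output (e.g. from correct.cpp)
echo "8" > testcase_01.myout.txt   # output from the optimized solution
if diff -q testcase_01.out.txt testcase_01.myout.txt > /dev/null; then
    result="testcase_01: match"
else
    result="testcase_01: MISMATCH"
fi
echo "$result"
```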
Debug Specific Test Cases
Identify failing test
After running cmp or check, you have saved test cases that failed.
Run output on that specific case
qt output -t main.cpp -p saved_tests/testcase_wa_01 --save-out
Examine the output
Review the generated output file to understand what your solution produced.
Batch Processing
Process multiple test sets:
# Generate outputs for all accepted cases
qt output -t main.cpp -p test_cases/testcase_ac --save-out
# Generate outputs for all wrong answer cases
qt output -t main.cpp -p test_cases/testcase_wa --save-out
# Generate outputs for edge cases
qt output -t main.cpp -p test_cases/edge --save-out
Common Use Cases
Answer file generation: create expected output files for test cases.
Debugging: see exactly what your solution outputs for specific inputs.
Regression testing: verify outputs don't change after refactoring.
Submission preparation: some contests require pre-generated output files.
Understanding Output Files
When using --save-out, QuickTest creates output files with this naming pattern:
Input file: testcase_ac_01.txt
Output file: testcase_ac_01.out.txt
The output file contains exactly what your program printed to stdout.
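The naming rule is a simple suffix swap: drop the trailing .txt and append .out.txt. Sketched with shell parameter expansion (the variable names are arbitrary):

```shell
# Derive the output file name from an input file name:
# ${input%.txt} strips the trailing .txt, then .out.txt is appended.
input="testcase_ac_01.txt"
output="${input%.txt}.out.txt"
echo "$output"
```

Running this prints testcase_ac_01.out.txt, matching the pattern above.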
Test Results
QuickTest displays status for each test case:
AC - Accepted (completed successfully)
TLE - Time Limit Exceeded
MLE - Memory Limit Exceeded
RTE - Runtime Error
Since output doesn’t compare against expected results, you won’t see WA (Wrong Answer) status. It only indicates whether your program ran successfully.
Tips
Consistent naming: Use a clear prefix pattern for your test files (e.g., testcase_, input_, test_) to make them easy to reference.
Organize by category: Group test cases into subdirectories or use different prefixes:
test_cases/small_ for small inputs
test_cases/large_ for large inputs
test_cases/edge_ for edge cases
Generate reference outputs early: Use your brute-force solution to create correct output files, then compare your optimized solution against them.
Output files overwrite: If you run with --save-out multiple times, existing .out.txt files will be overwritten without warning.
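If you want to keep the previous outputs before regenerating them, copy them aside first. A minimal sketch, with directory names invented for this example:

```shell
# Back up existing .out.txt files before re-running with --save-out.
mkdir -p demo_backup/test_cases demo_backup/out_backup
cd demo_backup
echo "8" > test_cases/testcase_ac_01.out.txt
cp test_cases/*.out.txt out_backup/
backed_up=$(ls out_backup | wc -l)
echo $backed_up
```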
Combining with Other Commands
The output command works well with other QuickTest commands:
Use cmp to find issues:
qt cmp -t main.cpp -c correct.cpp -g gen.cpp --save-bad
Generate outputs for failed cases:
qt output -t main.cpp -p .qt/testcase_wa --save-out
Manually inspect the differences:
diff .qt/testcase_wa_01.expected.txt .qt/testcase_wa_01.out.txt
See Also
cmp - Compare solutions with correctness checking
check - Use custom checkers for validation
stress - Performance testing without output validation