SQL Query Stress collects detailed performance metrics during load testing. Understanding these metrics is essential for identifying bottlenecks and validating optimizations.

Overview of Metrics

SQL Query Stress tracks several key performance indicators through the LoadEngine.QueryOutput class. Each metric provides insight into different aspects of query performance.

Core Metrics

Iterations Completed

What it measures: Total number of query executions that have finished.
Total Iterations = Number of Threads × Iterations per Thread
Example:
  • 10 threads × 50 iterations = 500 total iterations
This is your sample size. More iterations provide more reliable averages but take longer to complete.
How to use it:
  • Ensure all expected iterations completed
  • If fewer iterations completed than expected, check for errors or timeouts
  • Use as a baseline for calculating per-iteration metrics
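The formula above can be sketched as a small helper. This is an illustrative Python sketch (the function names are ours, not part of SQL Query Stress) for sanity-checking that a run completed its full sample size:

```python
def expected_iterations(threads: int, iterations_per_thread: int) -> int:
    """Total Iterations = Number of Threads x Iterations per Thread."""
    return threads * iterations_per_thread

def completion_ratio(completed: int, threads: int, iterations_per_thread: int) -> float:
    """Fraction of the expected sample size that actually finished.

    Anything below 1.0 means some executions failed or timed out.
    """
    return completed / expected_iterations(threads, iterations_per_thread)
```

For example, `expected_iterations(10, 50)` returns 500; if only 450 iterations completed, `completion_ratio(450, 10, 50)` returns 0.9, a signal to check the exception count.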

Active Threads

What it measures: Number of threads currently executing queries. This value updates in real-time during test execution:
  • Starts at your configured thread count
  • Decreases as threads complete their iterations
  • Returns to 0 when the test finishes
How to use it:
  • Monitor for bottlenecks (if active threads drop unexpectedly)
  • Verify thread concurrency is maintained throughout the test
  • Compare against SQL Server’s active request count

Elapsed Time

What it measures: Total wall-clock time from test start to completion.
Displayed in format: HH:MM:SS.mmmm
Example: 00:01:32.4567 = 1 minute, 32.4567 seconds
How to use it:
  • Calculate throughput: Iterations ÷ Elapsed Seconds = Queries per Second
  • Compare test duration across different configurations
  • Factor in startup and shutdown overhead for short tests
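The throughput calculation above can be sketched in a few lines. This is an illustrative Python helper (not part of the tool) that parses the HH:MM:SS.mmmm display format and computes queries per second:

```python
def parse_elapsed(hhmmss: str) -> float:
    """Convert the HH:MM:SS.mmmm display format into total seconds."""
    hours, minutes, seconds = hhmmss.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def queries_per_second(iterations: int, elapsed: str) -> float:
    """Throughput = Iterations / Elapsed Seconds."""
    return iterations / parse_elapsed(elapsed)
```

For the example above, 500 iterations in 00:01:32.4567 works out to roughly 5.4 queries per second.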

Performance Metrics

These metrics are collected from SQL Server when statistics collection is enabled (CollectTimeStats and CollectIoStats in settings).

CPU Seconds/Iteration (Avg)

What it measures: Average CPU time consumed by SQL Server to execute each query.
// From LoadEngine.cs line 168
_totalCpuTime += output.CpuTime;
// From FormMain.cs line 464
var avgCpu = _totalTimeMessages == 0 ? 0.0 : _totalCpuTime / _totalTimeMessages / 1000;
Source: SQL Server’s SET STATISTICS TIME ON output, parsed from info messages.
How to use it:
  • Lower is better (less CPU load per query)
  • High CPU time indicates:
    • Complex calculations
    • Inefficient query plans
    • Missing indexes forcing table scans
    • Scalar functions in SELECT or WHERE clauses
Optimization targets:
  • Reduce unnecessary computations
  • Replace scalar functions with inline table-valued functions
  • Add indexes to reduce scan operations
  • Simplify query logic
CPU time is reported in seconds per iteration. Multiply by your queries per second to get total CPU load.
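The multiplication described above is worth making explicit, because the result has a useful unit: CPU-seconds consumed per wall-clock second, where 1.0 corresponds to one fully busy core. An illustrative sketch (helper name is ours):

```python
def total_cpu_load(avg_cpu_per_iteration: float, queries_per_second: float) -> float:
    """CPU-seconds consumed per wall-clock second.

    A value of 1.0 keeps roughly one core fully busy; 4.56 needs
    about five cores just for this query's CPU work.
    """
    return avg_cpu_per_iteration * queries_per_second
```

For example, 0.0456 CPU seconds/iteration at 100 queries/second is 4.56 CPU-seconds per second, i.e. about five cores' worth of sustained CPU work.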

Actual Seconds/Iteration (Avg)

What it measures: Average elapsed time as measured by SQL Server (server-side timing).
// From LoadEngine.cs line 169
_totalElapsedTime += output.ElapsedTime;
// From FormMain.cs line 465
var avgActual = _totalTimeMessages == 0 ? 0.0 : _totalElapsedTime / _totalTimeMessages / 1000;
Source: SQL Server’s SET STATISTICS TIME ON output.
What it includes:
  • Query parsing and compilation
  • Execution time
  • I/O wait time
  • Lock wait time
  • Does NOT include network transfer time
How to use it:
  • Compare to CPU time to identify waiting:
    • Elapsed ≈ CPU: Query is CPU-bound
    • Elapsed >> CPU: Query is waiting (I/O, locks, etc.)
Example analysis:
CPU Time: 0.0234 seconds
Actual Time: 0.1234 seconds
Wait Time: 0.1234 - 0.0234 = 0.1000 seconds (81% waiting)
This indicates the query spends most time waiting, not processing. Investigate:
  • Disk I/O (check Logical Reads metric)
  • Lock contention (use sp_WhoIsActive)
  • Network waits
  • Resource semaphore waits (memory grants)
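The example analysis above can be expressed as a small function. This is an illustrative Python sketch (not part of the tool) that splits server-side elapsed time into work and waiting:

```python
def wait_breakdown(cpu_seconds: float, actual_seconds: float) -> tuple[float, float]:
    """Split server-side elapsed time into CPU work and waiting.

    Returns (seconds spent waiting, fraction of elapsed time spent waiting).
    """
    wait = actual_seconds - cpu_seconds
    return wait, wait / actual_seconds
```

Plugging in the numbers above, `wait_breakdown(0.0234, 0.1234)` gives 0.1000 seconds of waiting, about 81% of the elapsed time.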

Client Seconds/Iteration (Avg)

What it measures: Average elapsed time as measured by the client application (SQL Query Stress).
// From LoadEngine.cs line 172
_totalTime += output.Time.TotalMilliseconds;
// From FormMain.cs line 463
var avgIterations = _totalIterations == 0 ? 0.0 : _totalTime / _totalIterations / 1000;
Source: Stopwatch timing around query execution on the client side.
What it includes:
  • Everything in Actual Seconds
  • Network round-trip time
  • Client-side processing
  • Data retrieval time (if ForceDataRetrieval is enabled)
How to use it:
  • Compare to Actual Seconds to measure network latency:
    • Client - Actual = Network + Client Overhead
  • Higher Client time indicates:
    • Network latency
    • Large result sets being transferred
    • Client-side processing delays
Example:
Actual Time: 0.1234 seconds
Client Time: 0.1250 seconds
Network/Overhead: 0.0016 seconds (1.6 ms)
Low overhead is expected for local testing. Remote testing shows network impact.
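The Client minus Actual subtraction above is simple, but worth wrapping so it can be applied consistently across test runs. An illustrative sketch (helper name is ours):

```python
def network_overhead(client_seconds: float, actual_seconds: float) -> float:
    """Client - Actual = network round-trip + client-side overhead.

    Near-zero values are expected for local testing; large values point
    to network latency or oversized result sets.
    """
    return client_seconds - actual_seconds
```

For the example above, `network_overhead(0.1250, 0.1234)` returns 0.0016 seconds (1.6 ms) of combined network and client overhead.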

Logical Reads/Iteration (Avg)

What it measures: Average number of 8KB pages read from cache or disk to execute each query.
// From LoadEngine.cs line 162
_totalLogicalReads += output.LogicalReads;
// From FormMain.cs line 466
var avgReads = _totalReadMessages == 0 ? 0.0 : _totalLogicalReads / _totalReadMessages;
Source: SQL Server’s SET STATISTICS IO ON output, parsed from info messages.
What it includes:
  • Pages read from buffer cache (memory)
  • Pages read from disk
  • Includes all tables, indexes, and work tables accessed
How to use it:
  • Lower is better (fewer resources consumed)
  • High logical reads indicate:
    • Missing indexes
    • Index scans instead of index seeks
    • Table scans
    • Reading more data than necessary
Optimization targets: For a query reading 100,000 pages:
  1. Add covering indexes to eliminate key lookups
  2. Add filtered indexes for common WHERE clauses
  3. Rewrite to reduce rows scanned
  4. Partition large tables
A reduction in logical reads usually translates directly to faster query execution and lower CPU usage.
Interpreting values:
  • < 100 reads: Efficient query, likely using indexes well
  • 100-1,000 reads: Reasonable for many queries
  • 1,000-10,000 reads: May benefit from optimization
  • > 10,000 reads: Likely needs indexes or query rewrite
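The rule-of-thumb thresholds above can be encoded directly. This is an illustrative Python sketch (the buckets mirror the list above; the function name is ours):

```python
def classify_logical_reads(avg_reads: float) -> str:
    """Bucket an average logical-reads figure using rough rules of thumb."""
    if avg_reads < 100:
        return "efficient"                # likely using indexes well
    if avg_reads < 1_000:
        return "reasonable"               # acceptable for many queries
    if avg_reads < 10_000:
        return "may benefit from optimization"
    return "likely needs indexes or a query rewrite"
```

For example, the baseline query later in this page (2,547 reads/iteration) lands in the "may benefit from optimization" bucket, and after indexing (12 reads) it becomes "efficient".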

Error Metrics

Total Exceptions

What it measures: Number of query executions that resulted in errors.
Click on the Total Exceptions field (GUI) to view a detailed breakdown:
  • Exception message
  • Count of occurrences
// From FormMain.cs line 174
if (output.E != null)
{
    _totalExceptions++;
    // Exception is tracked by message
}
Common exceptions:
  • Timeout errors: Query exceeded CommandTimeout
  • Deadlocks: Transaction deadlocked on lock resources
  • Constraint violations: Primary key or foreign key violations
  • Conversion errors: Data type mismatches
How to use it:
  • Zero exceptions is the goal
  • A few exceptions may be acceptable depending on the test scenario
  • High exception rates indicate:
    • Queries need timeout adjustments
    • Concurrency issues (deadlocks)
    • Invalid test data
    • Application logic errors

Analyzing Results

Baseline Testing

Before making any optimizations:
Step 1: Run Baseline Test

Execute your test with current query and configuration:
Threads: 10
Iterations: 50
Results:
- CPU Time: 0.0456 sec
- Actual Time: 0.2341 sec
- Client Time: 0.2356 sec
- Logical Reads: 2,547 pages
Step 2: Export Results

Save results to CSV or text file for comparison:
# GUI: File → Export Results → To CSV
# CLI: -r baseline.csv
Step 3: Document Configuration

Record:
  • Query text
  • Index definitions
  • Statistics update date
  • SQL Server configuration

Comparing Before and After

After applying optimization (e.g., adding an index):
| Metric            | Before | After  | Change  | % Improvement   |
|-------------------|--------|--------|---------|-----------------|
| CPU Time (sec)    | 0.0456 | 0.0123 | -0.0333 | 73% faster      |
| Actual Time (sec) | 0.2341 | 0.0145 | -0.2196 | 94% faster      |
| Logical Reads     | 2,547  | 12     | -2,535  | 99.5% reduction |
| Exceptions        | 0      | 0      | 0       | -               |
Interpretation: The optimization dramatically reduced I/O (logical reads down 99.5%), which reduced both CPU and elapsed time. This is a successful optimization.

Understanding Metric Relationships

CPU < Elapsed: Query is waiting for resources
Example:
CPU Time: 0.05 sec
Elapsed Time: 0.50 sec
Waiting: 0.45 sec (90%)
Investigate:
  • High logical reads (I/O waits)
  • Lock waits (sp_WhoIsActive during test)
  • Memory grant waits
  • Parallelism waits
CPU ≈ Elapsed: Query is CPU-bound
Example:
CPU Time: 0.50 sec
Elapsed Time: 0.52 sec
Waiting: 0.02 sec (4%)
Optimize:
  • Reduce computational complexity
  • Add indexes to reduce rows processed
  • Remove unnecessary sorting/aggregation
High Reads, Low Time: Data is cached in memory
Logical Reads: 10,000 pages
Actual Time: 0.05 sec
Result: Query is fast now but will slow down if data isn’t cached. Reduce reads for sustainable performance.
High Reads, High Time: I/O is a bottleneck
Logical Reads: 10,000 pages
Actual Time: 2.50 sec
Result: Too much data being read. Add indexes or rewrite query.
Low Reads, High Time: Locking or other waits
Logical Reads: 50 pages
Actual Time: 5.00 sec
Result: Query is blocked by other queries. Investigate concurrency.
Client >> Server: Network latency or large result sets
Actual Time: 0.05 sec
Client Time: 0.50 sec
Overhead: 0.45 sec
Causes:
  • Returning too many rows/columns
  • Network latency
  • Not using SET NOCOUNT ON
Solutions:
  • Reduce result set size
  • Add WHERE clauses
  • Select only needed columns
  • Test from a server closer to SQL Server
Client ≈ Server: Minimal network overhead
Actual Time: 0.05 sec
Client Time: 0.051 sec
Overhead: 0.001 sec
Result: Network and client-side processing are not bottlenecks.

Progressive Load Testing

Test with increasing concurrency to find breaking points:
# Test with 1, 5, 10, 25, 50, 100 threads
for threads in 1 5 10 25 50 100; do
    sqlstresscmd -s test.json -t $threads -r results.csv
done

Analyzing Scalability

Plot metrics vs. thread count:
| Threads | Queries/Sec | Avg Time (sec) | Avg Reads | Exceptions |
|---------|-------------|----------------|-----------|------------|
| 1       | 8.5         | 0.118          | 245       | 0          |
| 5       | 42.0        | 0.119          | 245       | 0          |
| 10      | 83.5        | 0.120          | 245       | 0          |
| 25      | 185.0       | 0.135          | 245       | 0          |
| 50      | 295.0       | 0.170          | 245       | 2          |
| 100     | 380.0       | 0.263          | 245       | 15         |
Observations:
  • Linear scaling (1-10 threads): Throughput increases proportionally, time stays constant
  • Degrading scaling (25-50 threads): Time increases, some errors appear
  • Bottleneck (100 threads): Throughput levels off, time increases significantly, errors increase
Conclusion: System handles 25 concurrent users well but struggles beyond 50. Investigate resource constraints (CPU, memory, I/O).
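One way to quantify the observations above is scaling efficiency: throughput at N threads divided by N times the single-thread throughput, where 1.0 is perfect linear scaling. An illustrative Python sketch (not part of the tool):

```python
def scaling_efficiency(results: dict[int, float]) -> dict[int, float]:
    """Throughput at each thread count relative to perfect linear scaling.

    results maps thread count -> queries/sec; the key 1 must be present.
    1.0 means perfect scaling; values well below 1.0 mark the bottleneck.
    """
    base = results[1]  # single-thread throughput is the scaling baseline
    return {n: qps / (n * base) for n, qps in results.items()}
```

Applied to the table above, efficiency is about 0.98 at 10 threads but falls to roughly 0.45 at 100 threads, which matches the "throughput levels off" observation.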

Common Patterns

Pattern 1: Missing Index

Symptoms:
  • High logical reads
  • High CPU time
  • Actual time increases with threads
Before:
CPU Time: 0.234 sec
Logical Reads: 50,000 pages
After adding index:
CPU Time: 0.003 sec (-99%)
Logical Reads: 15 pages (-99.97%)

Pattern 2: Blocking/Locking

Symptoms:
  • CPU time stays low
  • Actual time increases dramatically with threads
  • Active threads drop during test
Single thread:
CPU Time: 0.05 sec
Actual Time: 0.05 sec
50 threads:
CPU Time: 0.05 sec (unchanged)
Actual Time: 5.00 sec (100x slower)
Solution: Investigate locking with sp_WhoIsActive, consider:
  • Reducing transaction scope
  • Using NOLOCK or READ UNCOMMITTED for reports
  • Optimizing update queries to reduce lock duration

Pattern 3: Parameter Sniffing

Symptoms:
  • High variance in execution times
  • Some iterations very fast, others very slow
  • Different plans for different parameter values
Solution: Use OPTION (RECOMPILE) or optimize for typical values.
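The "high variance" symptom above can be detected numerically with the coefficient of variation (standard deviation divided by mean) of per-iteration timings. An illustrative Python sketch (the threshold is a rough heuristic of ours, not a documented rule):

```python
import statistics

def timing_variation(iteration_seconds: list[float]) -> float:
    """Coefficient of variation of per-iteration elapsed times.

    Values near 0 mean consistent executions; large values (e.g. > 0.5)
    suggest plan instability such as parameter sniffing, where some
    iterations run fast and others run slow.
    """
    mean = statistics.mean(iteration_seconds)
    return statistics.stdev(iteration_seconds) / mean
```

A stable workload like `[0.10, 0.11, 0.09, 0.10]` yields a small value, while a sniffing-like mix such as `[0.05, 0.05, 2.0, 0.05]` yields a value well above 1.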

Pattern 4: Network Bottleneck

Symptoms:
  • Client time >> Server time
  • Time increases with thread count
  • Low CPU and reasonable reads
Example:
Actual Time: 0.05 sec
Client Time: 0.50 sec
Solution:
  • Reduce result set size
  • Test from server closer to SQL Server
  • Compress result sets if supported

Best Practices

  1. Run Tests Multiple Times: Results can vary. Run 3-5 times and average.
  2. Clear Caches Between Tests: Use “Clear Buffers” and “Free Cache” for consistent results.
  3. Test Realistic Loads: Use thread counts that match expected production concurrency.
  4. Monitor SQL Server: Watch Performance Monitor, Extended Events, or Query Store during tests.
  5. Document Everything: Save queries, indexes, settings, and results for future comparison.
  6. Test Edge Cases: Include unusual parameter values that might trigger different plans.

Next Steps

Using the GUI

Learn how to view results in the GUI application

Using the CLI

Export results from CLI tests for analysis

Query Parameterization

Test with varying parameters to find performance issues

Connection Settings

Configure settings that affect metrics collection
