# Overview of Metrics
SQL Query Stress tracks several key performance indicators through the LoadEngine.QueryOutput class. Each metric provides insight into a different aspect of query performance.

## Core Metrics
### Iterations Completed

What it measures: Total number of query executions that have finished. For example, 10 threads × 50 iterations = 500 total iterations.

How to use it:
- Ensure all expected iterations completed
- If fewer iterations completed than expected, check for errors or timeouts
- Use as a baseline for calculating per-iteration metrics
### Active Threads

What it measures: Number of threads currently executing queries. This value updates in real time during test execution:

- Starts at your configured thread count
- Decreases as threads complete their iterations
- Returns to 0 when the test finishes

How to use it:
- Monitor for bottlenecks (if active threads drop unexpectedly)
- Verify thread concurrency is maintained throughout the test
- Compare against SQL Server’s active request count
### Elapsed Time

What it measures: Total wall-clock time from test start to completion. Displayed in the format `HH:MM:SS.mmmm`.

Example: `00:01:32.4567` = 1 minute, 32.4567 seconds
How to use it:
- Calculate throughput: Iterations ÷ Elapsed Seconds = Queries per Second
- Compare test duration across different configurations
- Factor in startup and shutdown overhead for short tests
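The throughput calculation above can be sketched in Python. This is an illustrative helper, not part of SQL Query Stress; the function names are invented for the example.

```python
def elapsed_to_seconds(elapsed: str) -> float:
    """Convert an 'HH:MM:SS.mmmm' elapsed string to total seconds."""
    hours, minutes, seconds = elapsed.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def queries_per_second(iterations: int, elapsed: str) -> float:
    """Iterations ÷ Elapsed Seconds = Queries per Second."""
    return iterations / elapsed_to_seconds(elapsed)

# 500 iterations (10 threads × 50 iterations) over 00:01:32.4567
print(round(queries_per_second(500, "00:01:32.4567"), 2))  # 5.41
```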
## Performance Metrics

These metrics are collected from SQL Server when statistics collection is enabled (CollectTimeStats and CollectIoStats in settings).

### CPU Seconds/Iteration (Avg)

What it measures: Average CPU time consumed by SQL Server to execute each query. Collected from SET STATISTICS TIME ON output, parsed from info messages.
How to use it:
- Lower is better (less CPU load per query)
- High CPU time indicates:
  - Complex calculations
  - Inefficient query plans
  - Missing indexes forcing table scans
  - Scalar functions in SELECT or WHERE clauses

To reduce CPU time:
- Reduce unnecessary computations
- Replace scalar functions with inline table-valued functions
- Add indexes to reduce scan operations
- Simplify query logic
CPU time is reported in seconds per iteration. Multiply by your queries per second to get total CPU load.
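The multiplication described above amounts to estimating how many CPU cores the workload keeps busy. A minimal illustrative sketch (the helper is not part of the tool):

```python
def total_cpu_load(cpu_sec_per_iteration: float, queries_per_second: float) -> float:
    """CPU-seconds consumed per wall-clock second; roughly the number
    of cores the workload keeps fully busy."""
    return cpu_sec_per_iteration * queries_per_second

# 0.05 CPU sec/iteration at 200 queries/sec keeps about 10 cores busy
print(total_cpu_load(0.05, 200))
```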
### Actual Seconds/Iteration (Avg)

What it measures: Average elapsed time as measured by SQL Server (server-side timing). Collected from SET STATISTICS TIME ON output.
What it includes:
- Query parsing and compilation
- Execution time
- I/O wait time
- Lock wait time
- Does NOT include network transfer time
How to use it:
- Compare to CPU time to identify waiting:
  - Elapsed ≈ CPU: Query is CPU-bound
  - Elapsed >> CPU: Query is waiting (I/O, locks, etc.)

Common causes of waiting:
- Disk I/O (check Logical Reads metric)
- Lock contention (use `sp_WhoIsActive`)
- Network waits
- Resource semaphore waits (memory grants)
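The Elapsed-vs-CPU comparison can be expressed as a small classifier. The 1.5× threshold here is an arbitrary illustrative choice, not something the tool defines:

```python
def classify(cpu_seconds: float, elapsed_seconds: float, ratio: float = 1.5) -> str:
    """Label an iteration as CPU-bound or wait-bound based on how much
    longer elapsed time is than CPU time."""
    if elapsed_seconds > cpu_seconds * ratio:
        return "waiting (I/O, locks, etc.)"
    return "CPU-bound"

print(classify(0.045, 0.047))  # elapsed ≈ CPU
print(classify(0.045, 0.234))  # elapsed >> CPU
```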
### Client Seconds/Iteration (Avg)

What it measures: Average elapsed time as measured by the client application (SQL Query Stress).

What it includes:
- Everything in Actual Seconds
- Network round-trip time
- Client-side processing
- Data retrieval time (if ForceDataRetrieval is enabled)
How to use it:
- Compare to Actual Seconds to measure network latency: Client − Actual = Network + Client Overhead
- Higher Client time indicates:
  - Network latency
  - Large result sets being transferred
  - Client-side processing delays
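The subtraction above is simple but worth making explicit; a hedged sketch with an invented function name:

```python
def network_and_client_overhead(client_seconds: float, actual_seconds: float) -> float:
    """Client − Actual = Network + Client Overhead."""
    return client_seconds - actual_seconds

# e.g. 0.0310 s at the client vs. 0.0145 s measured by SQL Server
print(round(network_and_client_overhead(0.0310, 0.0145), 4))  # 0.0165
```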
### Logical Reads/Iteration (Avg)

What it measures: Average number of 8KB pages read from cache or disk to execute each query. Collected from SET STATISTICS IO ON output, parsed from info messages.
What it includes:
- Pages read from buffer cache (memory)
- Pages read from disk
- Includes all tables, indexes, and work tables accessed
How to use it:
- Lower is better (fewer resources consumed)
- High logical reads indicate:
  - Missing indexes
  - Index scans instead of index seeks
  - Table scans
  - Reading more data than necessary

To reduce logical reads:
- Add covering indexes to eliminate key lookups
- Add filtered indexes for common WHERE clauses
- Rewrite queries to reduce rows scanned
- Partition large tables

Rough guidelines:
- < 100 reads: Efficient query, likely using indexes well
- 100-1,000 reads: Reasonable for many queries
- 1,000-10,000 reads: May benefit from optimization
- > 10,000 reads: Likely needs indexes or query rewrite
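Because each logical read is one 8KB page, the metric translates directly into data volume. An illustrative conversion (helper name invented for this example):

```python
PAGE_KB = 8  # SQL Server data pages are 8 KB

def reads_to_mb(logical_reads: int) -> float:
    """Approximate data touched per iteration, in MB."""
    return logical_reads * PAGE_KB / 1024

# 2,547 reads/iteration ≈ 19.9 MB touched per query execution
print(round(reads_to_mb(2547), 1))
```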
## Error Metrics

### Total Exceptions

What it measures: Number of query executions that resulted in errors. Click the Total Exceptions field (GUI) to view a detailed breakdown:

- Exception message
- Count of occurrences

Common exceptions:
- Timeout errors: Query exceeded CommandTimeout
- Deadlocks: Transaction deadlocked on lock resources
- Constraint violations: Primary key or foreign key violations
- Conversion errors: Data type mismatches
How to use it:
- Zero exceptions is the goal
- A few exceptions may be acceptable depending on the test scenario
- High exception rates indicate:
  - Queries need timeout adjustments
  - Concurrency issues (deadlocks)
  - Invalid test data
  - Application logic errors
## Analyzing Results

### Baseline Testing

Before making any optimizations, run the test unchanged and record the results as a baseline for later comparison.

### Comparing Before and After

After applying an optimization (e.g., adding an index), compare against the baseline:

| Metric | Before | After | Change | % Improvement |
|---|---|---|---|---|
| CPU Time (sec) | 0.0456 | 0.0123 | -0.0333 | 73% faster |
| Actual Time (sec) | 0.2341 | 0.0145 | -0.2196 | 94% faster |
| Logical Reads | 2,547 | 12 | -2,535 | 99.5% reduction |
| Exceptions | 0 | 0 | 0 | - |
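The "% Improvement" column above can be computed as reduction relative to the baseline; a short illustrative helper:

```python
def pct_improvement(before: float, after: float) -> float:
    """Reduction relative to the baseline, as a percentage."""
    return (before - after) / before * 100

print(round(pct_improvement(0.0456, 0.0123)))  # CPU time: 73% faster
print(round(pct_improvement(2547, 12), 1))     # logical reads: 99.5% reduction
```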
## Understanding Metric Relationships
### CPU vs. Elapsed Time
CPU < Elapsed: Query is waiting for resources. Investigate:
- High logical reads (I/O waits)
- Lock waits (run `sp_WhoIsActive` during the test)
- Memory grant waits
- Parallelism waits

CPU ≈ Elapsed: Query is CPU-bound. Optimize:
- Reduce computational complexity
- Add indexes to reduce rows processed
- Remove unnecessary sorting/aggregation
### Logical Reads vs. Time
High Reads, Low Time: Data is cached in memory. The query is fast now but will slow down if the data isn't cached; reduce reads for sustainable performance.

High Reads, High Time: I/O is a bottleneck. Too much data is being read; add indexes or rewrite the query.

Low Reads, High Time: Locking or other waits. The query is blocked by other queries; investigate concurrency.
### Client vs. Server Time

Client >> Server: Network latency or large result sets. Causes:
- Returning too many rows/columns
- Network latency
- Not using `SET NOCOUNT ON`

Remedies:
- Reduce result set size
- Add `WHERE` clauses
- Select only needed columns
- Test from a server closer to SQL Server

Client ≈ Server: Network and client-side processing are not bottlenecks.
## Progressive Load Testing

Test with increasing concurrency to find breaking points.

### Analyzing Scalability

Plot metrics vs. thread count:

| Threads | Queries/Sec | Avg Time (sec) | Avg Reads | Exceptions |
|---|---|---|---|---|
| 1 | 8.5 | 0.118 | 245 | 0 |
| 5 | 42.0 | 0.119 | 245 | 0 |
| 10 | 83.5 | 0.120 | 245 | 0 |
| 25 | 185.0 | 0.135 | 245 | 0 |
| 50 | 295.0 | 0.170 | 245 | 2 |
| 100 | 380.0 | 0.263 | 245 | 15 |
- Linear scaling (1-10 threads): Throughput increases proportionally, time stays constant
- Degrading scaling (25-50 threads): Time increases, some errors appear
- Bottleneck (100 threads): Throughput levels off, time increases significantly, errors increase
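One way to quantify the scaling behavior in the table above is throughput per thread relative to the single-thread baseline (1.0 = perfectly linear). The helper below is illustrative, not part of the tool:

```python
def scaling_efficiency(threads: int, qps: float, baseline_qps: float) -> float:
    """Throughput relative to ideal linear scaling from a 1-thread baseline.
    1.0 means perfectly linear; lower values indicate contention."""
    return qps / (threads * baseline_qps)

# Values taken from the scalability table above (baseline: 8.5 qps at 1 thread)
for threads, qps in [(1, 8.5), (10, 83.5), (50, 295.0), (100, 380.0)]:
    print(threads, round(scaling_efficiency(threads, qps, 8.5), 2))
```

Efficiency near 1.0 through 10 threads, then falling toward 0.45 at 100 threads, matches the "bottleneck" interpretation above.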
## Common Patterns

### Pattern 1: Missing Index

Symptoms:
- High logical reads
- High CPU time
- Actual time increases with threads
### Pattern 2: Blocking/Locking

Symptoms:
- CPU time stays low
- Actual time increases dramatically with threads
- Active threads drop during test
If blocking is confirmed with `sp_WhoIsActive`, consider:
- Reducing transaction scope
- Using `NOLOCK` or `READ UNCOMMITTED` for reports
- Optimizing update queries to reduce lock duration
### Pattern 3: Parameter Sniffing

Symptoms:
- High variance in execution times
- Some iterations very fast, others very slow
- Different plans for different parameter values
Fix: Use `OPTION (RECOMPILE)` or optimize for typical parameter values.
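The "high variance" symptom can be spotted numerically with the coefficient of variation (standard deviation ÷ mean) of per-iteration times. The 0.5 threshold below is an illustrative choice, not a rule the tool applies:

```python
from statistics import mean, stdev

def high_variance(times: list[float], threshold: float = 0.5) -> bool:
    """Flag a run whose execution times vary widely relative to the mean."""
    return stdev(times) / mean(times) > threshold

stable = [0.11, 0.12, 0.10, 0.11, 0.12]   # consistent timings
sniffed = [0.01, 0.02, 1.50, 0.01, 1.40]  # some fast, some very slow
print(high_variance(stable), high_variance(sniffed))
```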
### Pattern 4: Network Bottleneck

Symptoms:
- Client time >> Server time
- Time increases with thread count
- Low CPU and reasonable reads

Remedies:
- Reduce result set size
- Test from server closer to SQL Server
- Compress result sets if supported
## Best Practices
- Run Tests Multiple Times: Results can vary. Run 3-5 times and average.
- Clear Caches Between Tests: Use “Clear Buffers” and “Free Cache” for consistent results.
- Test Realistic Loads: Use thread counts that match expected production concurrency.
- Monitor SQL Server: Watch Performance Monitor, Extended Events, or Query Store during tests.
- Document Everything: Save queries, indexes, settings, and results for future comparison.
- Test Edge Cases: Include unusual parameter values that might trigger different plans.
## Next Steps

- Using the GUI: Learn how to view results in the GUI application
- Using the CLI: Export results from CLI tests for analysis
- Query Parameterization: Test with varying parameters to find performance issues
- Connection Settings: Configure settings that affect metrics collection