Overview
This guide will walk you through creating and running your first k6 load test. You’ll learn the basics of k6 scripting, test execution, and results interpretation.
Make sure you have k6 installed before proceeding with this guide.
Your first test
Let’s start with the simplest possible k6 test:
Create a test script
Create a new file called test.js with the following content:

import http from 'k6/http';

export default function () {
  http.get('https://quickpizza.grafana.com');
}
This script imports the HTTP module and makes a single GET request to a test website.
Run the test
Execute your test from the command line:

k6 run test.js

k6 will run the test with a single virtual user for one iteration.
View the results
You’ll see output showing various metrics including:
Request duration
Response time percentiles
Data sent and received
HTTP success rate
By default, k6 runs with 1 virtual user for 1 iteration. You can change this behavior using command-line flags or test options.
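For example, assuming k6 is installed and your script is saved as test.js, the --vus and --duration flags override the defaults without editing the script:

```shell
# Run test.js with 10 virtual users for 30 seconds,
# instead of the default 1 VU and 1 iteration
k6 run --vus 10 --duration 30s test.js
```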
Adding load configuration
Now let’s make the test more realistic by adding multiple virtual users and a test duration. You can set these either in the script’s options object, as shown below, or with the equivalent CLI flags:
import http from "k6/http";
import { check, sleep } from "k6";

// Test configuration
export const options = {
  vus: 10,
  duration: '30s',
};

// Simulated user behavior
export default function () {
  let res = http.get("https://quickpizza.grafana.com");
  // Validate response status
  check(res, { "status was 200": (r) => r.status == 200 });
  sleep(1);
}
This test runs with 10 virtual users for 30 seconds, making requests with 1-second pauses between iterations.
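As a rough estimate, each VU loops continuously, and one iteration takes about the sleep time plus the response time. The response time below is an illustrative assumption, not a measured value:

```javascript
// Rough estimate of how many iterations a constant-VU test produces
const vus = 10;
const durationSeconds = 30;
const sleepSeconds = 1;
const assumedResponseSeconds = 0.5; // hypothetical average response time

const secondsPerIteration = sleepSeconds + assumedResponseSeconds;
const estimatedIterations = Math.floor(vus * (durationSeconds / secondsPerIteration));
console.log(estimatedIterations); // 200
```

Your actual iteration count will vary with real response times, but this back-of-the-envelope math helps you predict the request volume a test will generate.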
Using ramping stages
For more realistic load patterns, use stages to ramp traffic up and down:
import http from "k6/http";
import { check } from "k6";

export const options = {
  stages: [
    // Ramp up from 1 to 5 VUs over 10s
    { duration: "10s", target: 5 },
    // Stay at 5 VUs for 5s
    { duration: "5s", target: 5 },
    // Ramp down from 5 to 0 VUs over 5s
    { duration: "5s", target: 0 },
  ],
};

export default function () {
  let res = http.get("http://httpbin.org/");
  check(res, { "status is 200": (r) => r.status === 200 });
}
Stages allow you to simulate realistic traffic patterns like gradual user onboarding, steady-state usage, and graceful shutdown.
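The overall shape of a staged test can be summarized from the stages array itself: the total test length is the sum of the stage durations, and the peak load is the largest target. The duration parsing below is a simple sketch that only handles the "Ns" seconds format used above:

```javascript
// Summarize a k6 stages array: total length and peak VUs
const stages = [
  { duration: "10s", target: 5 },
  { duration: "5s", target: 5 },
  { duration: "5s", target: 0 },
];

const seconds = (d) => Number(d.replace("s", ""));
const totalSeconds = stages.reduce((sum, s) => sum + seconds(s.duration), 0);
const peakVUs = Math.max(...stages.map((s) => s.target));

console.log(totalSeconds, peakVUs); // 20 5
```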
Adding thresholds
Thresholds let you define pass/fail criteria for your tests:
import http from "k6/http";
import { check } from "k6";

export const options = {
  thresholds: {
    // Assert that 99% of requests finish within 3000ms
    http_req_duration: ["p(99) < 3000"],
    // Assert that fewer than 5% of requests fail
    http_req_failed: ["rate<0.05"],
  },
  stages: [
    { duration: "30s", target: 15 },
    { duration: "1m", target: 15 },
    { duration: "20s", target: 0 },
  ],
};

export default function () {
  let res = http.get("https://quickpizza.grafana.com");
  check(res, { "status was 200": (r) => r.status == 200 });
}
If any threshold fails, k6 exits with a non-zero exit code. This makes thresholds perfect for CI/CD integration where you want tests to fail the build if performance degrades.
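Conceptually, a threshold like "p(99) < 3000" takes the 99th percentile of the observed request durations and compares it to the limit. The nearest-rank percentile below is a simplified sketch, not k6's exact implementation, and the durations are illustrative numbers in milliseconds:

```javascript
// Sketch of evaluating a "p(99) < 3000" threshold over sample durations
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const durations = [120, 150, 180, 200, 250, 400, 800, 1200, 2000, 2500];
const p99 = percentile(durations, 99);
const thresholdPassed = p99 < 3000;
console.log(p99, thresholdPassed); // 2500 true
```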
Working with multiple requests
Test multiple endpoints and use checks to validate responses:
import http from "k6/http";
import { check, group } from "k6";

export default function () {
  // GET request
  group("GET", function () {
    let res = http.get("http://httpbin.org/get?verb=get");
    check(res, {
      "status is 200": (r) => r.status === 200,
      "is verb correct": (r) => r.json().args.verb === "get",
    });
  });

  // POST request
  group("POST", function () {
    let res = http.post("http://httpbin.org/post", { verb: "post" });
    check(res, {
      "status is 200": (r) => r.status === 200,
      "is verb correct": (r) => r.json().form.verb === "post",
    });
  });
}
Use http.batch() to send multiple requests in parallel:
import { check } from 'k6';
import http from 'k6/http';

export default function () {
  const responses = http.batch([
    "https://quickpizza.grafana.com/test.k6.io",
    "https://quickpizza.grafana.com/pi.php",
  ]);
  check(responses[0], {
    "main page 200": (res) => res.status === 200,
  });
  check(responses[1], {
    "pi page 200": (res) => res.status === 200,
    "pi page has right content": (res) => res.body === "3.14",
  });
}
Batch requests are much more efficient than sequential requests when you need to test multiple endpoints simultaneously.
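The reason is simple arithmetic: sequential requests take roughly the sum of their latencies, while parallel (batched) requests take roughly the slowest one. The latencies below are illustrative numbers in milliseconds:

```javascript
// Sequential vs. parallel request time for the same set of calls
const latencies = [120, 180, 90];
const sequentialMs = latencies.reduce((a, b) => a + b, 0); // one after another
const parallelMs = Math.max(...latencies);                 // all at once
console.log(sequentialMs, parallelMs); // 390 180
```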
Working with JSON data
Send and parse JSON data in your requests:
import http from "k6/http";
import { check } from "k6";

export default function () {
  // Send a JSON encoded POST request
  let body = JSON.stringify({ key: "value" });
  let res = http.post(
    "http://httpbin.org/post",
    body,
    { headers: { "Content-Type": "application/json" } }
  );

  // Use JSON.parse to deserialize the JSON
  let j = JSON.parse(res.body);

  // Verify response
  check(res, {
    "status is 200": (r) => r.status === 200,
    "is key correct": (r) => j.json.key === "value",
  });
}
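The round trip above boils down to a JSON encode/decode pair, which can be sketched without k6. The nested j.json.key access works because httpbin echoes the decoded request body back under a "json" field:

```javascript
// JSON encode the request body, then decode the echoed response
const body = JSON.stringify({ key: "value" });
// httpbin's response wraps the decoded body in a "json" field
const echoed = { json: JSON.parse(body) };
console.log(echoed.json.key); // "value"
```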
Using custom metrics
Track custom metrics specific to your application:
import http from "k6/http";
import { Counter, Gauge, Rate, Trend } from "k6/metrics";
import { check } from "k6";

let myCounter = new Counter("my_counter");
let myGauge = new Gauge("my_gauge");
let myRate = new Rate("my_rate");
let myTrend = new Trend("my_trend");

let maxResponseTime = 0.0;

export default function () {
  let res = http.get("http://httpbin.org/");
  let passed = check(res, { "status is 200": (r) => r.status === 200 });

  // Add one for number of requests
  myCounter.add(1);

  // Track the maximum response time seen so far
  maxResponseTime = Math.max(maxResponseTime, res.timings.duration);
  myGauge.add(maxResponseTime);

  // Record check success or failure to keep track of the rate
  myRate.add(passed);

  // Keep track of TCP-connecting and TLS handshaking time
  myTrend.add(res.timings.connecting + res.timings.tls_handshaking);
}
k6 supports four types of custom metrics:
Counter: Sum of all added values
Gauge: Latest value set
Rate: Percentage of non-zero values
Trend: Collects values and computes statistical aggregations (min, max, average, percentiles)
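The four aggregation semantics can be sketched in plain JavaScript (this is not k6's internal implementation, just the math each type performs over the values you add):

```javascript
// Aggregation behavior of the four custom metric types
const samples = [1, 1, 0, 1, 1]; // e.g. check results (1 = pass, 0 = fail)

const counter = samples.reduce((a, b) => a + b, 0);                   // Counter: sum
const gauge = samples[samples.length - 1];                            // Gauge: last value
const rate = samples.filter((v) => v !== 0).length / samples.length;  // Rate: non-zero fraction
const trendAvg = samples.reduce((a, b) => a + b, 0) / samples.length; // Trend: stats, e.g. avg

console.log(counter, gauge, rate, trendAvg); // 4 1 0.8 0.8
```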
Understanding test results
After running a test, k6 displays detailed metrics:
data_received..................: 148 kB 4.9 kB/s
data_sent......................: 9.9 kB 330 B/s
http_req_blocked...............: avg=1.06ms min=1µs med=3µs max=101.58ms p(90)=5µs p(95)=6µs
http_req_connecting............: avg=537.86µs min=0s med=0s max=50.8ms p(90)=0s p(95)=0s
http_req_duration..............: avg=136.71ms min=104.3ms med=130.09ms max=199.42ms p(90)=171.55ms p(95)=184.96ms
http_req_failed................: 0.00% ✓ 0 ✗ 150
http_req_receiving.............: avg=209.41µs min=31µs med=139µs max=3.03ms p(90)=402.3µs p(95)=524.27µs
http_req_sending...............: avg=17.86µs min=5µs med=14µs max=148µs p(90)=28µs p(95)=35µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=136.48ms min=104.08ms med=129.89ms max=199.24ms p(90)=171.27ms p(95)=184.7ms
http_reqs......................: 150 5.0/s
iteration_duration.............: avg=1.13s min=1.1s med=1.13s max=1.3s p(90)=1.17s p(95)=1.18s
iterations.....................: 150 5.0/s
vus............................: 5 min=5 max=5
vus_max........................: 5 min=5 max=5
Response times: Look at http_req_duration percentiles (p90, p95, p99) to understand user experience.
Error rate: Check http_req_failed to see the percentage of failed requests.
Throughput: http_reqs shows requests per second your system can handle.
Data transfer: data_received and data_sent track network usage.
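These figures can be cross-checked against each other. For instance, the sample output above shows 150 requests at 5.0/s, which implies a roughly 30-second run:

```javascript
// Sanity-check the throughput line from the sample output
const totalRequests = 150;
const testDurationSeconds = 30; // inferred from 150 requests at 5.0/s
const requestsPerSecond = totalRequests / testDurationSeconds;
console.log(requestsPerSecond); // 5
```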
Next steps
Test types: Learn about different load testing patterns: smoke, load, stress, and spike tests.
HTTP requests: Deep dive into HTTP methods, headers, authentication, and more.
Scenarios: Advanced workload modeling with open/closed models and executors.
Results output: Export metrics to InfluxDB, Prometheus, Grafana Cloud, and more.
Common testing patterns
Smoke test: minimal load to verify system functionality.

export const options = {
  vus: 1,
  duration: '1m',
};
Load test: normal expected load.

export const options = {
  stages: [
    { duration: '5m', target: 100 },
    { duration: '10m', target: 100 },
    { duration: '5m', target: 0 },
  ],
};
Stress test: push beyond normal capacity.

export const options = {
  stages: [
    { duration: '2m', target: 100 },
    { duration: '5m', target: 100 },
    { duration: '2m', target: 200 },
    { duration: '5m', target: 200 },
    { duration: '2m', target: 300 },
    { duration: '5m', target: 300 },
    { duration: '10m', target: 0 },
  ],
};
Spike test: a sudden traffic surge.

export const options = {
  stages: [
    { duration: '10s', target: 100 },
    { duration: '1m', target: 100 },
    { duration: '10s', target: 1400 },
    { duration: '3m', target: 1400 },
    { duration: '10s', target: 100 },
    { duration: '3m', target: 100 },
    { duration: '10s', target: 0 },
  ],
};
Start with smoke tests to validate your scripts, then gradually increase load to understand your system’s behavior.