k6 provides comprehensive support for testing gRPC services through the k6/net/grpc module, including unary calls, client streaming, server streaming, bidirectional streaming, and server reflection.
Getting Started
The gRPC client requires loading protocol buffer definitions before making requests.
import grpc from 'k6/net/grpc';
import { check } from 'k6';

const client = new grpc.Client();

// Load proto file in init context
client.load([], 'route_guide.proto');

export default () => {
  client.connect('127.0.0.1:10000', { plaintext: true });

  const response = client.invoke('main.FeatureExplorer/GetFeature', {
    latitude: 410248224,
    longitude: -747127767,
  });

  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });

  console.log(JSON.stringify(response.message));
  client.close();
}
The load() method must be called in the init context (outside the default function). The reflect option allows the client to discover services without proto files.
Loading Protocol Buffers
Load .proto files with optional import paths:

import grpc from 'k6/net/grpc';

const client = new grpc.Client();

// Load with import paths
client.load(['./protos', './vendor'], 'service.proto');

// Load multiple files
client.load([], 'service1.proto', 'service2.proto');
Load compiled protoset files (serialized FileDescriptorSet):

import grpc from 'k6/net/grpc';

const client = new grpc.Client();

// Load protoset
client.loadProtoset('compiled.protoset');

Generate protoset files using protoc:

protoc --descriptor_set_out=compiled.protoset \
  --include_imports \
  service.proto
Use server reflection to discover services dynamically:

import grpc from 'k6/net/grpc';

const client = new grpc.Client();

export default () => {
  // Enable reflection in connect options
  client.connect('localhost:50051', {
    plaintext: true,
    reflect: true,
  });

  // No load() needed - services are discovered automatically
  const response = client.invoke('package.Service/Method', {});
}
Connection Options
Configure gRPC connections with various options:
import grpc from 'k6/net/grpc';

const client = new grpc.Client();

// open() is only available in the init context, so read TLS files here
const clientCert = open('client-cert.pem');
const clientKey = open('client-key.pem');
const caCert = open('ca-cert.pem');

export default () => {
  client.connect('api.example.com:443', {
    // Connection security
    plaintext: false, // false (the default) means TLS is used

    // Server reflection
    reflect: true, // Enable reflection (default: false)

    // Timeouts
    timeout: '30s', // Connection timeout

    // TLS configuration
    tls: {
      cert: clientCert,
      key: clientKey,
      cacerts: caCert,
    },

    // Max message sizes
    maxReceiveSize: 4 * 1024 * 1024, // 4 MB
    maxSendSize: 4 * 1024 * 1024, // 4 MB
  });
}
Unary Calls
Make single request-response RPC calls:
import grpc from 'k6/net/grpc';
import { check } from 'k6';

const client = new grpc.Client();
client.load([], 'service.proto');

export default () => {
  client.connect('localhost:50051', { plaintext: true });

  const request = {
    latitude: 410248224,
    longitude: -747127767,
  };

  const response = client.invoke('main.FeatureExplorer/GetFeature', request, {
    metadata: {
      authorization: 'Bearer token123',
      'x-custom-header': 'value',
    },
    tags: { endpoint: 'GetFeature' },
    timeout: '5s',
  });

  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
    'has feature name': (r) => r && r.message && r.message.name !== '',
  });

  console.log(`Feature: ${response.message.name}`);
  client.close();
}
Server Streaming
Receive multiple messages from the server:
import { Client, Stream } from 'k6/net/grpc';
import { sleep } from 'k6';

const COORD_FACTOR = 1e7;

const client = new Client();
client.load([], 'route_guide.proto');

export default () => {
  client.connect('127.0.0.1:10000', { plaintext: true });

  const stream = new Stream(client, 'main.FeatureExplorer/ListFeatures', null);

  stream.on('data', function (feature) {
    console.log(
      `Found feature: "${feature.name}" at ` +
        `${feature.location.latitude / COORD_FACTOR}, ` +
        `${feature.location.longitude / COORD_FACTOR}`
    );
  });

  stream.on('end', function () {
    console.log('All features received');
    client.close();
  });

  stream.on('error', function (e) {
    console.log('Stream error:', JSON.stringify(e));
  });

  // Send request to start streaming
  stream.write({
    lo: {
      latitude: 400000000,
      longitude: -750000000,
    },
    hi: {
      latitude: 420000000,
      longitude: -730000000,
    },
  });

  sleep(0.5);
}
Client Streaming
Send multiple messages to the server:
import { Client, Stream } from 'k6/net/grpc';
import { sleep } from 'k6';

const client = new Client();
client.load([], 'route_guide.proto');

const locations = [
  { latitude: 407838351, longitude: -746143763 },
  { latitude: 408122808, longitude: -743999179 },
  { latitude: 413628156, longitude: -749015468 },
  { latitude: 419999544, longitude: -740371136 },
  { latitude: 414008389, longitude: -743951297 },
];

export default () => {
  // Connect each iteration, since the stream's end handler closes the client
  client.connect('127.0.0.1:10000', { plaintext: true });

  const stream = new Stream(client, 'main.RouteGuide/RecordRoute');

  stream.on('data', (stats) => {
    console.log('Trip summary:');
    console.log('  Points:', stats.pointCount);
    console.log('  Features:', stats.featureCount);
    console.log('  Distance:', stats.distance, 'meters');
    console.log('  Duration:', stats.elapsedTime, 'seconds');
  });

  stream.on('error', (err) => {
    console.error('Stream error:', JSON.stringify(err));
  });

  stream.on('end', () => {
    console.log('Trip complete');
    client.close();
  });

  // Send multiple locations
  for (let i = 0; i < 5; i++) {
    const point = locations[Math.floor(Math.random() * locations.length)];
    console.log(`Sending point ${i + 1}:`, JSON.stringify(point));
    stream.write(point);
    sleep(0.1);
  }

  // Signal end of client messages
  stream.end();
}
Bidirectional Streaming
Send and receive messages simultaneously:
import { Client, Stream } from 'k6/net/grpc';
import { sleep } from 'k6';

const client = new Client();
client.load([], 'chat.proto');

export default () => {
  client.connect('localhost:50051', { plaintext: true });

  const stream = new Stream(client, 'chat.ChatService/StreamChat');
  let messageCount = 0;

  stream.on('data', (message) => {
    console.log(`Received: ${message.text}`);
    messageCount++;
  });

  stream.on('end', () => {
    console.log(`Chat ended after ${messageCount} messages`);
    client.close();
  });

  stream.on('error', (e) => {
    console.error('Chat error:', JSON.stringify(e));
  });

  // Send messages periodically
  for (let i = 0; i < 5; i++) {
    stream.write({
      user: `VU-${__VU}`,
      text: `Message ${i + 1}`,
      timestamp: Date.now(),
    });
    sleep(1);
  }

  // Signal that the client is done sending
  stream.end();
  sleep(1);
}
Metadata
Send metadata (headers) with gRPC requests:

import grpc from 'k6/net/grpc';

const client = new grpc.Client();
client.load([], 'service.proto');

export default () => {
  client.connect('localhost:50051', { plaintext: true });

  const params = {
    metadata: {
      authorization: 'Bearer eyJhbGc...',
      'x-request-id': `req-${__VU}-${__ITER}`,
      'x-api-version': 'v1',
    },
  };

  const response = client.invoke('service.API/Method', { key: 'value' }, params);

  // Access response metadata via the headers and trailers properties
  console.log('Response headers:', JSON.stringify(response.headers));
  console.log('Response trailers:', JSON.stringify(response.trailers));

  client.close();
}
Status Codes
Check gRPC status codes in responses:
import grpc from 'k6/net/grpc';
import { check } from 'k6';

const client = new grpc.Client();
client.load([], 'service.proto');

export default () => {
  client.connect('localhost:50051', { plaintext: true });

  const response = client.invoke('service.API/Method', {});

  check(response, {
    'OK': (r) => r.status === grpc.StatusOK,
    'not cancelled': (r) => r.status !== grpc.StatusCanceled,
    'not unavailable': (r) => r.status !== grpc.StatusUnavailable,
  });

  // Handle specific errors
  if (response.status === grpc.StatusUnauthenticated) {
    console.error('Authentication failed');
  }

  client.close();
}
Available Status Codes
grpc.StatusOK - Success (0)
grpc.StatusCanceled - Operation cancelled (1)
grpc.StatusUnknown - Unknown error (2)
grpc.StatusInvalidArgument - Invalid argument (3)
grpc.StatusDeadlineExceeded - Deadline exceeded (4)
grpc.StatusNotFound - Not found (5)
grpc.StatusAlreadyExists - Already exists (6)
grpc.StatusPermissionDenied - Permission denied (7)
grpc.StatusResourceExhausted - Resource exhausted (8)
grpc.StatusFailedPrecondition - Failed precondition (9)
grpc.StatusAborted - Aborted (10)
grpc.StatusOutOfRange - Out of range (11)
grpc.StatusUnimplemented - Unimplemented (12)
grpc.StatusInternal - Internal error (13)
grpc.StatusUnavailable - Unavailable (14)
grpc.StatusDataLoss - Data loss (15)
grpc.StatusUnauthenticated - Unauthenticated (16)
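Since these constants are plain numbers, they are easy to build logging helpers around. As a small sketch (plain JavaScript, independent of k6; the STATUS_NAMES table and describeStatus helper are illustrative, not part of the k6 API), a lookup table can turn a numeric code into a readable label:

```javascript
// Map of the standard gRPC numeric status codes to their names.
const STATUS_NAMES = {
  0: 'OK',
  1: 'CANCELED',
  2: 'UNKNOWN',
  3: 'INVALID_ARGUMENT',
  4: 'DEADLINE_EXCEEDED',
  5: 'NOT_FOUND',
  6: 'ALREADY_EXISTS',
  7: 'PERMISSION_DENIED',
  8: 'RESOURCE_EXHAUSTED',
  9: 'FAILED_PRECONDITION',
  10: 'ABORTED',
  11: 'OUT_OF_RANGE',
  12: 'UNIMPLEMENTED',
  13: 'INTERNAL',
  14: 'UNAVAILABLE',
  15: 'DATA_LOSS',
  16: 'UNAUTHENTICATED',
};

// Translate a numeric status code into a readable label for log output.
function describeStatus(code) {
  return STATUS_NAMES[code] || `UNKNOWN_CODE(${code})`;
}

console.log(describeStatus(14)); // UNAVAILABLE
```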
Response Object
The response object contains:
status - gRPC status code
message - Response message (parsed protobuf)
headers - Response metadata headers
trailers - Response trailers
error - Error information (if any)
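A response with these fields can be checked defensively before its message is used. The sketch below uses a hand-built mock object in place of a real client.invoke() result, just to illustrate the shape (the field values are invented):

```javascript
// Mock of a k6 gRPC response object; the values are invented for illustration.
const response = {
  status: 0, // grpc.StatusOK
  message: { name: 'Berkshire Valley' },
  headers: { 'content-type': ['application/grpc'] },
  trailers: {},
  error: null,
};

// Defensive accessor: only touch message fields when the call succeeded.
function featureName(res) {
  if (!res || res.status !== 0 || !res.message) {
    return null;
  }
  return res.message.name;
}

console.log(featureName(response)); // Berkshire Valley
console.log(featureName({ status: 14, error: { message: 'unavailable' } })); // null
```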
Load Testing Example
import grpc from 'k6/net/grpc';
import { check } from 'k6';
import { Rate, Trend } from 'k6/metrics';

const client = new grpc.Client();
client.load([], 'service.proto');

const requestDuration = new Trend('grpc_request_duration');
// A Rate metric tracks the fraction of successful requests, which is what
// a "rate>0.95" threshold measures (a Counter's rate is events per second)
const successRate = new Rate('grpc_success_rate');

export const options = {
  stages: [
    { duration: '1m', target: 50 },
    { duration: '3m', target: 100 },
    { duration: '1m', target: 0 },
  ],
  thresholds: {
    grpc_request_duration: ['p(95)<500'],
    grpc_success_rate: ['rate>0.95'],
  },
};

export default () => {
  client.connect('api.example.com:443', {
    plaintext: false,
    timeout: '10s',
  });

  const start = Date.now();
  const response = client.invoke(
    'api.Service/Method',
    { userId: __VU, requestId: __ITER },
    {
      metadata: { authorization: 'Bearer token' },
      tags: { endpoint: 'Method' },
    }
  );
  requestDuration.add(Date.now() - start);

  const success = check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });
  successRate.add(success);

  client.close();
}
Always call client.close() to properly clean up gRPC connections. Failure to close connections may lead to resource leaks.
Metrics
k6 automatically collects gRPC-specific metrics:
grpc_req_duration - Request duration
grpc_streams - Number of streams
grpc_streams_msgs_received - Messages received per stream
grpc_streams_msgs_sent - Messages sent per stream
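These built-in metrics can drive thresholds directly, without defining custom metrics. A sketch of an options block (the threshold values here are arbitrary examples):

```javascript
export const options = {
  thresholds: {
    // 95% of gRPC requests must complete in under 500 ms
    grpc_req_duration: ['p(95)<500'],
    // At least 100 stream messages must be received over the whole run
    grpc_streams_msgs_received: ['count>100'],
  },
};
```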
Best Practices
Load proto files in init context to avoid recompilation on each iteration
Reuse client connections across iterations when possible
Always close clients to prevent resource leaks
Use reflection for development but prefer proto files for production
Handle all status codes for robust error handling
Set appropriate timeouts to prevent hanging requests
Use metadata for authentication tokens and request tracking
Monitor stream lifecycle events for debugging
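The connection-reuse practice above can be sketched in plain JavaScript. The client object here is a small stub standing in for grpc.Client so the pattern runs outside k6; in a real script the same guard would wrap client.connect() at the top of the default function:

```javascript
// Stub client that counts connect() calls, standing in for grpc.Client.
const client = {
  connections: 0,
  connect() {
    this.connections++;
  },
};

let connected = false;

// One VU iteration: connect lazily on first use, then reuse the connection.
function iteration() {
  if (!connected) {
    client.connect();
    connected = true;
  }
  // ... invoke RPCs on the shared connection here ...
}

iteration();
iteration();
iteration();
console.log(client.connections); // 1
```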