Overview
Osmium Chat Protocol uses a request/response pattern combined with real-time updates. This hybrid approach lets clients make explicit requests while also receiving asynchronous notifications as events happen.
Message Flow Architecture
Responses may arrive out of order. The server doesn't guarantee that response order matches request order. Always use req_id to match responses to requests.
Request ID Management
Client Request IDs
Every ClientMessage includes an id field that uniquely identifies the request:
message ClientMessage {
  uint32 id = 1; // Client-assigned request ID
  oneof message { /* ... */ }
}
The simplest strategy is a per-connection sequential counter:
class ProtocolClient {
  constructor() {
    this.nextRequestId = 1;
  }

  sendRequest(message) {
    const id = this.nextRequestId++;
    return this.send({ id, ...message });
  }
}
Pros: Simple, predictable, easy to debug
Cons: Resets on reconnection, may conflict with other clients sharing the same connection
Alternatively, a random ID avoids reuse across reconnections:
class ProtocolClient {
  sendRequest(message) {
    const id = Math.floor(Math.random() * 0xFFFFFFFF);
    return this.send({ id, ...message });
  }
}
Pros: No collision risk across reconnections
Cons: Harder to debug, tiny collision chance
A timestamp-based ID combines time and a counter:
class ProtocolClient {
  constructor() {
    this.requestCounter = 0;
  }

  sendRequest(message) {
    // Pack the lower 20 bits of the timestamp into the upper bits,
    // and the counter into the lower 12 bits
    const timestamp = Date.now() & 0xFFFFF;
    const counter = (this.requestCounter++) & 0xFFF;
    const id = ((timestamp << 12) | counter) >>> 0; // keep unsigned for uint32
    return this.send({ id, ...message });
  }
}
Pros: Unique across reconnections, sortable by time
Cons: More complex, still possible collision on rapid requests
Server Message IDs
The server assigns its own sequential IDs to each ServerMessage:
message ServerMessage {
  uint32 id = 1; // Server-assigned message ID
  oneof message { /* ... */ }
}
These IDs are used for:
Ordering: Messages can be ordered by ID
Gap detection: Missing IDs indicate dropped messages
Acknowledgment: Some protocols require acknowledging received message IDs
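Since server IDs are sequential, gap detection can be done with a small tracker on the client. This is a minimal sketch; the `ServerMessageTracker` name is illustrative, not part of the protocol.

```javascript
// Tracks the last server message ID seen and reports any skipped IDs.
class ServerMessageTracker {
  constructor() {
    this.lastSeenId = 0;
  }

  // Returns an array of missing IDs (empty when there is no gap).
  observe(serverId) {
    const missing = [];
    if (this.lastSeenId !== 0 && serverId > this.lastSeenId + 1) {
      for (let id = this.lastSeenId + 1; id < serverId; id++) {
        missing.push(id);
      }
    }
    if (serverId > this.lastSeenId) this.lastSeenId = serverId;
    return missing;
  }
}
```

A client would call `observe(serverMessage.id)` on every incoming message and, on a non-empty result, request a resync or replay depending on what the protocol offers.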
Response Matching
When the server responds to a request, it includes the original request ID:
message RPCResult {
  uint32 req_id = 1; // References ClientMessage.id
  oneof result { /* ... */ }
}
Implementation Pattern
class ProtocolClient {
  constructor() {
    this.pendingRequests = new Map();
    this.nextRequestId = 1;
  }

  async sendRequest(message) {
    const id = this.nextRequestId++;
    return new Promise((resolve, reject) => {
      // Store the pending request
      this.pendingRequests.set(id, { resolve, reject, sentAt: Date.now() });

      // Send the message
      this.send({ id, ...message });

      // Set timeout
      setTimeout(() => {
        if (this.pendingRequests.has(id)) {
          this.pendingRequests.delete(id);
          reject(new Error('Request timeout'));
        }
      }, 30000); // 30 second timeout
    });
  }

  handleServerMessage(serverMessage) {
    if (serverMessage.result) {
      const { req_id } = serverMessage.result;
      const pending = this.pendingRequests.get(req_id);
      if (pending) {
        this.pendingRequests.delete(req_id);
        if (serverMessage.result.error) {
          pending.reject(serverMessage.result.error);
        } else {
          pending.resolve(serverMessage.result);
        }
      }
    } else if (serverMessage.update) {
      // Handle real-time update separately
      this.handleUpdate(serverMessage.update);
    }
  }
}
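To see the matching flow end to end, here is a hedged sketch that pairs a condensed version of this pattern with a loopback transport. The loopback "server" deliberately answers out of order; `MiniClient`, `makeLoopback`, and the echoed payload are all invented for illustration.

```javascript
// Minimal request/response matcher, condensed from the pattern above.
class MiniClient {
  constructor(transport) {
    this.transport = transport;
    this.pendingRequests = new Map();
    this.nextRequestId = 1;
    transport.onMessage = (msg) => this.handleServerMessage(msg);
  }

  sendRequest(message) {
    const id = this.nextRequestId++;
    return new Promise((resolve, reject) => {
      this.pendingRequests.set(id, { resolve, reject });
      this.transport.send({ id, ...message });
    });
  }

  handleServerMessage(serverMessage) {
    if (serverMessage.result) {
      const pending = this.pendingRequests.get(serverMessage.result.req_id);
      if (pending) {
        this.pendingRequests.delete(serverMessage.result.req_id);
        pending.resolve(serverMessage.result);
      }
    }
  }
}

// Loopback transport: echoes every request back as a result, but replies
// in reverse order of arrival to show that req_id matching still works.
function makeLoopback() {
  const transport = { queue: [], onMessage: null };
  transport.send = (msg) => {
    transport.queue.push(msg);
    if (transport.queue.length === 2) {
      for (const m of transport.queue.reverse()) {
        transport.onMessage({ result: { req_id: m.id, echo: m.payload } });
      }
    }
  };
  return transport;
}
```

Even though the second response arrives first, each promise resolves with the result for its own request, because matching goes through req_id rather than arrival order.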
Error Handling
Error Response Structure
When an RPC call fails, the server returns an RPCError:
message RPCError {
  uint32 error_code = 1;
  string error_message = 2;
}
Common Error Codes
400: Invalid request format or missing required fields
401: Authentication required or invalid session
403: Authenticated but lacking permissions for this operation
404: Requested resource doesn't exist (channel, message, user, etc.)
409: Operation conflicts with current state (e.g., username taken)
429: Too many requests, client should back off
500: Server-side error occurred
Error Handling Pattern
try {
  const result = await client.sendRequest({
    messages_send_message: {
      chat_ref: { channel: { community_id: 123, channel_id: 456 } },
      message: "Hello!"
    }
  });
  console.log('Message sent:', result.sent_message);
} catch (error) {
  if (error.error_code) {
    // RPCError from server
    switch (error.error_code) {
      case 403:
        console.error('No permission to send messages');
        break;
      case 429:
        console.error('Rate limited, retry after delay');
        break;
      case 404:
        console.error('Channel not found');
        break;
      default:
        console.error(`Server error: ${error.error_message}`);
    }
  } else {
    // Network or timeout error
    console.error('Request failed:', error.message);
  }
}
Request Lifecycle States
A request moves through pending, then either completed, retry (back to pending), failed, or cancelled.
Complete Lifecycle Implementation
class Request {
  constructor(id, message, options = {}) {
    this.id = id;
    this.message = message;
    this.state = 'pending';
    this.attempts = 0;
    this.maxAttempts = options.maxAttempts || 3;
    this.timeout = options.timeout || 30000;
    this.retryDelay = options.retryDelay || 1000;
  }

  async execute(client) {
    this.attempts++;
    this.state = 'pending';
    try {
      const result = await client.sendRequestWithTimeout(this, this.timeout);
      this.state = 'completed';
      return result;
    } catch (error) {
      if (error.message === 'Request timeout' && this.attempts < this.maxAttempts) {
        this.state = 'retry';
        await new Promise(resolve => setTimeout(resolve, this.retryDelay * this.attempts));
        return this.execute(client);
      } else {
        this.state = 'failed';
        throw error;
      }
    }
  }

  cancel() {
    this.state = 'cancelled';
  }
}
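To exercise the retry path, a condensed copy of this class can be driven by a stub client whose first call times out. `FlakyClient` is invented for illustration; a real client would implement `sendRequestWithTimeout` over the connection.

```javascript
// Condensed copy of the lifecycle class above.
class Request {
  constructor(id, message, options = {}) {
    this.id = id;
    this.state = 'pending';
    this.attempts = 0;
    this.maxAttempts = options.maxAttempts || 3;
    this.timeout = options.timeout || 30000;
    this.retryDelay = options.retryDelay || 1000;
  }

  async execute(client) {
    this.attempts++;
    this.state = 'pending';
    try {
      const result = await client.sendRequestWithTimeout(this, this.timeout);
      this.state = 'completed';
      return result;
    } catch (error) {
      if (error.message === 'Request timeout' && this.attempts < this.maxAttempts) {
        this.state = 'retry';
        await new Promise(resolve => setTimeout(resolve, this.retryDelay * this.attempts));
        return this.execute(client);
      }
      this.state = 'failed';
      throw error;
    }
  }
}

// Stub client: the first call fails with a timeout, later calls succeed.
class FlakyClient {
  constructor() { this.calls = 0; }
  async sendRequestWithTimeout(request) {
    if (++this.calls === 1) throw new Error('Request timeout');
    return { ok: true, req_id: request.id };
  }
}
```

After `new Request(1, {}, { retryDelay: 1 }).execute(new FlakyClient())` resolves, the request has made two attempts and ends in the `completed` state.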
Concurrent Requests
Clients can send multiple requests without waiting for responses:
// Send multiple requests in parallel
const [history, members, channels] = await Promise.all([
  client.sendRequest({
    messages_get_history: {
      chat_ref: { channel: { community_id: 123, channel_id: 456 } },
      limit: 50
    }
  }),
  client.sendRequest({
    communities_get_channel_members: {
      community_id: 123,
      channel_id: 456
    }
  }),
  client.sendRequest({
    communities_get_channels: {
      community_id: 123
    }
  })
]);
The server does not guarantee response order. Even if you send requests A, B, C in that order, you might receive responses in order C, A, B. Always use req_id matching.
Message Flow Best Practices
Request Deduplication: Track pending requests to avoid sending duplicates. If a request is already pending, return the existing promise instead of sending a new request.
Timeout Strategy: Use reasonable timeouts (15-30 seconds) and implement retry logic for idempotent operations.
Backpressure Handling: If many requests are pending, consider queuing new requests instead of overwhelming the connection.
Error Recovery: Implement exponential backoff for retries and don't retry non-idempotent operations automatically.
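The deduplication practice can be sketched as a thin wrapper that shares one promise among identical in-flight requests. Everything here is an assumption for illustration: `DedupingClient` is not part of the protocol, and keying on `JSON.stringify(message)` may need a stabler key (key order matters) in a real client.

```javascript
// Identical in-flight requests share one promise instead of hitting
// the wire twice.
class DedupingClient {
  constructor(sendRequest) {
    this.sendRequest = sendRequest; // underlying transport function
    this.inFlight = new Map();      // key -> pending promise
  }

  request(message) {
    const key = JSON.stringify(message);
    if (this.inFlight.has(key)) {
      return this.inFlight.get(key); // reuse the pending promise
    }
    const promise = this.sendRequest(message).finally(() => {
      this.inFlight.delete(key);     // allow the next identical request
    });
    this.inFlight.set(key, promise);
    return promise;
  }
}
```

Two concurrent calls with the same message result in a single send; once the shared promise settles, the key is released so a later identical request goes out fresh.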
Advanced Patterns
Request Queuing with Priority
class PriorityQueue {
  constructor(client) {
    this.client = client;
    this.queues = {
      high: [],
      normal: [],
      low: []
    };
    this.processing = false;
  }

  async enqueue(request, priority = 'normal') {
    return new Promise((resolve, reject) => {
      this.queues[priority].push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;
    while (true) {
      const item = this.queues.high.shift() ||
                   this.queues.normal.shift() ||
                   this.queues.low.shift();
      if (!item) break;
      try {
        const result = await this.client.sendRequest(item.request);
        item.resolve(result);
      } catch (error) {
        item.reject(error);
      }
    }
    this.processing = false;
  }
}
Request Batching
class BatchingClient {
  constructor(client, batchWindow = 10) {
    this.client = client;
    this.batchWindow = batchWindow; // ms
    this.pendingBatch = [];
    this.batchTimer = null;
  }

  async batchableRequest(request) {
    return new Promise((resolve, reject) => {
      this.pendingBatch.push({ request, resolve, reject });
      if (!this.batchTimer) {
        this.batchTimer = setTimeout(() => this.flushBatch(), this.batchWindow);
      }
    });
  }

  async flushBatch() {
    const batch = this.pendingBatch.splice(0);
    this.batchTimer = null;

    // Send all requests in parallel
    const results = await Promise.allSettled(
      batch.map(({ request }) => this.client.sendRequest(request))
    );

    // Resolve/reject individual promises
    results.forEach((result, i) => {
      if (result.status === 'fulfilled') {
        batch[i].resolve(result.value);
      } else {
        batch[i].reject(result.reason);
      }
    });
  }
}
Client-Server Communication: Learn about the core message structures and RPC system
Real-time Updates: Understand how server-pushed updates work
Error Handling: Complete reference of error codes and handling strategies