NanoClaw Pro uses a poll-based message routing system with conversation context accumulation and trigger-based activation.
Overview
Incoming Message Processing
1. Channel Reception
Each channel calls onMessage when messages arrive:
// From src/index.ts:483-500
const channelOpts = {
  onMessage: (chatJid: string, msg: NewMessage) => {
    // Sender allowlist filtering
    if (!msg.is_from_me && !msg.is_bot_message && registeredGroups[chatJid]) {
      const cfg = loadSenderAllowlist();
      if (
        shouldDropMessage(chatJid, cfg) &&
        !isSenderAllowed(chatJid, msg.sender, cfg)
      ) {
        return; // Drop silently
      }
    }
    storeMessage(msg);
  },
  // ...
};
Message Storage:
Only registered groups get full message content stored
Other chats get metadata only (for group discovery)
Bot messages are filtered by is_bot_message flag
Location: src/index.ts:483-500
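The allowlist decision above can be sketched in isolation. The config shape and helper signatures here are assumptions for illustration; the real loadSenderAllowlist, shouldDropMessage, and isSenderAllowed may differ.

```typescript
// Hypothetical config shape: which chats filter senders, and who is allowed.
interface AllowlistConfig {
  enabledChats: string[]; // chats where sender filtering is active
  allowedSenders: Record<string, string[]>; // chatJid -> allowed sender JIDs
}

// Filtering only applies in chats that opted in.
function shouldDropMessage(chatJid: string, cfg: AllowlistConfig): boolean {
  return cfg.enabledChats.includes(chatJid);
}

function isSenderAllowed(chatJid: string, sender: string, cfg: AllowlistConfig): boolean {
  return (cfg.allowedSenders[chatJid] ?? []).includes(sender);
}

const cfg: AllowlistConfig = {
  enabledChats: ['group@g.us'],
  allowedSenders: { 'group@g.us': ['alice@s.whatsapp.net'] },
};

// Same compound check as in onMessage: drop only when the chat filters
// senders AND this sender is not on the list.
const dropBob =
  shouldDropMessage('group@g.us', cfg) &&
  !isSenderAllowed('group@g.us', 'bob@s.whatsapp.net', cfg);
const dropAlice =
  shouldDropMessage('group@g.us', cfg) &&
  !isSenderAllowed('group@g.us', 'alice@s.whatsapp.net', cfg);
```

Note the ordering: messages are dropped before storeMessage, so filtered senders never enter the database at all.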
2. Message Loop Polling
The message loop polls SQLite every 2 seconds:
// From src/index.ts:341-440
while (true) {
  const jids = Object.keys(registeredGroups);
  const { messages, newTimestamp } = getNewMessages(
    jids,
    lastTimestamp,
    ASSISTANT_NAME,
  );

  if (messages.length > 0) {
    // Advance cursor immediately to prevent reprocessing
    lastTimestamp = newTimestamp;
    saveState();

    // Process each group's messages
    // (messagesByGroup: the batch keyed by chat_jid; grouping elided here)
    for (const [chatJid, groupMessages] of messagesByGroup) {
      // ... trigger check and routing
    }
  }

  await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL));
}
Key behaviors:
Advances lastTimestamp cursor immediately (prevents duplicates on crash)
Queries only registered groups (others ignored)
Filters out bot messages (prevents feedback loops)
Groups messages by chat_jid for batch processing
Location: src/index.ts:341-440
The 2-second poll interval balances responsiveness with database load. Adjust via POLL_INTERVAL in src/config.ts.
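The "groups messages by chat_jid" step can be sketched as a plain in-memory pass over the polled batch (field names assumed from the surrounding snippets):

```typescript
// Minimal sketch: partition a flat polled batch by chat_jid before
// running per-group trigger checks.
interface NewMessage {
  chat_jid: string;
  sender: string;
  content: string;
  timestamp: string;
}

function groupByChat(messages: NewMessage[]): Map<string, NewMessage[]> {
  const byGroup = new Map<string, NewMessage[]>();
  for (const m of messages) {
    const list = byGroup.get(m.chat_jid) ?? [];
    list.push(m);
    byGroup.set(m.chat_jid, list);
  }
  return byGroup;
}

const grouped = groupByChat([
  { chat_jid: 'a@g.us', sender: 's1', content: 'hi', timestamp: '1' },
  { chat_jid: 'b@g.us', sender: 's2', content: 'yo', timestamp: '2' },
  { chat_jid: 'a@g.us', sender: 's3', content: 'hey', timestamp: '3' },
]);
```

A Map preserves insertion order, so groups are processed in the order their first message arrived within the batch.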
3. Trigger Word Matching
Messages must start with the trigger pattern to activate the agent:
// From src/config.ts:61-64
export const TRIGGER_PATTERN = new RegExp(
  `^@${escapeRegex(ASSISTANT_NAME)}\\b`,
  'i',
);
Examples:
Message                        Triggers?  Reason
"@Andy what's the weather?"    ✅         Starts with trigger
"@andy help me"                ✅         Case insensitive
"Hey @Andy"                    ❌         Trigger not at start
"What's up?"                   ❌         No trigger word
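The table above can be verified directly against the pattern. The escapeRegex helper here is a common implementation written for this sketch; the project's own helper may differ.

```typescript
// Escape regex metacharacters so an assistant name like "C++Bot" is
// matched literally rather than as a pattern.
function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

const ASSISTANT_NAME = 'Andy';
// Anchored at start, word boundary after the name, case-insensitive.
const TRIGGER_PATTERN = new RegExp(`^@${escapeRegex(ASSISTANT_NAME)}\\b`, 'i');

const results = [
  "@Andy what's the weather?",
  '@andy help me',
  'Hey @Andy',
  "What's up?",
].map((m) => TRIGGER_PATTERN.test(m.trim()));
```

The `\b` boundary also prevents partial matches: a message starting with "@Andyx" would not trigger.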
Main group exception:
The main group (self-chat) processes ALL messages without requiring a trigger:
// From src/index.ts:388-402
const isMainGroup = group.isMain === true;
const needsTrigger = !isMainGroup && group.requiresTrigger !== false;

if (needsTrigger) {
  const hasTrigger = groupMessages.some(
    (m) =>
      TRIGGER_PATTERN.test(m.content.trim()) &&
      (m.is_from_me || isTriggerAllowed(chatJid, m.sender, allowlistCfg)),
  );
  if (!hasTrigger) continue; // Skip non-trigger messages
}
Location: src/index.ts:388-402
4. Conversation Catch-Up
When a trigger arrives, the router fetches ALL messages since the last agent interaction:
// From src/index.ts:155-161
const sinceTimestamp = lastAgentTimestamp[chatJid] || '';
const missedMessages = getMessagesSince(
  chatJid,
  sinceTimestamp,
  ASSISTANT_NAME,
);
Why catch-up?
This provides conversation context even when the agent wasn’t mentioned:
[2:32 PM] John: hey everyone, should we do pizza tonight?
[2:33 PM] Sarah: sounds good to me
[2:34 PM] John: any dietary restrictions?
[2:35 PM] Sarah: I'm vegetarian
[2:37 PM] Mike: @Andy what toppings do you recommend?
The agent receives all 5 messages, not just Mike’s trigger message. This allows it to understand:
The conversation is about pizza
Sarah is vegetarian
They’re deciding on toppings
Location: src/index.ts:155-161
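The catch-up query can be sketched as an in-memory filter. The real getMessagesSince runs against SQLite; the message shape and sortable-string timestamps here are assumptions for illustration.

```typescript
// Minimal in-memory sketch of the catch-up query: everything in this chat
// newer than the cursor, in chronological order.
interface StoredMessage {
  chat_jid: string;
  timestamp: string; // sortable string, e.g. ISO 8601
  content: string;
}

function getMessagesSince(
  store: StoredMessage[],
  chatJid: string,
  sinceTimestamp: string,
): StoredMessage[] {
  // An empty cursor ('') compares less than any timestamp, so a fresh
  // chat gets its full history.
  return store
    .filter((m) => m.chat_jid === chatJid && m.timestamp > sinceTimestamp)
    .sort((a, b) => a.timestamp.localeCompare(b.timestamp));
}

const store: StoredMessage[] = [
  { chat_jid: 'g@g.us', timestamp: '2025-01-31T14:32', content: 'pizza tonight?' },
  { chat_jid: 'g@g.us', timestamp: '2025-01-31T14:35', content: "I'm vegetarian" },
  { chat_jid: 'g@g.us', timestamp: '2025-01-31T14:37', content: '@Andy what toppings?' },
];
const missed = getMessagesSince(store, 'g@g.us', '2025-01-31T14:33');
```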
5. Message Formatting
Messages are formatted with timestamps and sender names:
// From src/router.ts:13-25
export function formatMessages(
  messages: NewMessage[],
  timezone: string,
): string {
  const lines = messages.map((m) => {
    const displayTime = formatLocalTime(m.timestamp, timezone);
    return `<message sender="${escapeXml(m.sender_name)}" time="${escapeXml(displayTime)}">${escapeXml(m.content)}</message>`;
  });
  const header = `<context timezone="${escapeXml(timezone)}" />\n`;
  return `${header}<messages>\n${lines.join('\n')}\n</messages>`;
}
Output example:
<context timezone="America/Los_Angeles" />
<messages>
<message sender="John" time="Jan 31 2:32 PM">hey everyone, should we do pizza tonight?</message>
<message sender="Sarah" time="Jan 31 2:33 PM">sounds good to me</message>
<message sender="Mike" time="Jan 31 2:37 PM">@Andy what toppings do you recommend?</message>
</messages>
Location: src/router.ts:13-25
XML escaping prevents injection attacks. Sender names and message content are sanitized.
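A minimal escapeXml matching the sanitization described above might look like this (the project's actual helper may escape additional characters):

```typescript
// Escape the characters that could break out of an XML attribute or
// inject fake tags into the <messages> payload.
function escapeXml(s: string): string {
  return s
    .replace(/&/g, '&amp;') // must run first so later entities aren't doubled
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// A malicious sender name can no longer close the sender attribute or
// inject a fake </message> tag.
const escaped = escapeXml('"><message sender="fake">');
```

Without this, a sender could name themselves something like `"><message sender="Admin">...` and forge messages inside the agent's context.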
Response Routing
6. Container Dispatching
Messages are routed to containers via the GroupQueue:
// From src/index.ts:415-432
if (queue.sendMessage(chatJid, formatted)) {
  // Piped to active container
  logger.debug(
    { chatJid, count: messagesToSend.length },
    'Piped messages to active container',
  );
  lastAgentTimestamp[chatJid] =
    messagesToSend[messagesToSend.length - 1].timestamp;
  saveState();

  // Show typing indicator
  channel.setTyping?.(chatJid, true);
} else {
  // No active container - enqueue for new spawn
  queue.enqueueMessageCheck(chatJid);
}
Two paths:
Active container exists - Messages piped to stdin, agent continues conversation
No active container - Spawn new container, start fresh (with session resume)
Location: src/index.ts:415-432
7. Agent Processing
The agent runner inside the container processes messages:
// From src/index.ts:260-339
async function runAgent(
  group: RegisteredGroup,
  prompt: string,
  chatJid: string,
  onOutput?: (output: ContainerOutput) => Promise<void>,
): Promise<'success' | 'error'> {
  const isMain = group.isMain === true;
  const sessionId = sessions[group.folder];

  // Update snapshots for container to read
  writeTasksSnapshot(group.folder, isMain, getAllTasks());
  writeGroupsSnapshot(group.folder, isMain, getAvailableGroups(), registeredJids);

  const output = await runContainerAgent(
    group,
    { prompt, sessionId, groupFolder: group.folder, chatJid, isMain },
    (proc, containerName) => queue.registerProcess(chatJid, proc, containerName),
    onOutput,
  );

  // Save session for continuity
  if (output.newSessionId) {
    sessions[group.folder] = output.newSessionId;
    setSession(group.folder, output.newSessionId);
  }

  return output.status === 'error' ? 'error' : 'success';
}
Location: src/index.ts:260-339
8. Streaming Output
Agent responses stream back via stdout markers:
// From src/container-runner.ts:324-374
container.stdout.on('data', (data) => {
  parseBuffer += data.toString();

  // Look for OUTPUT_START_MARKER...OUTPUT_END_MARKER pairs
  let startIdx: number;
  while ((startIdx = parseBuffer.indexOf(OUTPUT_START_MARKER)) !== -1) {
    const endIdx = parseBuffer.indexOf(OUTPUT_END_MARKER, startIdx);
    if (endIdx === -1) break; // Incomplete, wait for more data

    const jsonStr = parseBuffer
      .slice(startIdx + OUTPUT_START_MARKER.length, endIdx)
      .trim();
    parseBuffer = parseBuffer.slice(endIdx + OUTPUT_END_MARKER.length);

    const parsed: ContainerOutput = JSON.parse(jsonStr);
    if (parsed.newSessionId) newSessionId = parsed.newSessionId;
    outputChain = outputChain.then(() => onOutput(parsed));
  }
});
Marker format:
---NANOCLAW_OUTPUT_START---
{"status":"success","result":"The weather is sunny","newSessionId":"sess_abc123"}
---NANOCLAW_OUTPUT_END---
This allows robust parsing even when debug logs are mixed into stdout.
Location: src/container-runner.ts:324-374
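The marker scan can be exercised standalone on a stdout chunk that interleaves debug noise with a payload (the extraction helper here is a self-contained sketch of the loop above):

```typescript
const OUTPUT_START_MARKER = '---NANOCLAW_OUTPUT_START---';
const OUTPUT_END_MARKER = '---NANOCLAW_OUTPUT_END---';

// Pull every complete marker-delimited payload out of a buffer, returning
// the unconsumed remainder (an incomplete payload stays buffered).
function extractPayloads(buffer: string): { payloads: string[]; rest: string } {
  const payloads: string[] = [];
  let startIdx: number;
  while ((startIdx = buffer.indexOf(OUTPUT_START_MARKER)) !== -1) {
    const endIdx = buffer.indexOf(OUTPUT_END_MARKER, startIdx);
    if (endIdx === -1) break; // incomplete, wait for more data
    payloads.push(
      buffer.slice(startIdx + OUTPUT_START_MARKER.length, endIdx).trim(),
    );
    buffer = buffer.slice(endIdx + OUTPUT_END_MARKER.length);
  }
  return { payloads, rest: buffer };
}

const chunk =
  'debug: tool call finished\n' +
  `${OUTPUT_START_MARKER}\n{"status":"success","result":"done"}\n${OUTPUT_END_MARKER}\n` +
  'debug: shutting down\n';
const { payloads } = extractPayloads(chunk);
```

Because stdout `data` events can split a payload across chunks, the `endIdx === -1` break (leaving the partial payload in the buffer) is what makes the parse safe.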
9. Response Delivery
// From src/index.ts:207-232
const output = await runAgent(group, prompt, chatJid, async (result) => {
  if (result.result) {
    const raw =
      typeof result.result === 'string'
        ? result.result
        : JSON.stringify(result.result);

    // Strip <internal>...</internal> blocks
    const text = raw.replace(/<internal>[\s\S]*?<\/internal>/g, '').trim();
    if (text) {
      await channel.sendMessage(chatJid, text);
      outputSentToUser = true;
    }
    resetIdleTimer();
  }
  if (result.status === 'success') {
    queue.notifyIdle(chatJid); // Container now idle, can accept piped messages
  }
});
Internal reasoning:
Agents can use <internal> tags for reasoning that shouldn’t be sent to users:
<internal>
The user asked about weather. I should check the forecast API first.
</internal>
The weather in San Francisco is currently 65°F and sunny.
Only “The weather in San Francisco…” is delivered.
Location: src/index.ts:207-232
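The stripping step is a one-liner and can be checked against the example above:

```typescript
// Reply as emitted by the agent: reasoning wrapped in <internal>,
// followed by the user-facing text.
const raw =
  '<internal>\n' +
  'The user asked about weather. I should check the forecast API first.\n' +
  '</internal>\n' +
  'The weather in San Francisco is currently 65°F and sunny.';

// [\s\S] matches across newlines and *? keeps the match non-greedy,
// so multiple <internal> blocks are each removed individually.
const text = raw.replace(/<internal>[\s\S]*?<\/internal>/g, '').trim();
```

The non-greedy `*?` matters: with a greedy `*`, two internal blocks would be merged into one match and any user-facing text between them would be deleted too.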
10. Cursor Management
After successful processing, cursors are updated:
// From src/index.ts:178-183
const previousCursor = lastAgentTimestamp[chatJid] || '';
lastAgentTimestamp[chatJid] =
  missedMessages[missedMessages.length - 1].timestamp;
saveState();
Rollback on error:
// From src/index.ts:239-254
if (output === 'error' || hadError) {
  if (outputSentToUser) {
    // Don't roll back - user already got response
    logger.warn('Agent error after output, skipping rollback');
    return true;
  }
  // Roll back so retry can reprocess
  lastAgentTimestamp[chatJid] = previousCursor;
  saveState();
  return false; // Signals queue to retry
}
This ensures:
No duplicate responses on retry
Failed messages are reprocessed
Conversation order is preserved
Location: src/index.ts:239-254
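The advance-then-maybe-roll-back rule can be condensed into a pure sketch (the state shape and runner callback here are illustrative, not the project's actual types):

```typescript
interface CursorState {
  cursor: string;
}

// Advance the cursor optimistically, then roll back only when the run
// failed AND nothing reached the user - the invariant behind "no
// duplicate responses, failed messages reprocessed".
function processBatch(
  state: CursorState,
  lastMessageTimestamp: string,
  run: () => { status: 'success' | 'error'; outputSentToUser: boolean },
): boolean {
  const previousCursor = state.cursor;
  state.cursor = lastMessageTimestamp; // advance before running
  const result = run();
  if (result.status === 'error' && !result.outputSentToUser) {
    state.cursor = previousCursor; // retry will see the batch again
    return false; // signal the queue to retry
  }
  return true;
}

const state: CursorState = { cursor: 't0' };
const retried = processBatch(state, 't5', () => ({
  status: 'error',
  outputSentToUser: false,
}));
```

If the same run had sent output before failing, the cursor would stay at 't5': replaying the batch would duplicate the message the user already received.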
Message Piping
When a container is already running for a group, new messages can be piped to its stdin:
// From src/group-queue.ts (conceptual)
sendMessage(chatJid: string, formatted: string): boolean {
  const state = this.activeContainers.get(chatJid);
  if (!state || !state.stdinOpen) return false;

  // Pipe formatted messages to container stdin
  state.process.stdin.write(formatted + '\n');
  return true;
}
Benefits:
Faster response (no container spawn overhead)
Maintains conversation state in memory
Reuses Claude SDK session context
Container-side handling:
The agent-runner inside the container polls stdin for new messages:
// From container/agent-runner/src/index.ts (conceptual)
while (true) {
  const line = await readStdinLine();
  if (line) {
    // Process piped message
    await runQuery(line);
  }
  await sleep(1000);
}
Recovery from Crashes
NanoClaw Pro recovers unprocessed messages on startup:
// From src/index.ts:446-458
function recoverPendingMessages(): void {
  for (const [chatJid, group] of Object.entries(registeredGroups)) {
    const sinceTimestamp = lastAgentTimestamp[chatJid] || '';
    const pending = getMessagesSince(chatJid, sinceTimestamp, ASSISTANT_NAME);
    if (pending.length > 0) {
      logger.info(
        { group: group.name, pendingCount: pending.length },
        'Recovery: found unprocessed messages',
      );
      queue.enqueueMessageCheck(chatJid);
    }
  }
}
This handles the crash window between:
Advancing lastTimestamp (all messages seen)
Advancing lastAgentTimestamp[chatJid] (agent processed messages)
Location: src/index.ts:446-458
Metric                     Value        Notes
Poll interval              2 seconds    Configurable via POLL_INTERVAL
Message latency            2-5 seconds  Poll interval + processing time
Container spawn            3-8 seconds  Depends on container runtime
Container reuse            <1 second    When piping to active container
Max concurrent containers  5            Configurable via MAX_CONCURRENT_CONTAINERS
Next Steps
Channel System How channels integrate with message routing
Session Management Conversation continuity and memory