AWS SQS (Simple Queue Service) integration enables you to stream WhatsApp events to SQS FIFO queues for serverless architectures, Lambda functions, and AWS-native applications.
Configuration
Configure SQS connection and event routing through environment variables.
AWS Credentials
# Enable SQS integration
SQS_ENABLED=true

# AWS credentials
SQS_ACCESS_KEY_ID=your_access_key_id
SQS_SECRET_ACCESS_KEY=your_secret_access_key

# AWS account ID
SQS_ACCOUNT_ID=123456789012

# AWS region
SQS_REGION=us-east-1
Never commit AWS credentials to version control. Use environment variables or AWS IAM roles when running on EC2/ECS.
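If you create a dedicated IAM user for the integration, a minimal policy sketch might look like the following. The exact set of actions Evolution API requires may differ, and the resource ARN assumes the default evolution queue prefix in us-east-1:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:CreateQueue",
        "sqs:SendMessage",
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:evolution_*"
    }
  ]
}
```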
Global Queue Settings
# Enable global queues for all instances
SQS_GLOBAL_ENABLED=true

# Queue name prefix
SQS_GLOBAL_PREFIX_NAME=evolution

# Force all events into a single queue
SQS_GLOBAL_FORCE_SINGLE_QUEUE=false
Create individual queues for each event type:
SQS_GLOBAL_ENABLED=true
SQS_GLOBAL_FORCE_SINGLE_QUEUE=false
Creates queues like:
evolution_messages_upsert.fifo
evolution_qrcode_updated.fifo
evolution_connection_update.fifo
Send all events to one queue:
SQS_GLOBAL_ENABLED=true
SQS_GLOBAL_FORCE_SINGLE_QUEUE=true
Creates a single queue:
evolution_singlequeue.fifo
With per-instance configuration (SQS_GLOBAL_ENABLED=false), each instance gets its own queues, named like:
my_instance_messages_upsert.fifo
my_instance_qrcode_updated.fifo
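The naming convention above can be sketched as a small helper. This is illustrative only (not part of Evolution API); the prefix and instance names are the examples used on this page:

```javascript
// Illustrative helper: derive the expected FIFO queue name from the
// naming convention described above.
function queueName(event, { prefix = 'evolution', instance = null, singleQueue = false } = {}) {
  if (singleQueue) return `${prefix}_singlequeue.fifo`;
  // Per-instance queues are prefixed with the instance name instead
  const base = instance ?? prefix;
  return `${base}_${event.toLowerCase()}.fifo`;
}

console.log(queueName('MESSAGES_UPSERT'));
// evolution_messages_upsert.fifo
console.log(queueName('QRCODE_UPDATED', { instance: 'my_instance' }));
// my_instance_qrcode_updated.fifo
```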
Available Events
Configure which events are sent to SQS:
# Application lifecycle
SQS_GLOBAL_APPLICATION_STARTUP=false

# Connection events
SQS_GLOBAL_QRCODE_UPDATED=true
SQS_GLOBAL_CONNECTION_UPDATE=true
SQS_GLOBAL_LOGOUT_INSTANCE=false
SQS_GLOBAL_REMOVE_INSTANCE=false

# Message events
SQS_GLOBAL_MESSAGES_SET=true
SQS_GLOBAL_MESSAGES_UPSERT=true
SQS_GLOBAL_MESSAGES_EDITED=true
SQS_GLOBAL_MESSAGES_UPDATE=true
SQS_GLOBAL_MESSAGES_DELETE=true
SQS_GLOBAL_SEND_MESSAGE=true

# Contact events
SQS_GLOBAL_CONTACTS_SET=true
SQS_GLOBAL_CONTACTS_UPSERT=true
SQS_GLOBAL_CONTACTS_UPDATE=true
SQS_GLOBAL_PRESENCE_UPDATE=true

# Chat events
SQS_GLOBAL_CHATS_SET=true
SQS_GLOBAL_CHATS_UPSERT=true
SQS_GLOBAL_CHATS_UPDATE=true
SQS_GLOBAL_CHATS_DELETE=true

# Group events
SQS_GLOBAL_GROUPS_UPSERT=true
SQS_GLOBAL_GROUPS_UPDATE=true
SQS_GLOBAL_GROUP_PARTICIPANTS_UPDATE=true

# Label events
SQS_GLOBAL_LABELS_EDIT=true
SQS_GLOBAL_LABELS_ASSOCIATION=true

# Call events
SQS_GLOBAL_CALL=true

# Typebot integration
SQS_GLOBAL_TYPEBOT_START=false
SQS_GLOBAL_TYPEBOT_CHANGE_STATUS=false
Per-Instance Configuration
When SQS_GLOBAL_ENABLED=false, configure SQS for specific instances:
curl -X POST https://your-api.com/sqs/set/instance_name \
-H "apikey: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"sqs": {
"enabled": true,
"events": [
"MESSAGES_UPSERT",
"MESSAGES_UPDATE",
"QRCODE_UPDATED",
"CONNECTION_UPDATE"
]
}
}'
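The same request can be issued programmatically. This is a sketch assuming Node 18+ (built-in fetch); the base URL and API key are the placeholders from the curl example:

```javascript
// Build the request body sent to the /sqs/set endpoint.
function buildSqsPayload(events) {
  return { sqs: { enabled: true, events } };
}

// Issue the same POST as the curl example above.
async function setSqsConfig(baseUrl, apiKey, instanceName, events) {
  const response = await fetch(`${baseUrl}/sqs/set/${instanceName}`, {
    method: 'POST',
    headers: { apikey: apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify(buildSqsPayload(events))
  });
  return response.json();
}
```

Usage: `setSqsConfig('https://your-api.com', 'YOUR_API_KEY', 'instance_name', ['MESSAGES_UPSERT', 'QRCODE_UPDATED'])`.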
Queue Setup
Evolution API automatically creates SQS FIFO queues when enabled:
Queue Creation
Queues are created with a .fifo suffix for ordered message delivery:
evolution_messages_upsert.fifo
FIFO Configuration
FIFO: Ensures message ordering
Content-based deduplication: Enabled for global queues
Message deduplication ID: Set for per-instance queues
Message Grouping
Messages are grouped by:
Global: {server_name}-{event}-{instance}
Per-instance: evolution
FIFO queues ensure messages are processed in the order they’re sent, which is critical for maintaining conversation context.
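To illustrate, here is a producer-side sketch of FIFO send parameters using the global grouping pattern above. It mirrors the convention, not Evolution API's internal code:

```javascript
// Build SendMessage parameters for a FIFO queue, using the global
// grouping pattern {server_name}-{event}-{instance} described above.
function fifoSendParams(queueUrl, payload, { server, event, instance }) {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    // Messages sharing a group ID are delivered in order relative to each other
    MessageGroupId: `${server}-${event}-${instance}`
  };
}

const params = fifoSendParams(
  'https://sqs.us-east-1.amazonaws.com/123456789012/evolution_messages_upsert.fifo',
  { hello: 'world' },
  { server: 'evolution', event: 'messages.upsert', instance: 'my_instance' }
);
console.log(params.MessageGroupId); // evolution-messages.upsert-my_instance
```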
Large Payload Handling
SQS has a 256 KB message size limit. Evolution API automatically handles larger payloads:
# Maximum payload size before using S3 (bytes)
SQS_MAX_PAYLOAD_SIZE=262144

# S3 must be enabled for large payloads
S3_ENABLED=true
S3_BUCKET=evolution
S3_ACCESS_KEY=your_s3_access_key
S3_SECRET_KEY=your_s3_secret_key
S3_ENDPOINT=s3.amazonaws.com
S3_REGION=us-east-1
How It Works
Size Check
Evolution API checks if the message exceeds SQS_MAX_PAYLOAD_SIZE.
S3 Upload
If too large, the payload is uploaded to S3 as a JSON file: messages/instance_name_messages_upsert_1709550600000.json
Reference Message
A small message is sent to SQS with the S3 URL:
{
  "event": "messages.upsert",
  "instance": "my_instance",
  "dataType": "s3",
  "data": {
    "fileUrl": "https://s3.amazonaws.com/evolution/messages/..."
  }
}
Download in Consumer
Your consumer downloads the full payload from S3 when dataType: "s3".
If S3 is not enabled and a message exceeds the size limit, it will be dropped with an error logged.
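One possible implementation of the downloadFromS3 helper used in the consumer examples below, assuming the fileUrl is directly fetchable (public or presigned) and Node 18+; a private bucket would need the S3 SDK instead:

```javascript
// Fetch the full event payload that was offloaded to S3.
// Assumes the fileUrl is readable by the consumer (public or presigned).
async function downloadFromS3(fileUrl) {
  const response = await fetch(fileUrl);
  if (!response.ok) {
    throw new Error(`Failed to download payload: HTTP ${response.status}`);
  }
  return response.json(); // the payload is stored as a JSON file
}

// Pure helper: decide whether an event needs the S3 round trip.
function needsS3Download(event) {
  return event.dataType === 's3';
}
```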
Messages sent to SQS have the following structure:
{
  "event": "messages.upsert",
  "instance": "my_instance",
  "dataType": "json",
  "data": {
    "key": {
      "remoteJid": "5511999999999@s.whatsapp.net",
      "fromMe": false,
      "id": "3EB0XXXXX"
    },
    "message": {
      "conversation": "Hello from WhatsApp!"
    },
    "messageTimestamp": 1709550600,
    "pushName": "John Doe"
  },
  "server": "evolution",
  "server_url": "https://your-evolution-api.com",
  "date_time": "2024-03-04T10:30:00.000Z",
  "sender": "5511999999999",
  "apikey": "instance_api_key"
}
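A minimal dispatcher over the envelope fields shown above. The messages.upsert event name comes from the sample; connection.update is an assumed name following the same dot-notation convention:

```javascript
// Route an incoming envelope by its event name.
function routeEvent(event) {
  switch (event.event) {
    case 'messages.upsert':
      // Text messages carry data.message.conversation; media messages do not
      return `message from ${event.sender}: ${event.data.message?.conversation ?? '[media]'}`;
    case 'connection.update': // assumed name, mirroring CONNECTION_UPDATE
      return `connection update for ${event.instance}`;
    default:
      return `unhandled event: ${event.event}`;
  }
}

const sample = {
  event: 'messages.upsert',
  sender: '5511999999999',
  data: { message: { conversation: 'Hello from WhatsApp!' } }
};
console.log(routeEvent(sample)); // message from 5511999999999: Hello from WhatsApp!
```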
Consuming Events
Examples of consuming events from SQS:
const { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } = require('@aws-sdk/client-sqs');

const client = new SQSClient({ region: 'us-east-1' });
const queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/evolution_messages_upsert.fifo';

async function pollMessages() {
  while (true) {
    const command = new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20, // Long polling
      VisibilityTimeout: 30
    });

    const response = await client.send(command);

    if (response.Messages) {
      for (const message of response.Messages) {
        try {
          const event = JSON.parse(message.Body);
          console.log('Received event:', event.event);
          console.log('Instance:', event.instance);

          // Handle S3 payloads
          if (event.dataType === 's3') {
            const fullData = await downloadFromS3(event.data.fileUrl);
            event.data = fullData;
          }

          // Process the event
          await processEvent(event);

          // Delete message after successful processing
          await client.send(new DeleteMessageCommand({
            QueueUrl: queueUrl,
            ReceiptHandle: message.ReceiptHandle
          }));
        } catch (error) {
          console.error('Error processing message:', error);
        }
      }
    }
  }
}

pollMessages().catch(console.error);
AWS Lambda Integration
Process SQS events with AWS Lambda:
exports.handler = async (event) => {
  for (const record of event.Records) {
    const whatsappEvent = JSON.parse(record.body);

    console.log('Event:', whatsappEvent.event);
    console.log('Instance:', whatsappEvent.instance);

    // Handle S3 payloads
    if (whatsappEvent.dataType === 's3') {
      const s3Url = whatsappEvent.data.fileUrl;
      // Download from S3
      const fullData = await downloadFromS3(s3Url);
      whatsappEvent.data = fullData;
    }

    // Process the event
    await processWhatsAppEvent(whatsappEvent);
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ processed: event.Records.length })
  };
};
Configure your Lambda function with the SQS queue as an event source trigger.
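As a sketch, the trigger can be created with the AWS CLI; the function name is a placeholder for your own resource:

```bash
# Wire the FIFO queue to a Lambda function as an event source.
aws lambda create-event-source-mapping \
  --function-name my-whatsapp-processor \
  --event-source-arn arn:aws:sqs:us-east-1:123456789012:evolution_messages_upsert.fifo \
  --batch-size 10
```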
Best Practices
Use Long Polling
Set WaitTimeSeconds: 20 to reduce empty responses and request costs.
Set Appropriate Visibility Timeout
Ensure the timeout is longer than your processing time: VisibilityTimeout: 300 // 5 minutes
Delete Messages After Processing
Always delete messages after successful processing to avoid reprocessing.
Handle Failures with Dead Letter Queue
Configure a DLQ in AWS console for messages that fail repeatedly.
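As a sketch, the redrive policy attached to the source queue might look like this. Note that the DLQ for a FIFO queue must itself be FIFO; the ARN and maxReceiveCount here are placeholders:

```json
{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:evolution_dlq.fifo\",\"maxReceiveCount\":\"5\"}"
}
```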
Monitor Queue Depth
Set CloudWatch alarms for queue depth to detect processing issues.
FIFO queues have a limit of 3,000 messages per second with batching. For higher throughput, consider using standard queues (but lose ordering guarantees).
Troubleshooting
Queues not being created
Verify AWS credentials have sqs:CreateQueue permission
Check AWS account limits for number of queues
Ensure queue name is valid (alphanumeric and hyphens only)
Review CloudWatch logs for detailed error messages
Messages not appearing in queue
Check that specific events are enabled in configuration
Verify SQS_ENABLED=true
Check Evolution API logs for SQS errors
Verify instance is connected to WhatsApp
Check AWS IAM permissions for sqs:SendMessage
Large messages failing
Enable S3 integration: S3_ENABLED=true
Verify S3 credentials and bucket permissions
Check S3 bucket exists in the same region
Review Evolution API logs for S3 upload errors
Messages processed multiple times
This is normal if processing fails or visibility timeout expires. Implement idempotency using message IDs.
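A minimal idempotency sketch; the in-memory Set stands in for a durable store such as DynamoDB or Redis with a TTL:

```javascript
// Track processed message IDs and skip duplicates.
// A Set works for a single-process demo; production needs a durable,
// shared store (e.g. DynamoDB or Redis) with an expiry.
const seen = new Set();

function processOnce(messageId, handler) {
  if (seen.has(messageId)) return false; // duplicate: skip
  seen.add(messageId);
  handler();
  return true;
}

let count = 0;
processOnce('msg-1', () => count++);
processOnce('msg-1', () => count++); // duplicate ignored
console.log(count); // 1
```

SQS message IDs (or the event's own key.id for message events) are natural candidates for the deduplication key.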