Permission Mongo provides built-in audit logging to track all operations, access attempts, and changes to your data.
## Overview

The audit logging system provides:

- **Comprehensive tracking** - Log reads, writes, deletes, and permission denials
- **Multiple storage options** - MongoDB, webhooks, or both
- **Async architecture** - Non-blocking logging with batching and buffering
- **Rich context** - User info, IP addresses, request IDs, latency metrics
- **Change tracking** - Field-level change details for updates
- **TTL support** - Automatic cleanup of old logs
## Audit log structure

Each audit log entry contains:
```json
{
  "_id": "507f1f77bcf86cd799439011",
  "tenant_id": "507f1f77bcf86cd799439012",
  "user_id": "user123",
  "user_roles": ["manager", "user"],
  "action": "update",
  "collection": "documents",
  "doc_id": "507f1f77bcf86cd799439013",
  "doc_version": 3,
  "changes": [
    {
      "field": "status",
      "from": "draft",
      "to": "published"
    },
    {
      "field": "published_at",
      "from": null,
      "to": "2024-01-15T10:30:00Z"
    }
  ],
  "request_id": "req_abc123",
  "ip_address": "192.168.1.100",
  "user_agent": "Mozilla/5.0 ...",
  "timestamp": "2024-01-15T10:30:00.123Z",
  "latency_ms": 45,
  "success": true
}
```
- `tenant_id` - Tenant ID for multi-tenant isolation
- `user_id` - ID of the user who performed the action
- `user_roles` - Roles assigned to the user at the time of the action
- `action` - Operation type: `create`, `read`, `update`, `delete`, `restore`
- `doc_id` - Document ID being accessed
- `doc_version` - Document version number (if versioning is enabled)
- `changes` - Field-level changes for update operations (if `include_changes: true`)
- `request_id` - Request correlation ID for tracing
- `timestamp` - When the operation occurred
- `latency_ms` - Operation duration in milliseconds
- `success` - Whether the operation succeeded
- `error` - Error message if the operation failed
```go
// AuditLog represents a stored audit log entry
type AuditLog struct {
	ID         string        `bson:"_id" json:"id"`
	TenantID   string        `bson:"tenant_id" json:"tenant_id"`
	UserID     string        `bson:"user_id" json:"user_id"`
	UserRoles  []string      `bson:"user_roles" json:"user_roles"`
	Action     string        `bson:"action" json:"action"`
	Collection string        `bson:"collection" json:"collection"`
	DocID      string        `bson:"doc_id" json:"doc_id"`
	DocVersion int           `bson:"doc_version" json:"doc_version"`
	Changes    []FieldChange `bson:"changes,omitempty" json:"changes,omitempty"`
	RequestID  string        `bson:"request_id" json:"request_id"`
	IPAddress  string        `bson:"ip_address" json:"ip_address"`
	UserAgent  string        `bson:"user_agent" json:"user_agent"`
	Timestamp  time.Time     `bson:"timestamp" json:"timestamp"`
	LatencyMs  int64         `bson:"latency_ms" json:"latency_ms"`
	Success    bool          `bson:"success" json:"success"`
	Error      string        `bson:"error,omitempty" json:"error,omitempty"`
}
```
## Storage options

Configure audit log storage in your config:

### MongoDB storage

Store logs in a MongoDB collection:
```yaml
audit:
  enabled: true
  log_reads: false
  log_writes: true
  log_failed: true
  include_changes: true
  storage:
    mongodb:
      enabled: true
      collection: "_pm_audit"
      ttl_days: 90
```
- `collection` (string, default: `"_pm_audit"`) - Collection name for audit logs
- `ttl_days` - Automatically delete logs older than this many days
### Webhook storage

Send logs to an external webhook:
```yaml
audit:
  enabled: true
  log_writes: true
  storage:
    webhook:
      enabled: true
      url: https://api.example.com/webhooks/audit
      headers:
        Authorization: "Bearer ${AUDIT_WEBHOOK_TOKEN}"
        X-Tenant-ID: "${TENANT_ID}"
      batch_size: 100
      flush_interval_seconds: 5
```
- `headers` - HTTP headers to include with webhook requests
- `batch_size` - Number of logs to send per webhook request
- `flush_interval_seconds` - Send batches at least this often (seconds)
### Combined storage

Use both MongoDB and webhook:
```yaml
audit:
  enabled: true
  log_writes: true
  storage:
    mongodb:
      enabled: true
      collection: "_pm_audit"
      ttl_days: 90
    webhook:
      enabled: true
      url: https://api.example.com/webhooks/audit
      batch_size: 50
      flush_interval_seconds: 10
```
## What to log

Control which operations are logged:
```yaml
audit:
  enabled: true
  log_reads: false      # Log read operations
  log_writes: true      # Log create/update/delete
  log_failed: true      # Log failed permission checks
  include_changes: true # Include field-level changes for updates
```
- `log_reads` - Log read operations. Warning: can generate high volume.
- `log_writes` - Log create, update, and delete operations.
- `log_failed` - Log failed permission checks and denied access attempts.
- `include_changes` - Include field-level change details for update operations.
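When `include_changes` is enabled, updates are recorded with per-field before/after values, as in the `changes` array shown earlier. A minimal sketch of how such a field-level diff can be computed (the `diffFields` helper is illustrative, not part of the library; it assumes scalar field values, since comparing maps or slices with `!=` would panic):

```go
package main

import "fmt"

// FieldChange mirrors the "changes" entries in the audit log structure above.
type FieldChange struct {
	Field string      `json:"field"`
	From  interface{} `json:"from"`
	To    interface{} `json:"to"`
}

// diffFields computes field-level changes between two document snapshots.
// Nested documents would need a deeper, type-aware comparison.
func diffFields(before, after map[string]interface{}) []FieldChange {
	var changes []FieldChange
	seen := map[string]bool{}
	for field, oldVal := range before {
		seen[field] = true
		newVal, ok := after[field]
		if !ok {
			// Field removed by the update
			changes = append(changes, FieldChange{Field: field, From: oldVal, To: nil})
			continue
		}
		if oldVal != newVal {
			changes = append(changes, FieldChange{Field: field, From: oldVal, To: newVal})
		}
	}
	for field, newVal := range after {
		if !seen[field] {
			// Field added by the update
			changes = append(changes, FieldChange{Field: field, From: nil, To: newVal})
		}
	}
	return changes
}

func main() {
	before := map[string]interface{}{"status": "draft", "title": "Q1 report"}
	after := map[string]interface{}{
		"status":       "published",
		"title":        "Q1 report",
		"published_at": "2024-01-15T10:30:00Z",
	}
	for _, c := range diffFields(before, after) {
		fmt.Printf("%s: %v -> %v\n", c.Field, c.From, c.To)
	}
}
```

Unchanged fields (like `title` above) are skipped, so audit entries stay small even for wide documents.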
## Async architecture

The audit logger uses an async architecture for performance:

1. Log events are queued in a buffered channel (10,000 capacity)
2. Worker goroutines read from the channel in batches
3. Batches are flushed to MongoDB every 100ms or when reaching 100 logs
4. Webhook batches are sent periodically based on configuration
5. Dropped logs are counted if the queue fills up
```go
// Log logs an audit event (non-blocking)
func (l *Logger) Log(ctx context.Context, event *AuditEvent) error {
	if event == nil {
		return ErrInvalidEvent
	}
	auditLog := l.eventToLog(event)

	// Store in MongoDB if enabled (async, non-blocking)
	if l.config.Storage.MongoDB.Enabled {
		select {
		case l.logChan <- auditLog:
			// Successfully queued
			metrics.AuditLogsTotal.WithLabelValues(event.Action, "true").Inc()
			metrics.AuditQueueSize.Set(float64(len(l.logChan)))
		default:
			// Channel full, increment dropped counter
			atomic.AddInt64(&l.droppedLogs, 1)
			metrics.AuditLogsDropped.Inc()
			metrics.AuditLogsTotal.WithLabelValues(event.Action, "false").Inc()
		}
	}

	// Add to webhook buffer if enabled
	if l.config.Storage.Webhook.Enabled {
		l.buffer.Add(auditLog)
		if l.buffer.Len() >= l.config.Storage.Webhook.BatchSize {
			go l.flushWebhook(context.Background())
		}
	}
	return nil
}
```
This ensures audit logging never blocks application operations.
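The non-blocking guarantee comes from the `select` with a `default` branch: a send on a full channel falls through instead of waiting. A stripped-down, runnable sketch of just that pattern (the `enqueue` helper and tiny buffer are illustrative, not the library's API):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// enqueue attempts a non-blocking send: the default branch fires when
// the channel buffer is full, so the caller never blocks and the entry
// is counted as dropped instead.
func enqueue(ch chan string, dropped *int64, entry string) bool {
	select {
	case ch <- entry:
		return true
	default:
		atomic.AddInt64(dropped, 1)
		return false
	}
}

func main() {
	ch := make(chan string, 2) // tiny buffer to force a drop
	var dropped int64
	for i := 0; i < 3; i++ {
		enqueue(ch, &dropped, fmt.Sprintf("log-%d", i))
	}
	fmt.Printf("queued=%d dropped=%d\n", len(ch), atomic.LoadInt64(&dropped))
	// → queued=2 dropped=1
}
```

The trade-off is intentional: under extreme load some audit entries are dropped (and counted) rather than slowing down the request path.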
### MongoDB worker

Workers batch logs for efficient insertion:
```go
// mongoWorker reads from logChan in batches and flushes to MongoDB
func (l *Logger) mongoWorker() {
	defer l.mongoWg.Done()
	batch := make([]AuditLog, 0, DefaultBatchSize)
	ticker := time.NewTicker(DefaultFlushInterval)
	defer ticker.Stop()

	for {
		select {
		case logEntry, ok := <-l.logChan:
			if !ok {
				// Channel closed, flush remaining logs and exit
				if len(batch) > 0 {
					l.flushMongoBatch(batch)
				}
				return
			}
			batch = append(batch, logEntry)
			// Flush when batch is full
			if len(batch) >= DefaultBatchSize {
				l.flushMongoBatch(batch)
				batch = make([]AuditLog, 0, DefaultBatchSize)
			}
		case <-ticker.C:
			// Flush on timer if there are any logs
			if len(batch) > 0 {
				l.flushMongoBatch(batch)
				batch = make([]AuditLog, 0, DefaultBatchSize)
			}
		}
	}
}
```
## Querying logs

Query audit logs programmatically:
```go
// GetLogs retrieves audit logs matching the filter
func (l *Logger) GetLogs(ctx context.Context, filter *AuditFilter, opts *QueryOpts) ([]*AuditLog, error) {
	collection := l.config.Storage.MongoDB.Collection
	if collection == "" {
		collection = "_pm_audit"
	}
	bsonFilter := l.buildFilter(filter)
	findOpts := &store.FindOptions{
		Sort: bson.D{{Key: "timestamp", Value: -1}},
	}
	if opts != nil {
		if opts.Limit > 0 {
			findOpts.Limit = opts.Limit
		}
		if opts.Skip > 0 {
			findOpts.Skip = opts.Skip
		}
	}
	docs, err := l.store.Find(ctx, collection, bsonFilter, findOpts)
	if err != nil {
		return nil, fmt.Errorf("failed to query audit logs: %w", err)
	}
	return l.docsToLogs(docs), nil
}
```
### Filter options
```go
trueValue := true // Success is a *bool, so it can be left unset to match both outcomes

filter := &audit.AuditFilter{
	TenantID:   "tenant123",
	UserID:     "user456",
	Collection: "documents",
	DocID:      "507f1f77bcf86cd799439011",
	Action:     "update",
	Success:    &trueValue,
	StartTime:  time.Now().Add(-24 * time.Hour),
	EndTime:    time.Now(),
}

logs, err := logger.GetLogs(ctx, filter, &audit.QueryOpts{
	Limit:   100,
	SortBy:  "timestamp",
	SortAsc: false,
})
```
### Get logs for a document
```go
// GetLogsByDocument retrieves audit logs for a specific document
func (l *Logger) GetLogsByDocument(ctx context.Context, collectionName, docID string) ([]*AuditLog, error) {
	return l.GetLogs(ctx, &AuditFilter{
		Collection: collectionName,
		DocID:      docID,
	}, &QueryOpts{
		SortBy:  "timestamp",
		SortAsc: false,
	})
}
```
### Get logs for a user
```go
// GetLogsByUser retrieves audit logs for a specific user
func (l *Logger) GetLogsByUser(ctx context.Context, tenantID, userID string) ([]*AuditLog, error) {
	return l.GetLogs(ctx, &AuditFilter{
		TenantID: tenantID,
		UserID:   userID,
	}, &QueryOpts{
		SortBy:  "timestamp",
		SortAsc: false,
	})
}
```
## Indexes

The audit logger creates these indexes for performance:
```js
// TTL index for automatic cleanup
{ "timestamp": 1 }

// Tenant + user queries
{ "tenant_id": 1, "user_id": 1 }

// Document queries
{ "collection": 1, "doc_id": 1 }

// Action queries
{ "action": 1 }

// Request correlation
{ "request_id": 1 }

// Time-based queries with tenant
{ "tenant_id": 1, "timestamp": -1 }
```
```go
// EnsureIndexes creates required indexes on the audit collection
func (l *Logger) EnsureIndexes(ctx context.Context) error {
	collection := l.config.Storage.MongoDB.Collection
	if collection == "" {
		collection = "_pm_audit"
	}
	ttlDays := l.config.Storage.MongoDB.TTLDays
	if ttlDays <= 0 {
		ttlDays = 90
	}
	ttl := time.Duration(ttlDays) * 24 * time.Hour

	indexes := []store.IndexConfig{
		// TTL index for automatic cleanup
		{
			Fields: []string{"timestamp"},
			Order:  []int{1},
			TTL:    &ttl,
			Name:   "timestamp_ttl",
		},
		// Compound index for tenant + user queries
		{
			Fields: []string{"tenant_id", "user_id"},
			Order:  []int{1, 1},
			Name:   "tenant_user",
		},
		// Compound index for document queries
		{
			Fields: []string{"collection", "doc_id"},
			Order:  []int{1, 1},
			Name:   "collection_doc",
		},
	}
	return l.store.EnsureIndexes(ctx, collection, indexes)
}
```
## Webhook batching

Webhooks are sent in batches for efficiency:
```json
{
  "batch_id": "batch-1705318200000",
  "count": 50,
  "timestamp": "2024-01-15T10:30:00Z",
  "logs": [
    { /* audit log 1 */ },
    { /* audit log 2 */ }
    // ...
  ]
}
```
The webhook endpoint should respond with:

- `200-299` - Success, batch processed
- `429` - Rate limited, will retry
- `400-499` (except 429) - Client error, won't retry
- `500-599` - Server error, will retry
## Metrics

The audit logger exposes Prometheus metrics:

- `audit_logs_total{action, success}` - Total logs by action and outcome
- `audit_logs_dropped` - Number of logs dropped due to a full queue
- `audit_queue_size` - Current size of the log queue
- `audit_batch_size` - Size of batches flushed to MongoDB
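Since dropped logs indicate a saturated queue, `audit_logs_dropped` is a natural alerting target. A possible Prometheus alerting rule (assuming a standard Prometheus/Alertmanager setup; the rule name and threshold are illustrative):

```yaml
groups:
  - name: audit
    rules:
      - alert: AuditLogsDropped
        expr: increase(audit_logs_dropped[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Audit logs are being dropped; the log queue is saturated"
```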
## Best practices
**Disable read logging in production**

Set `log_reads: false` to avoid high-volume logging that can impact performance and storage.
**Set retention based on compliance requirements**

Configure `ttl_days` to match your retention policy. Common values: 90 days (standard), 365 days (compliance), 2555 days (7 years for regulations).
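For example, a configuration aimed at a seven-year regulatory retention window might look like this (values are illustrative; adjust to your own compliance policy):

```yaml
audit:
  enabled: true
  log_writes: true
  include_changes: true
  storage:
    mongodb:
      enabled: true
      collection: "_pm_audit"
      ttl_days: 2555  # 7-year retention for regulated data
```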
**Monitor dropped logs**

Watch the `audit_logs_dropped` metric. If logs are being dropped, increase the channel buffer size or add more MongoDB workers.
**Use webhooks for real-time alerts**

Configure webhook storage to send critical events to alerting systems in near real-time.
**Include changes for compliance**

Set `include_changes: true` to maintain a detailed audit trail of what changed, who changed it, and when.
**Secure webhook endpoints**

Use HTTPS and authentication headers for webhook endpoints to prevent unauthorized access to audit data.
## Related

- **Versioning** - Track document changes over time
- **RBAC** - Permission checks that generate audit logs
- **Hooks** - Trigger webhooks on operations