Permission Mongo provides efficient batch operations for creating, updating, and deleting multiple documents in a single request, with full RBAC enforcement.
Overview
Batch operation features:
Bulk create - Insert multiple documents at once
Bulk update - Update multiple documents by IDs or filter
Bulk delete - Delete multiple documents by IDs or filter
RBAC enforcement - All permissions checked automatically
Partial success - Some documents can succeed while others fail
Hooks support - Pre/post hooks executed for each document
Configurable limits - Prevent abuse with max batch sizes
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:1-38
Batch Create
Create multiple documents in a single request:
Step 1: Prepare Documents
POST /products/batch
Content-Type: application/json
Authorization: Bearer <token>

{
  "documents": [
    { "name": "Widget A", "price": 19.99, "category": "widgets" },
    { "name": "Widget B", "price": 29.99, "category": "widgets" },
    { "name": "Gadget C", "price": 39.99, "category": "gadgets" }
  ]
}
Step 2: Automatic Field Addition
Permission Mongo automatically adds:
tenant_id from JWT
created_by from JWT
created_at timestamp
updated_at timestamp
Each document is validated against the schema:
collections:
  products:
    fields:
      name:
        type: string
        required: true
      price:
        type: number
        required: true
        min: 0
Invalid documents are reported in the response.
{
  "created": [
    "507f1f77bcf86cd799439011",
    "507f1f77bcf86cd799439012",
    "507f1f77bcf86cd799439013"
  ],
  "failed": []
}
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:113-300
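Since invalid documents cost a round trip, the same schema rules can be checked client-side before sending a large batch. A minimal sketch, assuming the products schema above; `prevalidateProduct` and `partitionBatch` are illustrative helpers, not part of Permission Mongo:

```javascript
// Mirror the products schema above: name (string, required) and
// price (number, required, min 0).
function prevalidateProduct(doc) {
  const errors = [];
  if (typeof doc.name !== 'string' || doc.name.length === 0) {
    errors.push('name is required');
  }
  if (typeof doc.price !== 'number') {
    errors.push('price is required');
  } else if (doc.price < 0) {
    errors.push('price must be at least 0');
  }
  return errors;
}

// Split a batch into documents worth sending and ones to fix locally.
function partitionBatch(documents) {
  const valid = [], invalid = [];
  documents.forEach((doc, index) => {
    const errors = prevalidateProduct(doc);
    if (errors.length === 0) valid.push(doc);
    else invalid.push({ index, errors });
  });
  return { valid, invalid };
}
```

Only the `valid` slice is sent; the `invalid` entries are surfaced locally, which keeps oversized or obviously broken batches off the wire.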
Batch Create with Partial Failures
Some documents can fail validation while others succeed:
Request:
POST /products/batch

{
  "documents": [
    { "name": "Valid Product", "price": 10.00 },
    { "name": "Invalid Product", "price": -5.00 },  // Violates min: 0
    { "price": 20.00 }                              // Missing required field: name
  ]
}
Response:
{
  "created": [
    "507f1f77bcf86cd799439011"  // First document succeeded
  ],
  "failed": [
    {
      "index": 1,
      "error": {
        "code": "schema_validation",
        "message": "price must be at least 0"
      }
    },
    {
      "index": 2,
      "error": {
        "code": "schema_validation",
        "message": "name is required"
      }
    }
  ]
}
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:169-249
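Because each failed entry carries the document's position in the request, a client can map failures back to the original documents to fix and resubmit them. A sketch; `collectFailures` is an illustrative helper, not part of the API:

```javascript
// Pair each entry of the response's "failed" array with the original
// document it refers to, using the reported index.
function collectFailures(documents, response) {
  return response.failed.map(({ index, error }) => ({
    document: documents[index],
    code: error.code,
    message: error.message,
  }));
}
```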
Batch Update
Update multiple documents by IDs or filter:
Update by IDs
PUT /products/batch
Content-Type: application/json
Authorization: Bearer <token>

{
  "ids": [
    "507f1f77bcf86cd799439011",
    "507f1f77bcf86cd799439012",
    "507f1f77bcf86cd799439013"
  ],
  "update": {
    "status": "active",
    "featured": true
  }
}
Response:
{
  "matched": 3,
  "modified": 3,
  "failed": []
}
Update by Filter
PUT /products/batch

{
  "filter": {
    "category": "widgets",
    "status": "draft"
  },
  "update": {
    "status": "published"
  }
}
Response:
{
  "matched": 15,   // Found 15 matching documents
  "modified": 15,  // All were updated
  "failed": []
}
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:302-459
RBAC Filtering
Batch updates respect RBAC policies:
policies:
  products:
    editor:
      actions: [update]
      when: doc.created_by == user.id
Update attempt:
PUT /products/batch

{
  "ids": ["id1", "id2", "id3"],  // Some owned by user, some not
  "update": { "status": "published" }
}
Result: Only documents owned by the user are updated. Others are silently filtered (not counted in matched).
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:408-412
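Because the filtering is silent, a client that needs to know can compare the number of IDs it sent against the `matched` count in the response. A sketch; `skippedCount` is an illustrative helper, not part of the API:

```javascript
// Documents filtered out by RBAC are simply absent from "matched",
// so the difference reveals how many the caller could not update.
function skippedCount(requestedIds, response) {
  return requestedIds.length - response.matched;
}
```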
Batch Delete
Delete multiple documents by IDs or filter:
Delete by IDs
DELETE /products/batch
Content-Type: application/json
Authorization: Bearer <token>

{
  "ids": [
    "507f1f77bcf86cd799439011",
    "507f1f77bcf86cd799439012"
  ]
}
Response: the number of documents deleted and a failed array (mirroring the batch update response).
Delete by Filter
DELETE /products/batch

{
  "filter": {
    "status": "archived",
    "updated_at": { "$lt": "2023-01-01T00:00:00Z" }
  }
}
Response: the number of documents deleted and a failed array.
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:461-610
Batch Size Limits
Configure maximum batch sizes to prevent abuse:
server:
  max_batch_size: 1000  # Default
Request exceeding limit:
POST /products/batch

{
  "documents": [ /* 1500 documents */ ]
}
Error response:
{
  "error": {
    "code": "bad_request",
    "message": "Batch size exceeds maximum limit of 1000",
    "details": {
      "max_batch_size": 1000,
      "requested": 1500
    }
  }
}
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:161-167
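Rather than risk the error above, a client can pre-split its documents into chunks that respect the configured limit. A sketch; `toBatches` is an illustrative helper, not part of the API:

```javascript
// Split an arbitrarily large document array into chunks that stay
// within the server's max_batch_size (1000 by default).
function toBatches(documents, maxBatchSize = 1000) {
  const batches = [];
  for (let i = 0; i < documents.length; i += maxBatchSize) {
    batches.push(documents.slice(i, i + maxBatchSize));
  }
  return batches;
}
```

Each chunk can then be POSTed in turn, as in the bulk-import pattern further below.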
Hooks Execution
Hooks are executed for each document in batch operations:
Pre-Create Hooks
hooks:
  products:
    pre_create:
      - url: https://api.example.com/validate-product
        method: POST
Batch create with 3 documents:
Hook called 3 times (once per document)
If any hook fails, that document is excluded
Other documents proceed
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:212-250
Post-Create Hooks
hooks:
  products:
    post_create:
      - url: https://api.example.com/index-product
        method: POST
Execution:
Called after successful insertion
Best-effort (failures logged but don’t affect response)
Runs asynchronously for performance
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:272-285
Versioning Support
Batch operations support versioning:
collections:
  products:
    versioning:
      enabled: true
      mode: full
Batch update behavior:
Fetch current documents
Save versions to _pm_versions collection
Apply updates
Return results
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:414-424
Error Handling
All-or-Nothing Create
By default, batch create uses partial success. For all-or-nothing behavior, pass the atomic query parameter:
POST /products/batch?atomic=true

{
  "documents": [
    { "name": "Product 1", "price": 10 },
    { "name": "Product 2", "price": -5 }  // Invalid
  ]
}
Result: Both fail if any document is invalid (transaction behavior).
Validation Errors
Detailed errors for each failed document:
{
  "created": [],
  "failed": [
    {
      "index": 0,
      "error": {
        "code": "schema_validation",
        "message": "price is required"
      }
    },
    {
      "index": 2,
      "error": {
        "code": "schema_validation",
        "message": "name must be at least 3 characters"
      }
    }
  ]
}
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:193-206
Batch Size Recommendations
Small batches (1-100):
Fast validation
Quick failure detection
Better for interactive UIs
Medium batches (100-500):
Good balance
Efficient network usage
Reasonable error handling
Large batches (500-1000):
Maximum throughput
Higher memory usage
Longer response times
Parallel Processing
Batch operations use MongoDB’s bulk write:
// Internal implementation
insertedIDs, err := collection.InsertMany(ctx, documents)
MongoDB handles parallelization internally.
Source: /home/daytona/workspace/source/pkg/api/handlers_batch.go:253-269
Testing Batch Operations
func TestBatchCreate(t *testing.T) {
    token := createToken("user-123", "acme-corp", []string{"editor"})

    // Batch create 3 products
    resp := batchCreateProducts(token, map[string]interface{}{
        "documents": []map[string]interface{}{
            {"name": "Product 1", "price": 10.00},
            {"name": "Product 2", "price": 20.00},
            {"name": "Product 3", "price": 30.00},
        },
    })
    assert.Equal(t, 201, resp.StatusCode)

    result := resp.JSON
    created := result["created"].([]interface{})
    assert.Equal(t, 3, len(created))
    failed := result["failed"].([]interface{})
    assert.Equal(t, 0, len(failed))

    // Verify all products exist
    for _, id := range created {
        resp := getProduct(token, id.(string))
        assert.Equal(t, 200, resp.StatusCode)
    }
}
func TestBatchUpdateWithRBAC(t *testing.T) {
    token := createToken("user-123", "acme-corp", []string{"editor"})

    // Create 2 products (owned by user-123)
    id1 := createProduct(token, map[string]interface{}{
        "name": "Product 1", "price": 10,
    })
    id2 := createProduct(token, map[string]interface{}{
        "name": "Product 2", "price": 20,
    })

    // Create 1 product as a different user
    otherToken := createToken("user-456", "acme-corp", []string{"editor"})
    id3 := createProduct(otherToken, map[string]interface{}{
        "name": "Product 3", "price": 30,
    })

    // Try to update all 3 products
    resp := batchUpdateProducts(token, map[string]interface{}{
        "ids": []string{id1, id2, id3},
        "update": map[string]interface{}{
            "status": "published",
        },
    })
    assert.Equal(t, 200, resp.StatusCode)

    result := resp.JSON
    assert.Equal(t, 2, result["matched"])  // Only owned products matched
    assert.Equal(t, 2, result["modified"])
}
Best Practices
Use batch operations for bulk imports and updates
Keep batch sizes under 1000 documents
Handle partial failures gracefully in your application
Test RBAC filtering with batch operations
Use filters instead of IDs when possible (more efficient)
Monitor batch operation performance and adjust limits
Validate data before sending large batches
Use atomic=true for critical operations
Common Patterns
Pattern: Bulk Import
// Import CSV data
const products = parseCSV(csvData);

// Process in chunks of 500
for (let i = 0; i < products.length; i += 500) {
  const chunk = products.slice(i, i + 500);
  const response = await fetch('/products/batch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ documents: chunk })
  });
  const result = await response.json();
  console.log(`Created: ${result.created.length}, Failed: ${result.failed.length}`);
}
Pattern: Bulk Status Update
// Publish all reviewed draft products
await fetch('/products/batch', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filter: { status: 'draft', reviewed: true },
    update: { status: 'published', published_at: new Date() }
  })
});
Pattern: Cleanup Old Records
// Delete archived products older than 1 year
await fetch('/products/batch', {
  method: 'DELETE',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filter: {
      status: 'archived',
      archived_at: { $lt: new Date(Date.now() - 365 * 24 * 60 * 60 * 1000) }
    }
  })
});
Troubleshooting
Batch Size Exceeded
Error: Batch size exceeds maximum limit of 1000
Solution: Split into smaller batches or increase max_batch_size in config
Partial Failures
Issue: Some documents succeed, others fail
Solution: Check failed array in response for validation errors
RBAC Filtering
Issue: Fewer documents updated than expected
Solution: Verify user has permission to access all target documents
Slow Performance
Issue: Batch operations slow with large batches
Solution: Reduce batch size, add indexes, check hook performance
Next Steps
Schema Definition - Define your data models
RBAC Policies - Configure access control