Guardrails provide content safety, security, and compliance controls for your LLM applications. Detect and block prompt injections, PII, secrets, and custom content violations.
Get Guardrail Configuration
Retrieve the guardrail configuration for an organization.
Guardrail configuration object (null if not configured)
Whether guardrails are enabled
systemRules: System rule configurations. Each rule has an action ('block' | 'redact' | 'warn' | 'allow') specifying the action to take.
Jailbreak attempt detection
PII (Personally Identifiable Information) detection
API keys and secrets detection
Document leakage prevention
Allowed MIME types for file uploads
piiAction ('block' | 'redact' | 'warn' | 'allow' | null): Default action for PII detection
Configuration creation date
curl https://api.llmgateway.io/guardrails/config/org_abc123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "id": "gc_xyz789",
  "organizationId": "org_abc123",
  "enabled": true,
  "systemRules": {
    "prompt_injection": {
      "enabled": true,
      "action": "block"
    },
    "jailbreak": {
      "enabled": true,
      "action": "block"
    },
    "pii_detection": {
      "enabled": true,
      "action": "redact"
    },
    "secrets": {
      "enabled": true,
      "action": "block"
    },
    "file_types": {
      "enabled": true,
      "action": "block"
    },
    "document_leakage": {
      "enabled": false,
      "action": "warn"
    }
  },
  "maxFileSizeMb": 10,
  "allowedFileTypes": [
    "image/jpeg",
    "image/png",
    "image/gif",
    "image/webp"
  ],
  "piiAction": "redact",
  "createdAt": "2024-01-10T10:00:00Z",
  "updatedAt": "2024-01-15T14:30:00Z"
}
Update Guardrail Configuration
Create or update the guardrail configuration.
Enable or disable all guardrails
systemRules: System rule configurations. Each rule takes an action ('block' | 'redact' | 'warn' | 'allow').
Jailbreak detection config
Array of allowed MIME types
piiAction ('block' | 'redact' | 'warn' | 'allow'): Default PII action
curl -X PUT https://api.llmgateway.io/guardrails/config/org_abc123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"enabled": true,
"systemRules": {
"prompt_injection": {
"enabled": true,
"action": "block"
},
"pii_detection": {
"enabled": true,
"action": "redact"
}
},
"maxFileSizeMb": 20,
"piiAction": "redact"
}'
{
  "id": "gc_xyz789",
  "organizationId": "org_abc123",
  "enabled": true,
  "systemRules": {
    "prompt_injection": {
      "enabled": true,
      "action": "block"
    },
    "jailbreak": {
      "enabled": true,
      "action": "block"
    },
    "pii_detection": {
      "enabled": true,
      "action": "redact"
    },
    "secrets": {
      "enabled": true,
      "action": "block"
    },
    "file_types": {
      "enabled": true,
      "action": "block"
    },
    "document_leakage": {
      "enabled": false,
      "action": "warn"
    }
  },
  "maxFileSizeMb": 20,
  "allowedFileTypes": [
    "image/jpeg",
    "image/png",
    "image/gif",
    "image/webp"
  ],
  "piiAction": "redact",
  "createdAt": "2024-01-10T10:00:00Z",
  "updatedAt": "2024-01-16T09:15:00Z"
}
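Before sending the PUT body, a quick client-side sanity check can catch typos in action values. A minimal sketch (the validate_system_rules helper is hypothetical; the API performs its own validation on the server):

```python
VALID_ACTIONS = {"block", "redact", "warn", "allow"}

def validate_system_rules(system_rules):
    """Hypothetical client-side sanity check for the systemRules field
    of the PUT body. Catches typos before the request is sent; the
    gateway still validates server-side."""
    for name, cfg in system_rules.items():
        if not isinstance(cfg.get("enabled"), bool):
            raise ValueError(f"{name}: 'enabled' must be a boolean")
        if cfg.get("action") not in VALID_ACTIONS:
            raise ValueError(f"{name}: invalid action {cfg.get('action')!r}")
```

A typo like "deny" instead of "block" then fails fast locally instead of producing a server-side validation error.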
Reset Configuration
Reset guardrail configuration to defaults.
curl -X POST https://api.llmgateway.io/guardrails/config/org_abc123/reset \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
List Custom Rules
Retrieve all custom guardrail rules for an organization.
Array of custom rule objects
curl https://api.llmgateway.io/guardrails/rules/org_abc123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "rules": [
    {
      "id": "rule_123",
      "organizationId": "org_abc123",
      "name": "Block Competitor Names",
      "type": "blocked_terms",
      "config": {
        "type": "blocked_terms",
        "terms": ["CompetitorA", "CompetitorB"],
        "matchType": "contains",
        "caseSensitive": false
      },
      "priority": 100,
      "enabled": true,
      "action": "block",
      "createdAt": "2024-01-10T10:00:00Z",
      "updatedAt": "2024-01-10T10:00:00Z"
    },
    {
      "id": "rule_456",
      "organizationId": "org_abc123",
      "name": "Credit Card Pattern",
      "type": "custom_regex",
      "config": {
        "type": "custom_regex",
        "pattern": "\\b\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}\\b"
      },
      "priority": 90,
      "enabled": true,
      "action": "redact",
      "createdAt": "2024-01-12T14:30:00Z",
      "updatedAt": "2024-01-12T14:30:00Z"
    }
  ]
}
Create Custom Rule
Create a new custom guardrail rule.
type ('blocked_terms' | 'custom_regex' | 'topic_restriction', required): Rule type
Rule configuration (varies by type)
Rule priority (higher = checked first)
action ('block' | 'redact' | 'warn' | 'allow', default "block"): Action to take on violation
Rule Types
Blocked Terms
Block or redact specific words/phrases:
{
  "type": "blocked_terms",
  "config": {
    "type": "blocked_terms",
    "terms": ["badword1", "badword2"],
    "matchType": "contains",
    "caseSensitive": false
  }
}
Match Types:
exact: Exact word match
contains: Substring match
regex: Regular expression pattern
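The three match types can be sketched in a few lines of Python. This is illustrative only (term_matches is a hypothetical helper mirroring the documented semantics, not the gateway's implementation):

```python
import re

def term_matches(term, text, match_type, case_sensitive=False):
    """Illustrative matcher for the blocked_terms match types.
    Hypothetical helper; mirrors the documented semantics only."""
    flags = 0 if case_sensitive else re.IGNORECASE
    if match_type == "exact":
        # Whole-word match: "bad" should not match inside "badge".
        return re.search(rf"\b{re.escape(term)}\b", text, flags) is not None
    if match_type == "contains":
        haystack = text if case_sensitive else text.lower()
        needle = term if case_sensitive else term.lower()
        return needle in haystack
    if match_type == "regex":
        # The term itself is treated as a regular expression pattern.
        return re.search(term, text, flags) is not None
    raise ValueError(f"unknown matchType: {match_type}")
```

Note the difference between exact and contains: with term "bad", exact rejects "badge" (no word boundary after "bad") while contains accepts it.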
Custom Regex
Match content using regex patterns:
{
  "type": "custom_regex",
  "config": {
    "type": "custom_regex",
    "pattern": "\\b\\d{3}-\\d{2}-\\d{4}\\b"
  }
}
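For a custom_regex rule with action "redact", the effect is roughly the following. A sketch only: the actual mask string the gateway substitutes is not documented here, so "[REDACTED]" is an assumption.

```python
import re

# The SSN-style pattern from the example config above.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text, pattern=SSN_PATTERN, mask="[REDACTED]"):
    """Replace every match with a mask and let the request continue.
    The mask string is an assumption for illustration."""
    return pattern.sub(mask, text)
```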
Topic Restriction
Block or allow specific topics:
{
  "type": "topic_restriction",
  "config": {
    "type": "topic_restriction",
    "blockedTopics": ["violence", "illegal activities"],
    "allowedTopics": ["technology", "education"]
  }
}
cURL - Blocked Terms
cURL - Custom Regex
curl -X POST https://api.llmgateway.io/guardrails/rules/org_abc123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Block Profanity",
"type": "blocked_terms",
"config": {
"type": "blocked_terms",
"terms": ["badword1", "badword2"],
"matchType": "contains",
"caseSensitive": false
},
"priority": 100,
"action": "block"
}'
{
  "id": "rule_789",
  "organizationId": "org_abc123",
  "name": "Block Profanity",
  "type": "blocked_terms",
  "config": {
    "type": "blocked_terms",
    "terms": ["badword1", "badword2"],
    "matchType": "contains",
    "caseSensitive": false
  },
  "priority": 100,
  "enabled": true,
  "action": "block",
  "createdAt": "2024-01-16T10:00:00Z",
  "updatedAt": "2024-01-16T10:00:00Z"
}
Update Custom Rule
Update an existing custom rule.
curl -X PATCH https://api.llmgateway.io/guardrails/rules/org_abc123/rule_789 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"enabled": false,
"priority": 110
}'
Delete Custom Rule
Delete a custom guardrail rule.
curl -X DELETE https://api.llmgateway.io/guardrails/rules/org_abc123/rule_789 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
List Violations
Retrieve guardrail violations with filtering and pagination.
Pagination cursor from previous response
Number of results per page (1-100)
Filter by start date (ISO 8601 with timezone)
Filter by end date (ISO 8601 with timezone)
actionTaken ('blocked' | 'redacted' | 'warned'): Filter by action taken
Filter by specific rule ID
curl "https://api.llmgateway.io/guardrails/violations/org_abc123?limit=50&actionTaken=blocked" \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "violations": [
    {
      "id": "vio_123",
      "organizationId": "org_abc123",
      "logId": "log_456",
      "ruleId": "system:prompt_injection",
      "ruleName": "Prompt Injection Detection",
      "category": "injection",
      "actionTaken": "blocked",
      "matchedPattern": "ignore previous instructions",
      "matchedContent": "Ignore previous instructions and...",
      "contentHash": "abc123def456",
      "apiKeyId": "key_xyz789",
      "model": "gpt-4o",
      "createdAt": "2024-01-16T10:30:00Z"
    }
  ],
  "pagination": {
    "nextCursor": "vio_124",
    "hasMore": true,
    "limit": 50
  }
}
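Walking all pages follows the usual cursor pattern: pass nextCursor back as the pagination cursor until hasMore is false. A transport-agnostic sketch (fetch_page is an injected callable you supply, e.g. a wrapper around requests.get with your session token):

```python
def iter_violations(fetch_page, limit=50):
    """Walk the cursor-paginated violations endpoint.

    fetch_page(cursor, limit) is an injected callable returning the
    documented response shape:
      {"violations": [...], "pagination": {"nextCursor": ..., "hasMore": ...}}
    """
    cursor = None
    while True:
        page = fetch_page(cursor, limit)
        yield from page["violations"]
        pagination = page["pagination"]
        if not pagination["hasMore"]:
            break
        cursor = pagination["nextCursor"]
```

Injecting the transport keeps the pagination logic testable without hitting the network.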
Get Violation Statistics
Retrieve aggregated violation statistics.
Number of days to analyze (1-90)
curl "https://api.llmgateway.io/guardrails/stats/org_abc123?days=30" \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "blocked": 45,
  "redacted": 23,
  "warned": 12,
  "total": 80
}
Test Content
Test content against guardrail rules without logging violations.
curl -X POST https://api.llmgateway.io/guardrails/test/org_abc123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": "This is test content with a badword1 in it"
}'
{
  "passed": false,
  "blocked": true,
  "violations": [
    {
      "ruleId": "rule_789",
      "ruleName": "Block Profanity",
      "category": "blocked_terms",
      "action": "block",
      "matchedPattern": "badword1",
      "matchedContent": "badword1"
    }
  ],
  "rulesChecked": 5
}
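A small helper for dry-running content through this endpoint before deploying rule changes. A sketch only: post is an injected callable (e.g. a wrapper around requests.post that adds your session token), which keeps the helper transport-agnostic.

```python
def check_content(post, org_id, content):
    """Dry-run content against guardrails via the test endpoint.

    post(url, body) is an injected callable returning the parsed JSON
    response; this sketch just extracts the documented fields.
    """
    url = f"https://api.llmgateway.io/guardrails/test/{org_id}"
    resp = post(url, {"content": content})
    return resp["passed"], resp["violations"]
```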
List System Rules
Get information about available system rules.
curl https://api.llmgateway.io/guardrails/system-rules \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "rules": [
    {
      "id": "system:prompt_injection",
      "name": "Prompt Injection Detection",
      "category": "injection",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:jailbreak",
      "name": "Jailbreak Prevention",
      "category": "jailbreak",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:pii_detection",
      "name": "PII Detection",
      "category": "pii",
      "defaultEnabled": true,
      "defaultAction": "redact"
    },
    {
      "id": "system:secrets",
      "name": "Secrets Detection",
      "category": "secrets",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:file_types",
      "name": "File Type Restrictions",
      "category": "files",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:document_leakage",
      "name": "Document Leakage Prevention",
      "category": "document_leakage",
      "defaultEnabled": false,
      "defaultAction": "warn"
    }
  ]
}
Guardrail Actions
Each rule can have one of four actions:
Block
Reject the request entirely. Returns a 400 error.
Use for:
Security threats (prompt injection, jailbreaks)
Policy violations
Prohibited content
Redact
Remove/mask the matched content and continue processing.
Use for:
PII (emails, phone numbers, SSNs)
Sensitive data
Credit card numbers
Warn
Log the violation but allow the request to proceed.
Use for:
Monitoring suspicious patterns
Low-risk content
Testing new rules
Allow
Explicitly allow content (for whitelisting).
Use for:
Overriding other rules
Approved exceptions
False positive handling
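Putting priorities and actions together, one plausible evaluation model checks rules in descending priority, lets an "allow" match short-circuit later rules, and treats "warn" as log-only. This sketch models semantics consistent with the descriptions above, not the gateway's exact algorithm:

```python
def evaluate(rules, matches):
    """Sketch of priority-ordered rule evaluation (illustrative model).

    rules: list of {"name", "priority", "action"}; matches(rule) says
    whether the rule matches the content. Higher priority is checked
    first; an 'allow' match whitelists the content immediately.
    """
    outcome = "allow"
    for rule in sorted(rules, key=lambda r: r["priority"], reverse=True):
        if not matches(rule):
            continue
        if rule["action"] == "allow":
            return "allow"   # explicit whitelist overrides lower-priority rules
        if rule["action"] == "block":
            return "block"   # hard stop: request is rejected
        if rule["action"] == "redact":
            outcome = "redact"
        # "warn" only logs; it never changes the outcome
    return outcome
```

Under this model, a high-priority "allow" rule implements the whitelisting described above: it wins even when a lower-priority "block" rule would also match.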
Error Responses
403 Forbidden - Not Enterprise
403 Forbidden - Wrong Role
404 Not Found
{
  "message": "Guardrails require an enterprise plan"
}
Best Practices
Start Conservative
Enable core system rules (prompt injection, jailbreak, secrets)
Use the "warn" action initially to monitor false positives
Gradually tighten rules based on violation patterns
Layer Your Defense
Combine system rules with custom rules
Use both blocking and redaction
Set appropriate priorities for rule evaluation
Monitor and Iterate
Review violation logs regularly
Test new rules before enabling
Adjust actions based on false positive rates
Use test endpoint before deploying changes
Performance Impact
Guardrails add 50-200ms latency per request
More rules = higher latency
Prioritize critical rules (higher priority number)
Disable unused rules