Guardrails are an Enterprise-only feature. Contact [email protected] to upgrade.
Guardrails provide content safety, security, and compliance controls for your LLM applications. Detect and block prompt injections, PII, secrets, and custom content violations.

Get Guardrail Configuration

Retrieve the guardrail configuration for an organization.
Path parameters:
  • organizationId (string, required): Organization ID

Response fields:
  • config (object | null): Guardrail configuration object (null if not configured)
curl https://api.llmgateway.io/guardrails/config/org_abc123 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "id": "gc_xyz789",
  "organizationId": "org_abc123",
  "enabled": true,
  "systemRules": {
    "prompt_injection": {
      "enabled": true,
      "action": "block"
    },
    "jailbreak": {
      "enabled": true,
      "action": "block"
    },
    "pii_detection": {
      "enabled": true,
      "action": "redact"
    },
    "secrets": {
      "enabled": true,
      "action": "block"
    },
    "file_types": {
      "enabled": true,
      "action": "block"
    },
    "document_leakage": {
      "enabled": false,
      "action": "warn"
    }
  },
  "maxFileSizeMb": 10,
  "allowedFileTypes": [
    "image/jpeg",
    "image/png",
    "image/gif",
    "image/webp"
  ],
  "piiAction": "redact",
  "createdAt": "2024-01-10T10:00:00Z",
  "updatedAt": "2024-01-15T14:30:00Z"
}
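The same request can be built with Python's standard library. This is an illustrative sketch based on the curl example above; the helper name and token placeholder are ours, not part of the API:

```python
import urllib.request

API_BASE = "https://api.llmgateway.io"

def build_config_request(org_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for the guardrail config."""
    return urllib.request.Request(
        f"{API_BASE}/guardrails/config/{org_id}",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = build_config_request("org_abc123", "YOUR_SESSION_TOKEN")
print(req.full_url)  # https://api.llmgateway.io/guardrails/config/org_abc123
# Send with urllib.request.urlopen(req) to receive the JSON config shown above.
```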

Update Guardrail Configuration

Create or update the guardrail configuration.
Path parameters:
  • organizationId (string, required): Organization ID

Body parameters:
  • enabled (boolean): Enable or disable all guardrails
  • systemRules (object): System rule configurations
  • maxFileSizeMb (number): Maximum file size in MB
  • allowedFileTypes (string[]): Array of allowed MIME types
  • piiAction ('block' | 'redact' | 'warn' | 'allow'): Default PII action
curl -X PUT https://api.llmgateway.io/guardrails/config/org_abc123 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "enabled": true,
    "systemRules": {
      "prompt_injection": {
        "enabled": true,
        "action": "block"
      },
      "pii_detection": {
        "enabled": true,
        "action": "redact"
      }
    },
    "maxFileSizeMb": 20,
    "piiAction": "redact"
  }'
{
  "id": "gc_xyz789",
  "organizationId": "org_abc123",
  "enabled": true,
  "systemRules": {
    "prompt_injection": {
      "enabled": true,
      "action": "block"
    },
    "jailbreak": {
      "enabled": true,
      "action": "block"
    },
    "pii_detection": {
      "enabled": true,
      "action": "redact"
    },
    "secrets": {
      "enabled": true,
      "action": "block"
    },
    "file_types": {
      "enabled": true,
      "action": "block"
    },
    "document_leakage": {
      "enabled": false,
      "action": "warn"
    }
  },
  "maxFileSizeMb": 20,
  "allowedFileTypes": [
    "image/jpeg",
    "image/png",
    "image/gif",
    "image/webp"
  ],
  "piiAction": "redact",
  "createdAt": "2024-01-10T10:00:00Z",
  "updatedAt": "2024-01-16T09:15:00Z"
}

Reset Configuration

Reset guardrail configuration to defaults.
Path parameters:
  • organizationId (string, required): Organization ID
curl -X POST https://api.llmgateway.io/guardrails/config/org_abc123/reset \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"

List Custom Rules

Retrieve all custom guardrail rules for an organization.
Path parameters:
  • organizationId (string, required): Organization ID

Response fields:
  • rules (array): Array of custom rule objects
curl https://api.llmgateway.io/guardrails/rules/org_abc123 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "rules": [
    {
      "id": "rule_123",
      "organizationId": "org_abc123",
      "name": "Block Competitor Names",
      "type": "blocked_terms",
      "config": {
        "type": "blocked_terms",
        "terms": ["CompetitorA", "CompetitorB"],
        "matchType": "contains",
        "caseSensitive": false
      },
      "priority": 100,
      "enabled": true,
      "action": "block",
      "createdAt": "2024-01-10T10:00:00Z",
      "updatedAt": "2024-01-10T10:00:00Z"
    },
    {
      "id": "rule_456",
      "organizationId": "org_abc123",
      "name": "Credit Card Pattern",
      "type": "custom_regex",
      "config": {
        "type": "custom_regex",
        "pattern": "\\b\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}\\b"
      },
      "priority": 90,
      "enabled": true,
      "action": "redact",
      "createdAt": "2024-01-12T14:30:00Z",
      "updatedAt": "2024-01-12T14:30:00Z"
    }
  ]
}

Create Custom Rule

Create a new custom guardrail rule.
Path parameters:
  • organizationId (string, required): Organization ID

Body parameters:
  • name (string, required): Rule name
  • type ('blocked_terms' | 'custom_regex' | 'topic_restriction', required): Rule type
  • config (object, required): Rule configuration (varies by type)
  • priority (number, default: 100): Rule priority (higher = checked first)
  • enabled (boolean, default: true): Enable the rule
  • action ('block' | 'redact' | 'warn' | 'allow', default: "block"): Action to take on violation

Rule Types

Blocked Terms

Block or redact specific words/phrases:
{
  "type": "blocked_terms",
  "config": {
    "type": "blocked_terms",
    "terms": ["badword1", "badword2"],
    "matchType": "contains",
    "caseSensitive": false
  }
}
Match Types:
  • exact: Exact word match
  • contains: Substring match
  • regex: Regular expression pattern
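As a rough illustration of how the three match types differ, here is a local re-implementation sketch. This is not the gateway's actual matcher, just a model of the semantics described above:

```python
import re

def term_matches(term: str, text: str, match_type: str,
                 case_sensitive: bool = False) -> bool:
    """Illustrative matcher for the blocked_terms match types."""
    flags = 0 if case_sensitive else re.IGNORECASE
    if match_type == "exact":
        # Whole-word match only, using word boundaries.
        return bool(re.search(rf"\b{re.escape(term)}\b", text, flags))
    if match_type == "contains":
        if not case_sensitive:
            return term.lower() in text.lower()
        return term in text
    if match_type == "regex":
        # The term is treated as a regular expression pattern.
        return bool(re.search(term, text, flags))
    raise ValueError(f"unknown matchType: {match_type}")

print(term_matches("cat", "concatenate", "contains"))  # True
print(term_matches("cat", "concatenate", "exact"))     # False
```

Note the practical difference: "contains" flags substrings inside longer words, so "exact" is usually safer for short terms.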

Custom Regex

Match content using regex patterns:
{
  "type": "custom_regex",
  "config": {
    "type": "custom_regex",
    "pattern": "\\b\\d{3}-\\d{2}-\\d{4}\\b"
  }
}
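The pattern in the example above matches US SSN-formatted strings. A quick local check (note that the doubled backslashes are JSON escaping; in raw form the pattern is as written below):

```python
import re

# The SSN-style pattern from the custom_regex example, with JSON escaping removed.
pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

print(bool(pattern.search("My SSN is 123-45-6789.")))  # True
print(bool(pattern.search("Order #123456789")))        # False (no hyphens)
```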

Topic Restriction

Block or allow specific topics:
{
  "type": "topic_restriction",
  "config": {
    "type": "topic_restriction",
    "blockedTopics": ["violence", "illegal activities"],
    "allowedTopics": ["technology", "education"]
  }
}
curl -X POST https://api.llmgateway.io/guardrails/rules/org_abc123 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Block Profanity",
    "type": "blocked_terms",
    "config": {
      "type": "blocked_terms",
      "terms": ["badword1", "badword2"],
      "matchType": "contains",
      "caseSensitive": false
    },
    "priority": 100,
    "action": "block"
  }'
{
  "id": "rule_789",
  "organizationId": "org_abc123",
  "name": "Block Profanity",
  "type": "blocked_terms",
  "config": {
    "type": "blocked_terms",
    "terms": ["badword1", "badword2"],
    "matchType": "contains",
    "caseSensitive": false
  },
  "priority": 100,
  "enabled": true,
  "action": "block",
  "createdAt": "2024-01-16T10:00:00Z",
  "updatedAt": "2024-01-16T10:00:00Z"
}
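Since config shape varies by rule type, a client-side sanity check before POSTing can catch malformed payloads early. A sketch mirroring the required fields and enums listed above; the helper and its error strings are ours:

```python
REQUIRED_FIELDS = {"name", "type", "config"}
VALID_TYPES = {"blocked_terms", "custom_regex", "topic_restriction"}
VALID_ACTIONS = {"block", "redact", "warn", "allow"}

def validate_rule_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks well-formed."""
    problems = []
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    if payload.get("type") not in VALID_TYPES:
        problems.append(f"invalid type: {payload.get('type')!r}")
    if "action" in payload and payload["action"] not in VALID_ACTIONS:
        problems.append(f"invalid action: {payload['action']!r}")
    return problems

print(validate_rule_payload({
    "name": "Block Profanity",
    "type": "blocked_terms",
    "config": {"type": "blocked_terms", "terms": ["badword1"]},
}))  # []
```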

Update Custom Rule

Update an existing custom rule.
Path parameters:
  • organizationId (string, required): Organization ID
  • ruleId (string, required): Rule ID
curl -X PATCH https://api.llmgateway.io/guardrails/rules/org_abc123/rule_789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "enabled": false,
    "priority": 110
  }'

Delete Custom Rule

Delete a custom guardrail rule.
Path parameters:
  • organizationId (string, required): Organization ID
  • ruleId (string, required): Rule ID
curl -X DELETE https://api.llmgateway.io/guardrails/rules/org_abc123/rule_789 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "success": true
}

List Violations

Retrieve guardrail violations with filtering and pagination.
Path parameters:
  • organizationId (string, required): Organization ID

Query parameters:
  • cursor (string): Pagination cursor from previous response
  • limit (number, default: 50): Number of results per page (1-100)
  • startDate (string): Filter by start date (ISO 8601 with timezone)
  • endDate (string): Filter by end date (ISO 8601 with timezone)
  • actionTaken ('blocked' | 'redacted' | 'warned'): Filter by action taken
  • ruleId (string): Filter by specific rule ID
curl "https://api.llmgateway.io/guardrails/violations/org_abc123?limit=50&actionTaken=blocked" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "violations": [
    {
      "id": "vio_123",
      "organizationId": "org_abc123",
      "logId": "log_456",
      "ruleId": "system:prompt_injection",
      "ruleName": "Prompt Injection Detection",
      "category": "injection",
      "actionTaken": "blocked",
      "matchedPattern": "ignore previous instructions",
      "matchedContent": "Ignore previous instructions and...",
      "contentHash": "abc123def456",
      "apiKeyId": "key_xyz789",
      "model": "gpt-4o",
      "createdAt": "2024-01-16T10:30:00Z"
    }
  ],
  "pagination": {
    "nextCursor": "vio_124",
    "hasMore": true,
    "limit": 50
  }
}
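Following nextCursor until hasMore is false fetches every page. A sketch of that loop with a stubbed fetch function standing in for the HTTP call:

```python
def fetch_all_violations(fetch_page):
    """Collect all violations by following nextCursor until hasMore is false.

    `fetch_page(cursor)` stands in for a GET to
    /guardrails/violations/{organizationId}?cursor=...
    and must return the JSON shape shown above.
    """
    violations, cursor = [], None
    while True:
        page = fetch_page(cursor)
        violations.extend(page["violations"])
        if not page["pagination"]["hasMore"]:
            return violations
        cursor = page["pagination"]["nextCursor"]

# Stubbed two-page response for illustration:
pages = {
    None: {"violations": [{"id": "vio_123"}],
           "pagination": {"nextCursor": "vio_124", "hasMore": True, "limit": 50}},
    "vio_124": {"violations": [{"id": "vio_124"}],
                "pagination": {"nextCursor": None, "hasMore": False, "limit": 50}},
}
print([v["id"] for v in fetch_all_violations(pages.__getitem__)])
# ['vio_123', 'vio_124']
```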

Get Violation Statistics

Retrieve aggregated violation statistics.
Path parameters:
  • organizationId (string, required): Organization ID

Query parameters:
  • days (number, default: 7): Number of days to analyze (1-90)
curl "https://api.llmgateway.io/guardrails/stats/org_abc123?days=30" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "blocked": 45,
  "redacted": 23,
  "warned": 12,
  "total": 80
}

Test Content

Test content against guardrail rules without logging violations.
Path parameters:
  • organizationId (string, required): Organization ID

Body parameters:
  • content (string, required): Content to test
curl -X POST https://api.llmgateway.io/guardrails/test/org_abc123 \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "This is test content with a badword1 in it"
  }'
{
  "passed": false,
  "blocked": true,
  "violations": [
    {
      "ruleId": "rule_789",
      "ruleName": "Block Profanity",
      "category": "blocked_terms",
      "action": "block",
      "matchedPattern": "badword1",
      "matchedContent": "badword1"
    }
  ],
  "rulesChecked": 5
}
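Because the test endpoint never logs violations, it works well as a pre-deploy gate. A hedged helper that summarizes the response shape shown above (the function and its summary strings are ours):

```python
import json

def summarize_test_result(response_json: str) -> str:
    """Summarize a /guardrails/test response for a pre-deploy check."""
    result = json.loads(response_json)
    if result["passed"]:
        return f"passed ({result['rulesChecked']} rules checked)"
    names = ", ".join(v["ruleName"] for v in result["violations"])
    verb = "blocked" if result["blocked"] else "flagged"
    return f"{verb} by: {names}"

# The example response from above:
sample = '''{
  "passed": false,
  "blocked": true,
  "violations": [{"ruleId": "rule_789", "ruleName": "Block Profanity",
                  "category": "blocked_terms", "action": "block",
                  "matchedPattern": "badword1", "matchedContent": "badword1"}],
  "rulesChecked": 5
}'''
print(summarize_test_result(sample))  # blocked by: Block Profanity
```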

List System Rules

Get information about available system rules.
curl https://api.llmgateway.io/guardrails/system-rules \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"
{
  "rules": [
    {
      "id": "system:prompt_injection",
      "name": "Prompt Injection Detection",
      "category": "injection",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:jailbreak",
      "name": "Jailbreak Prevention",
      "category": "jailbreak",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:pii_detection",
      "name": "PII Detection",
      "category": "pii",
      "defaultEnabled": true,
      "defaultAction": "redact"
    },
    {
      "id": "system:secrets",
      "name": "Secrets Detection",
      "category": "secrets",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:file_types",
      "name": "File Type Restrictions",
      "category": "files",
      "defaultEnabled": true,
      "defaultAction": "block"
    },
    {
      "id": "system:document_leakage",
      "name": "Document Leakage Prevention",
      "category": "document_leakage",
      "defaultEnabled": false,
      "defaultAction": "warn"
    }
  ]
}

Guardrail Actions

Each rule can have one of four actions:

Block

{"action": "block"}
Reject the request entirely. Returns a 400 error. Use for:
  • Security threats (prompt injection, jailbreaks)
  • Policy violations
  • Prohibited content

Redact

{"action": "redact"}
Remove/mask the matched content and continue processing. Use for:
  • PII (emails, phone numbers, SSNs)
  • Sensitive data
  • Credit card numbers

Warn

{"action": "warn"}
Log the violation but allow the request to proceed. Use for:
  • Monitoring suspicious patterns
  • Low-risk content
  • Testing new rules

Allow

{"action": "allow"}
Explicitly allow content (for whitelisting). Use for:
  • Overriding other rules
  • Approved exceptions
  • False positive handling
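The docs state that higher-priority rules are checked first and that "allow" can override other rules. An illustrative evaluator under those assumptions; the gateway's actual precedence logic may differ:

```python
def evaluate(content: str, rules: list) -> str:
    """Return the action of the first matching rule, highest priority first.

    Each rule here is a dict: {"priority": int, "action": str, "matches": callable}.
    A matching 'allow' rule at higher priority short-circuits lower-priority
    blocks (whitelisting), per the description above; this precedence is an
    assumption for illustration.
    """
    for rule in sorted(rules, key=lambda r: r["priority"], reverse=True):
        if rule["matches"](content):
            return rule["action"]
    return "allow"  # no rule matched

rules = [
    {"priority": 110, "action": "allow", "matches": lambda t: "approved phrase" in t},
    {"priority": 100, "action": "block", "matches": lambda t: "badword1" in t},
]
print(evaluate("an approved phrase with badword1", rules))  # allow
print(evaluate("just badword1", rules))                     # block
print(evaluate("clean text", rules))                        # allow
```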

Error Responses

Guardrail endpoints return an error if the organization is not on an Enterprise plan:
{
  "message": "Guardrails require an enterprise plan"
}

Best Practices

Start Conservative
  • Enable core system rules (prompt injection, jailbreak, secrets)
  • Use "warn" action initially to monitor false positives
  • Gradually tighten rules based on violation patterns
Layer Your Defense
  • Combine system rules with custom rules
  • Use both blocking and redaction
  • Set appropriate priorities for rule evaluation
Monitor and Iterate
  • Review violation logs regularly
  • Test new rules before enabling
  • Adjust actions based on false positive rates
  • Use test endpoint before deploying changes
Performance Impact
  • Guardrails add 50-200ms latency per request
  • More rules = higher latency
  • Prioritize critical rules (higher priority number)
  • Disable unused rules
