Overview
The Secure MCP Gateway provides guardrails integrations for multiple AI frameworks and platforms beyond Claude Desktop and Cursor. Each integration follows the same pattern: hook-based guardrails that validate inputs and outputs.
CrewAI
Protect your CrewAI multi-agent systems with guardrails for LLM calls and tool executions.
Features
- before_llm_call: Block unsafe prompts before LLM requests
- after_llm_call: Audit LLM responses
- before_tool_call: Validate tool inputs
- after_tool_call: Audit tool outputs
Installation
Configuration
Usage
Hook Events
| Hook | When It Runs | Purpose |
|---|---|---|
| before_llm_call | Before LLM request | Block unsafe prompts |
| after_llm_call | After LLM response | Audit outputs |
| before_tool_call | Before tool execution | Validate tool inputs |
| after_tool_call | After tool execution | Audit tool outputs |
Logs
Logs are written to ~/crewai/hooks_logs/:
- before_llm_call.jsonl
- after_llm_call.jsonl
- before_tool_call.jsonl
- after_tool_call.jsonl
- security_alerts.jsonl
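The hooks above share a simple shape: check the payload, log the decision, and block or allow. A minimal framework-neutral sketch of a before_llm_call guard follows; the detect_injection stub stands in for a real Enkrypt detector call, and the CrewAI hook registration itself is omitted.

```python
import json, time
from pathlib import Path

# Toy heuristic standing in for the injection_attack detector.
BLOCK_PATTERNS = ("ignore previous instructions", "reveal your system prompt")

def detect_injection(text: str) -> bool:
    """Stub for a guardrails detector call."""
    return any(p in text.lower() for p in BLOCK_PATTERNS)

def before_llm_call(prompt: str, log_dir: str = "~/crewai/hooks_logs") -> bool:
    """Return True if the prompt may proceed; append the decision as JSONL."""
    blocked = detect_injection(prompt)
    path = Path(log_dir).expanduser()
    path.mkdir(parents=True, exist_ok=True)
    with open(path / "before_llm_call.jsonl", "a") as f:
        f.write(json.dumps({"ts": time.time(), "blocked": blocked}) + "\n")
    return not blocked
```

The after_* hooks follow the same pattern but only log, since they run after the fact.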
Kiro IDE
Kiro IDE hooks integration for prompt validation, agent response auditing, and file security scanning.
Features
- PromptSubmit: Block unsafe prompts
- AgentStop: Audit agent responses
- FileSave: Scan saved files for secrets/PII
- FileCreate: Validate new files
- Manual: On-demand security scanning
Installation
Configuration
Create Kiro hook files in the .kiro/hooks/ directory:
before-prompt-guardrails.kiro.hook
Hook Types
| Trigger Type | When It Fires |
|---|---|
| promptSubmit | Before user prompt is sent |
| agentStop | After agent completes |
| fileEdited | When file is saved |
| fileCreated | When new file is created |
| userTriggered | Manually triggered |
Logs
Logs are written to ~/kiro/hooks_logs/:
- PromptSubmit.jsonl
- AgentStop.jsonl
- FileSave.jsonl
- FileCreate.jsonl
- security_alerts.jsonl
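To show the kind of check a FileSave or FileCreate hook performs, here is a small secrets/PII scan sketch. The patterns are illustrative only; the gateway's actual pii detector is service-side and far more thorough.

```python
import re

# Illustrative patterns, not the gateway's real detector set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scan_file_content(text: str) -> list[str]:
    """Return the names of secret/PII patterns found in a saved file."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

A hook would run this on the saved file's content and write any hits to FileSave.jsonl and security_alerts.jsonl.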
Strands Agents
Universal security guardrails for Strands Agents that work with ANY model provider, not just Amazon Bedrock.
Why Enkrypt Guardrails?
Strands Agents SDK has native guardrails support, but only for Amazon Bedrock. Enkrypt Guardrails work with OpenAI, Anthropic, Ollama, LiteLLM, and any other provider.
Installation
Configuration
Usage
Hook Events
| Event | Purpose | Action |
|---|---|---|
| MessageAddedEvent | Check user prompts & responses | Block/Log |
| BeforeToolCallEvent | Validate tool inputs | Block (event.cancel_tool) |
| AfterToolCallEvent | Audit tool outputs | Log/Warn |
| AfterModelCallEvent | Monitor model responses | Log |
Usage Modes
Logs
Logs are written to ~/strands/guardrails_logs/:
- MessageAdded.jsonl
- BeforeToolCall.jsonl
- AfterToolCall.jsonl
- security_alerts.jsonl
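The table's blocking mechanism (setting cancel_tool on the event) can be sketched as follows. The event class below is a minimal stand-in for the SDK's type, modeling only the fields named above, and the hypothetical denylist plus hook-registry wiring are assumptions.

```python
class BeforeToolCallEvent:
    """Minimal stand-in for the SDK's event type (not the real class)."""
    def __init__(self, tool_name: str, tool_input: dict):
        self.tool_name = tool_name
        self.tool_input = tool_input
        self.cancel_tool = None  # set to a reason string to block the call

DENIED_TOOLS = {"shell_exec"}  # hypothetical denylist

def on_before_tool_call(event: BeforeToolCallEvent) -> None:
    """Cancel the tool call when guardrails flag the tool or its input."""
    if event.tool_name in DENIED_TOOLS or "rm -rf" in str(event.tool_input):
        event.cancel_tool = "Blocked by guardrails"
```

The audit-only events (AfterToolCallEvent, AfterModelCallEvent) would log instead of setting a cancel field.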
Vercel AI SDK
Middleware for Vercel AI SDK that protects AI applications with comprehensive guardrails.
Features
- Prompt injection detection: Block malicious prompts
- PII/secrets detection: Prevent sensitive data leaks
- Toxicity filtering: Filter harmful content
- Tool call protection: Monitor tool inputs/outputs
Installation
Configuration
Basic Usage
Hook Points
| Hook | When It Fires | What It Does |
|---|---|---|
| transformParams | Before model call | Scans input prompt/messages |
| wrapGenerate | After generateText | Scans generated output |
| wrapStream | During streamText | Monitors streaming output |
| prepareStep | Before each agent step | Validates step inputs |
| onStepFinish | After each agent step | Audits step outputs |
| onToolCall | When tools are called | Validates tool inputs/outputs |
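The trickiest hook point is wrapStream, which must scan output incrementally as chunks arrive. The real middleware is TypeScript; this is a framework-neutral Python sketch of the idea, with a flagged() predicate standing in for the detectors.

```python
def monitor_stream(chunks, flagged):
    """Scan accumulated output as chunks arrive; stop and redact on a hit."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if flagged(buffer):
            # A detector fired mid-stream: emit a redaction marker and stop.
            yield "[output blocked by guardrails]"
            return
        yield chunk
```

Scanning the accumulated buffer rather than each chunk in isolation matters, because a secret or injection payload can straddle a chunk boundary.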
Tool Protection
Logs
Logs are written to ~/vercel-ai-sdk/guardrails_logs/:
- combined_audit.jsonl
- security_alerts.jsonl
- enkrypt_api_response.jsonl
OpenAI Agents SDK
Comprehensive security guardrails for the OpenAI Agents SDK with RunHooksBase integration.
Features
- Prompt injection detection: Block malicious prompts
- PII/secrets detection: Prevent sensitive data leaks
- Tool call monitoring: Audit tool inputs/outputs
- Agent handoff tracking: Monitor multi-agent workflows
Installation
Configuration
Basic Usage
Hook Events
| Hook | Description | Can Block |
|---|---|---|
| on_agent_start | Before agent execution | Yes |
| on_agent_end | After agent output | No (audit) |
| on_llm_start | Before LLM call | Yes |
| on_llm_end | After LLM response | No (audit) |
| on_tool_start | Before tool execution | Yes |
| on_tool_end | After tool execution | No (audit) |
| on_handoff | When agent handoff occurs | No (audit) |
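The blocking vs. audit split in the table can be sketched as a hooks class. The class below mirrors the hook names above but is a stand-in, not a subclass of the SDK's actual RunHooksBase; the real hook signatures take additional context and agent arguments.

```python
class GuardrailHooks:
    """Illustrative hooks: blocking hooks raise, audit hooks only record."""
    def __init__(self):
        self.audit = []

    async def on_llm_start(self, prompt: str) -> None:
        # Blocking hook: raise to stop the run before the LLM call.
        if "ignore previous instructions" in prompt.lower():
            raise PermissionError("Blocked by guardrails: injection_attack")

    async def on_tool_end(self, tool_name: str, output: str) -> None:
        # Audit-only hook: record, never block.
        self.audit.append({"tool": tool_name, "output_len": len(output)})
```

Hooks marked "Yes" in the Can Block column follow the raising pattern; the rest follow the append-and-continue pattern.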
Multi-Agent Support
Logs
Logs are written to ~/openai_agents/guardrails_logs/:
- on_agent_start.jsonl
- on_llm_end.jsonl
- on_tool_start.jsonl
- combined_audit.jsonl
- security_alerts.jsonl
Common Configuration
All integrations use the same configuration format:
guardrails_config.json
Environment Variables
| Variable | Description |
|---|---|
| ENKRYPT_API_KEY | Your Enkrypt API key |
| ENKRYPT_API_URL | API endpoint URL |
| <CLIENT>_HOOKS_LOG_DIR | Log directory path |
| <CLIENT>_HOOKS_LOG_RETENTION_DAYS | Log retention days |
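For example, a CrewAI setup might export these variables (the <CLIENT> prefix expands per the table; the API URL value here is a placeholder, not the real endpoint):

```shell
export ENKRYPT_API_KEY="your-api-key"
export ENKRYPT_API_URL="https://example.invalid/guardrails"  # placeholder endpoint
export CREWAI_HOOKS_LOG_DIR="$HOME/crewai/hooks_logs"
export CREWAI_HOOKS_LOG_RETENTION_DAYS=30
```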
Available Detectors
| Detector | Description | Use Case |
|---|---|---|
| injection_attack | Prompt injection attempts | Block jailbreak attempts |
| pii | Personal info & secrets | Prevent data leaks |
| toxicity | Harmful content | Content moderation |
| nsfw | Adult content | Content filtering |
| keyword_detector | Banned keywords | Custom blocking |
| policy_violation | Custom policies | Business rules |
| bias | Biased content | Fair AI |
| sponge_attack | Resource exhaustion | DoS prevention |
| topic_detector | Off-topic content | Stay on task |
Testing
All integrations include test suites.
Next Steps
- Configure Policies: Create custom guardrail policies
- View Metrics: Monitor guardrails performance
- Audit Logs: Review security events
- Add Detectors: Configure detection rules