
Overview

The Secure MCP Gateway provides guardrails integrations for multiple AI frameworks and platforms beyond Claude Desktop and Cursor. Each integration follows the same pattern: hook-based guardrails that validate inputs and outputs.

CrewAI

Protect your CrewAI multi-agent systems with guardrails for LLM calls and tool executions.

Features

  • before_llm_call: Block unsafe prompts before LLM requests
  • after_llm_call: Audit LLM responses
  • before_tool_call: Validate tool inputs
  • after_tool_call: Audit tool outputs

Installation

cd hooks/crewai
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r hooks/requirements.txt

Configuration

cp hooks/guardrails_config_example.json hooks/guardrails_config.json
export ENKRYPT_API_KEY="your-api-key"

Usage

from crewai import Agent, Task, Crew
from enkrypt_guardrails import EnkryptGuardrailsContext

# Define your agents
researcher = Agent(
    role='Researcher',
    goal='Find accurate information',
    backstory='Expert researcher'
)

# Define a task for the agent
research_task = Task(
    description='Research the given topic: {topic}',
    expected_output='A short summary of findings',
    agent=researcher
)

# Run with guardrails protection
with EnkryptGuardrailsContext():
    crew = Crew(agents=[researcher], tasks=[research_task])
    result = crew.kickoff(inputs={'topic': 'AI Safety'})

Hook Events

| Hook | When It Runs | Purpose |
| --- | --- | --- |
| before_llm_call | Before LLM request | Block unsafe prompts |
| after_llm_call | After LLM response | Audit outputs |
| before_tool_call | Before tool execution | Validate tool inputs |
| after_tool_call | After tool execution | Audit tool outputs |

Logs

Logs are written to ~/crewai/hooks_logs/:
  • before_llm_call.jsonl
  • after_llm_call.jsonl
  • before_tool_call.jsonl
  • after_tool_call.jsonl
  • security_alerts.jsonl
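
Each log file is JSON Lines: one JSON object per event. A minimal sketch for loading a log into Python for analysis (the field names in the sample line are illustrative; inspect your own log lines for the exact schema):

```python
import json
from pathlib import Path

def read_events(log_path):
    """Parse a JSONL guardrails log into a list of event dicts."""
    path = Path(log_path).expanduser()
    if not path.exists():
        return []
    with path.open() as f:
        return [json.loads(line) for line in f if line.strip()]

# Each line is a standalone JSON object, for example:
sample = '{"hook": "before_llm_call", "violation": "injection_attack"}'
event = json.loads(sample)
print(event["violation"])  # injection_attack
```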

Kiro IDE

Kiro IDE hooks integration for prompt validation, agent response auditing, and file security scanning.

Features

  • PromptSubmit: Block unsafe prompts
  • AgentStop: Audit agent responses
  • FileSave: Scan saved files for secrets/PII
  • FileCreate: Validate new files
  • Manual: On-demand security scanning

Installation

cd hooks/kiro
python -m venv venv
source venv/bin/activate
pip install -r hooks/requirements.txt

Configuration

Create Kiro hook files in the .kiro/hooks/ directory:
before-prompt-guardrails.kiro.hook
{
  "enabled": true,
  "name": "Before Prompt Guardrails",
  "description": "Validates user prompts using Enkrypt AI Guardrails",
  "version": "1",
  "when": {
    "type": "promptSubmit"
  },
  "then": {
    "type": "runCommand",
    "command": "python hooks/kiro/hooks/prompt_submit.py"
  },
  "workspaceFolderName": "YOUR_WORKSPACE_NAME",
  "shortName": "before-prompt-guardrails"
}
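
Hooks for the other trigger types follow the same shape. For example, a file-save scanner might look like this (the script path and hook names here are illustrative; point `command` at the matching script under hooks/kiro/hooks/):

```json
{
  "enabled": true,
  "name": "File Save Guardrails",
  "description": "Scans saved files for secrets and PII",
  "version": "1",
  "when": {
    "type": "fileEdited"
  },
  "then": {
    "type": "runCommand",
    "command": "python hooks/kiro/hooks/file_save.py"
  },
  "workspaceFolderName": "YOUR_WORKSPACE_NAME",
  "shortName": "file-save-guardrails"
}
```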

Hook Types

| Trigger Type | When It Fires |
| --- | --- |
| promptSubmit | Before user prompt is sent |
| agentStop | After agent completes |
| fileEdited | When file is saved |
| fileCreated | When new file is created |
| userTriggered | Manually triggered |

Logs

Logs are written to ~/kiro/hooks_logs/:
  • PromptSubmit.jsonl
  • AgentStop.jsonl
  • FileSave.jsonl
  • FileCreate.jsonl
  • security_alerts.jsonl

Strands Agents

Universal security guardrails for Strands Agents that work with any model provider, not just Amazon Bedrock.

Why Enkrypt Guardrails?

Strands Agents SDK has native guardrails support, but only for Amazon Bedrock. Enkrypt Guardrails work with OpenAI, Anthropic, Ollama, LiteLLM, and any other provider.

Installation

cd hooks/strands
pip install -r requirements.txt

Configuration

cp guardrails_config_example.json guardrails_config.json
export ENKRYPT_API_KEY="your-api-key"

Usage

from strands import Agent
from enkrypt_guardrails_hook import EnkryptGuardrailsHook

# Create a protected agent
agent = Agent(
    system_prompt="You are a helpful assistant.",
    hooks=[EnkryptGuardrailsHook()]
)

# The agent is now protected!
response = agent("What is the capital of France?")

Hook Events

| Event | Purpose | Action |
| --- | --- | --- |
| MessageAddedEvent | Check user prompts & responses | Block/Log |
| BeforeToolCallEvent | Validate tool inputs | Block (event.cancel_tool) |
| AfterToolCallEvent | Audit tool outputs | Log/Warn |
| AfterModelCallEvent | Monitor model responses | Log |

Usage Modes

In addition to the standard EnkryptGuardrailsHook, a blocking variant is available that stops execution when a violation is detected instead of only logging it:

from enkrypt_guardrails_hook import EnkryptGuardrailsBlockingHook

agent = Agent(hooks=[EnkryptGuardrailsBlockingHook()])

Logs

Logs are written to ~/strands/guardrails_logs/:
  • MessageAdded.jsonl
  • BeforeToolCall.jsonl
  • AfterToolCall.jsonl
  • security_alerts.jsonl

Vercel AI SDK

Middleware for the Vercel AI SDK that protects AI applications with comprehensive guardrails.

Features

  • Prompt injection detection: Block malicious prompts
  • PII/secrets detection: Prevent sensitive data leaks
  • Toxicity filtering: Filter harmful content
  • Tool call protection: Monitor tool inputs/outputs

Installation

npm install @enkrypt-ai/vercel-ai-sdk ai
# or
pnpm add @enkrypt-ai/vercel-ai-sdk ai

Configuration

cp guardrails-config.example.json guardrails-config.json
export ENKRYPT_API_KEY="your-api-key"

Basic Usage

import { generateText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createEnkryptMiddleware } from '@enkrypt-ai/vercel-ai-sdk';

// Create a protected model
const protectedModel = wrapLanguageModel({
  model: openai('gpt-4'),
  middleware: createEnkryptMiddleware({
    blockOnViolation: true,
  }),
});

// Use as normal - inputs are automatically scanned
const { text } = await generateText({
  model: protectedModel,
  prompt: 'What is the weather in New York?',
});

Hook Points

| Hook | When It Fires | What It Does |
| --- | --- | --- |
| transformParams | Before model call | Scans input prompt/messages |
| wrapGenerate | After generateText | Scans generated output |
| wrapStream | During streamText | Monitors streaming output |
| prepareStep | Before each agent step | Validates step inputs |
| onStepFinish | After each agent step | Audits step outputs |
| onToolCall | When tools are called | Validates tool inputs/outputs |

Tool Protection

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { wrapToolWithGuardrails } from '@enkrypt-ai/vercel-ai-sdk';

const weatherTool = tool({
  description: 'Get weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    return { temperature: 72, conditions: 'sunny' };
  },
});

const protectedWeatherTool = wrapToolWithGuardrails(weatherTool, {
  checkInputs: true,
  checkOutputs: true,
  blockOnViolation: true,
});

// Pass the protected tool to the model like any other tool
const { text } = await generateText({
  model: openai('gpt-4'),
  tools: { weather: protectedWeatherTool },
  prompt: 'What is the weather in Paris?',
});

Logs

Logs are written to ~/vercel-ai-sdk/guardrails_logs/:
  • combined_audit.jsonl
  • security_alerts.jsonl
  • enkrypt_api_response.jsonl

OpenAI Agents SDK

Comprehensive security guardrails for the OpenAI Agents SDK with RunHooksBase integration.

Features

  • Prompt injection detection: Block malicious prompts
  • PII/secrets detection: Prevent sensitive data leaks
  • Tool call monitoring: Audit tool inputs/outputs
  • Agent handoff tracking: Monitor multi-agent workflows

Installation

pip install openai-agents requests
cp -r hooks/openai /path/to/your/project/

Configuration

cd /path/to/your/project/openai
cp guardrails_config_example.json guardrails_config.json
# Edit with your API key

Basic Usage

import asyncio
from agents import Agent, Runner
from enkrypt_guardrails_hook import EnkryptRunHooks

async def main():
    # Create hooks instance
    hooks = EnkryptRunHooks(
        block_on_violation=True,
        log_only_mode=False,
    )

    # Create your agent
    agent = Agent(
        name="Secure Assistant",
        instructions="You are a helpful assistant."
    )

    # Run with guardrails protection
    result = await Runner.run(
        agent,
        hooks=hooks,
        input="What is the capital of France?"
    )

    print(result.final_output)

asyncio.run(main())

Hook Events

| Hook | Description | Can Block |
| --- | --- | --- |
| on_agent_start | Before agent execution | Yes |
| on_agent_end | After agent output | No (audit) |
| on_llm_start | Before LLM call | Yes |
| on_llm_end | After LLM response | No (audit) |
| on_tool_start | Before tool execution | Yes |
| on_tool_end | After tool execution | No (audit) |
| on_handoff | When agent handoff occurs | No (audit) |

Multi-Agent Support

import asyncio
from agents import Agent, Runner
from enkrypt_guardrails_hook import EnkryptRunHooks

# Create specialized agents
math_agent = Agent(name="Math Agent", instructions="...")
writer_agent = Agent(name="Writer Agent", instructions="...")

# Router agent with handoffs
router = Agent(
    name="Router",
    instructions="Route to appropriate specialist",
    handoffs=[math_agent, writer_agent]
)

# Guardrails monitor all agents and handoffs
async def main():
    hooks = EnkryptRunHooks()
    result = await Runner.run(router, hooks=hooks, input="Calculate 5 + 3")
    print(result.final_output)

asyncio.run(main())

Logs

Logs are written to ~/openai_agents/guardrails_logs/:
  • on_agent_start.jsonl
  • on_llm_end.jsonl
  • on_tool_start.jsonl
  • combined_audit.jsonl
  • security_alerts.jsonl

Common Configuration

All integrations use the same configuration format:
guardrails_config.json
{
  "enkrypt_api": {
    "url": "https://api.enkryptai.com/guardrails/policy/detect",
    "api_key": "YOUR_ENKRYPT_API_KEY",
    "ssl_verify": true,
    "timeout": 15,
    "fail_silently": true
  },
  "<hook_name>": {
    "enabled": true,
    "guardrail_name": "Sample Airline Guardrail",
    "block": ["injection_attack", "pii", "toxicity"]
  }
}
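
As a sketch, the config can be loaded with the environment variable taking precedence over the key stored on disk (an illustrative helper, not part of the shipped hooks; each integration loads its own config for you):

```python
import json
import os

def load_guardrails_config(path="guardrails_config.json"):
    """Load the shared config file, preferring ENKRYPT_API_KEY from
    the environment over the api_key stored in the file."""
    with open(path) as f:
        config = json.load(f)
    env_key = os.environ.get("ENKRYPT_API_KEY")
    if env_key:
        config.setdefault("enkrypt_api", {})["api_key"] = env_key
    return config
```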

Environment Variables

| Variable | Description |
| --- | --- |
| ENKRYPT_API_KEY | Your Enkrypt API key |
| ENKRYPT_API_URL | API endpoint URL |
| <CLIENT>_HOOKS_LOG_DIR | Log directory path |
| <CLIENT>_HOOKS_LOG_RETENTION_DAYS | Log retention days |
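
For example, for the CrewAI hooks (the CREWAI_ prefix is an assumption based on the <CLIENT> placeholder above; check your integration's README for the exact variable names):

```shell
# Use a custom log directory and keep logs for 30 days
export ENKRYPT_API_KEY="your-api-key"
export CREWAI_HOOKS_LOG_DIR="$HOME/crewai/hooks_logs"
export CREWAI_HOOKS_LOG_RETENTION_DAYS=30
```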

Available Detectors

| Detector | Description | Use Case |
| --- | --- | --- |
| injection_attack | Prompt injection attempts | Block jailbreak attempts |
| pii | Personal info & secrets | Prevent data leaks |
| toxicity | Harmful content | Content moderation |
| nsfw | Adult content | Content filtering |
| keyword_detector | Banned keywords | Custom blocking |
| policy_violation | Custom policies | Business rules |
| bias | Biased content | Fair AI |
| sponge_attack | Resource exhaustion | DoS prevention |
| topic_detector | Off-topic content | Stay on task |

Testing

All integrations include test suites:
cd hooks/<client>
pip install pytest
pytest tests/ -v

Next Steps

  • Configure Policies: Create custom guardrail policies
  • View Metrics: Monitor guardrails performance
  • Audit Logs: Review security events
  • Add Detectors: Configure detection rules
