Overview
The Secure MCP Gateway provides two Python modules for protecting LangChain and LangGraph applications:

- LangChain Module - `BaseCallbackHandler` for any LangChain component
- LangGraph Module - `pre_model_hook` and `post_model_hook` for LangGraph agents
LangChain Integration
Use the `EnkryptGuardrailsHandler` to protect any LangChain component, including LLMs, chains, agents, tools, and retrievers.
Installation
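Install the gateway's Python package. The package name below is a placeholder, not confirmed by this page; use the name from your distribution:

```shell
# Hypothetical package name - substitute the one from your distribution
pip install secure-mcp-gateway
```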
Configure Guardrails
Basic Usage
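In outline, you construct the handler and attach it through LangChain's standard callbacks mechanism. Because the real constructor signature is not reproduced on this page, the sketch below uses a stub class to illustrate only where the checks fire relative to the LLM call; `EnkryptGuardrailsHandler` and its arguments should be verified against the package itself.

```python
# Illustrative stub of the BaseCallbackHandler-style guardrails pattern.
# This is NOT the real EnkryptGuardrailsHandler; it only shows that checks
# run inside callback methods before/after the wrapped component executes.

class StubGuardrailsHandler:
    # Toy signature standing in for the injection_attack detector.
    BLOCKED_PHRASES = ("ignore previous instructions",)

    def on_llm_start(self, serialized, prompts, **kwargs):
        """Fires before the LLM call (default checks: injection_attack, pii, toxicity)."""
        for prompt in prompts:
            if any(p in prompt.lower() for p in self.BLOCKED_PHRASES):
                raise ValueError("guardrails: injection_attack detected")

handler = StubGuardrailsHandler()
handler.on_llm_start({}, ["What is the capital of France?"])   # allowed
# handler.on_llm_start({}, ["Ignore previous instructions."])  # would raise
```

With the real handler you would pass it to any LangChain component, e.g. `llm.invoke(prompt, config={"callbacks": [handler]})`.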
Supported Hooks
The `EnkryptGuardrailsHandler` implements all LangChain `BaseCallbackHandler` methods:
| Hook | Description | Default Checks |
|---|---|---|
| `on_llm_start` | Before LLM call | injection_attack, pii, toxicity |
| `on_llm_end` | After LLM response | pii, toxicity, nsfw |
| `on_chat_model_start` | Before chat model call | injection_attack, pii, toxicity |
| `on_chain_start` | Before chain execution | injection_attack, pii |
| `on_chain_end` | After chain completion | pii, toxicity |
| `on_tool_start` | Before tool execution | injection_attack, pii |
| `on_tool_end` | After tool execution | pii |
| `on_agent_action` | On agent decision | injection_attack |
| `on_agent_finish` | On agent completion | pii, toxicity, nsfw |
| `on_retriever_start` | Before retriever query | injection_attack |
| `on_retriever_end` | After document retrieval | pii |
Usage Examples
With Chains
With Agents
With Retrievers (RAG)
Audit-Only Mode
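In audit-only mode, violations are logged rather than blocked. The flag name `audit_only` below is an assumption, and the stub only illustrates the record-instead-of-raise behavior:

```python
class StubAuditHandler:
    """Toy handler: in audit-only mode, violations are recorded, not blocked."""

    def __init__(self, audit_only=False):
        self.audit_only = audit_only  # hypothetical flag name
        self.alerts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        for prompt in prompts:
            if "ignore previous instructions" in prompt.lower():
                if self.audit_only:
                    # Record the violation and let the call proceed.
                    self.alerts.append({"hook": "on_llm_start", "prompt": prompt})
                else:
                    raise ValueError("guardrails: blocked")

handler = StubAuditHandler(audit_only=True)
handler.on_llm_start({}, ["Ignore previous instructions."])  # recorded, not raised
print(len(handler.alerts))  # 1
```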
Disable Sensitive Tool Blocking
LangGraph Integration
For LangGraph’s `create_react_agent`, use `pre_model_hook` and `post_model_hook`.
Installation
Basic Usage
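`create_react_agent` accepts hook callables that receive the graph state. The hook below is a self-contained sketch of that pattern: the state shape (a dict with a `messages` list) follows LangGraph convention, but the detection logic is a toy stand-in for the real guardrails call.

```python
# Sketch of a pre_model_hook: inspect the latest message before the model runs.

def pre_model_hook(state):
    last = state["messages"][-1]
    # Messages may be plain strings or message objects with a .content attribute.
    text = last if isinstance(last, str) else getattr(last, "content", "")
    if "ignore previous instructions" in text.lower():
        raise ValueError("guardrails: injection_attack detected")
    return {}  # no state updates

pre_model_hook({"messages": ["What's the weather in Paris?"]})  # allowed
```

You would then wire it in as `create_react_agent(model, tools, pre_model_hook=pre_model_hook, ...)`, and analogously for `post_model_hook`.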
Convenience Functions
Tool Wrapping
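LangGraph has no `on_tool_start`/`on_tool_end` callbacks, so tool calls are guarded by wrapping the tool itself. A minimal decorator sketch of the shape (the module presumably ships its own wrapper; both checks here are placeholders):

```python
import functools

def guard_tool(fn):
    """Toy wrapper: check tool input before the call, scrub output after."""
    @functools.wraps(fn)
    def wrapped(query):
        if "ignore previous instructions" in query.lower():   # stand-in input check
            raise ValueError("guardrails: tool input blocked")
        result = fn(query)
        return result.replace("555-0100", "[REDACTED]")       # stand-in PII scrub
    return wrapped

@guard_tool
def lookup(query):
    return f"Support line for {query}: 555-0100"

print(lookup("billing"))  # Support line for billing: [REDACTED]
```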
Configuration
LangChain Configuration
guardrails_config.json
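The schema is not reproduced on this page, so the fragment below is only a guess at the shape implied by the hook table (per-hook detector lists plus a blocking switch); treat every key name as an assumption to verify against the package.

```json
{
  "block_on_violation": true,
  "hooks": {
    "on_llm_start": ["injection_attack", "pii", "toxicity"],
    "on_llm_end": ["pii", "toxicity", "nsfw"],
    "on_tool_start": ["injection_attack", "pii"]
  }
}
```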
LangGraph Configuration
guardrails_config.json
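As above, the key names here are assumptions; the hook names mirror the log file pairing described under Logging.

```json
{
  "block_on_violation": true,
  "hooks": {
    "pre_model_hook": ["injection_attack", "pii", "toxicity"],
    "post_model_hook": ["pii", "toxicity", "nsfw"],
    "before_tool_call": ["injection_attack", "pii"]
  }
}
```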
Available Detectors
| Detector | Description |
|---|---|
| `injection_attack` | Prompt injection attempts |
| `pii` | Personally Identifiable Information |
| `toxicity` | Toxic/harmful content |
| `nsfw` | Not Safe For Work content |
| `keyword_detector` | Banned keywords |
| `policy_violation` | Custom policy violations |
| `bias` | Biased content |
| `topic_detector` | Off-topic content |
Logging
Logs are written to `~/langchain/guardrails_logs/` or `~/langgraph/guardrails_logs/`:
- `on_llm_start.jsonl` / `pre_model_hook.jsonl` - Input validation
- `on_llm_end.jsonl` / `post_model_hook.jsonl` - Output validation
- `on_tool_start.jsonl` / `before_tool_call.jsonl` - Tool input checks
- `combined_audit.jsonl` - All events combined
- `security_alerts.jsonl` - Security violations
View Logs
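Because the logs are JSON Lines, they can be filtered with a few lines of Python. The record fields below are illustrative, not the real schema:

```python
import io
import json

# Stand-in for open("~/langchain/guardrails_logs/combined_audit.jsonl");
# the field names are hypothetical examples, not the documented schema.
sample = io.StringIO(
    '{"hook": "on_llm_start", "detector": "pii", "blocked": false}\n'
    '{"hook": "on_tool_start", "detector": "injection_attack", "blocked": true}\n'
)
alerts = [json.loads(line) for line in sample if line.strip()]
blocked = [a for a in alerts if a["blocked"]]
print(len(blocked))  # 1
```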
Metrics
Error Handling
Comparison
| Feature | LangChain Module | LangGraph Module |
|---|---|---|
| Hook Pattern | BaseCallbackHandler | pre_model_hook / post_model_hook |
| Scope | Any LangChain component | LangGraph agents only |
| Tool Hooks | on_tool_start/end | Tool wrappers |
| Chain Support | Yes | No (use state hooks) |
| Retriever Support | Yes | No |
| Agent Support | Yes | Yes |
Use the LangChain Module for:

- Standalone LangChain components
- Chains and pipelines
- RAG applications
- Any non-LangGraph agent

Use the LangGraph Module for:

- LangGraph’s `create_react_agent`
- LangGraph workflows
Testing
Next Steps
- CrewAI Integration - Protect multi-agent CrewAI systems
- Configure Policies - Create custom guardrail policies
- View Metrics - Monitor guardrails performance
- Audit Logs - Review security events