Guardrails enable you to validate and control agent inputs and outputs. They can block harmful content, enforce policies, or validate responses before they’re returned to users.

BaseGuardrail

The BaseGuardrail class is the abstract base for all guardrail implementations.
from agno.guardrails import BaseGuardrail
from agno.run.agent import RunInput

class MyGuardrail(BaseGuardrail):
    def check(self, run_input: RunInput) -> None:
        """Perform sync guardrail check."""
        # Raise exception to block execution
        if "bad_word" in run_input.message:
            raise ValueError("Blocked: inappropriate content")
    
    async def async_check(self, run_input: RunInput) -> None:
        """Perform async guardrail check."""
        if "bad_word" in run_input.message:
            raise ValueError("Blocked: inappropriate content")

Implementation

check()

Synchronous guardrail check. Raise an exception to block the run; return normally to allow it to proceed.
def check(self, run_input: RunInput) -> None:
    """Validate input. Raise exception to block."""
    pass

async_check()

Asynchronous guardrail check, used when the agent runs asynchronously. Implement it even if it simply delegates to check().
async def async_check(self, run_input: RunInput) -> None:
    """Async validate input. Raise exception to block."""
    pass
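To make the contract concrete, here is a framework-free sketch: `FakeRunInput` and `BlocklistGuardrail` are stand-ins (not the agno classes) showing that a raised exception is what blocks execution, and that a pure string check can simply delegate its async path to the sync one.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class FakeRunInput:  # stand-in for agno.run.agent.RunInput
    message: str


class BlocklistGuardrail:
    """Raises ValueError when the input contains a blocked term."""

    BLOCKED = {"bad_word"}

    def check(self, run_input: FakeRunInput) -> None:
        if any(term in run_input.message for term in self.BLOCKED):
            raise ValueError("Blocked: inappropriate content")

    async def async_check(self, run_input: FakeRunInput) -> None:
        # Pure string matching does no I/O, so reuse the sync path
        self.check(run_input)


guard = BlocklistGuardrail()
guard.check(FakeRunInput(message="hello"))  # passes silently
try:
    asyncio.run(guard.async_check(FakeRunInput(message="a bad_word here")))
except ValueError as e:
    print(e)  # Blocked: inappropriate content
```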

Usage with Agents

Guardrails can be added as pre_hooks (run before the agent processes input) or post_hooks (run before the response is returned):
from agno.agent import Agent
from agno.guardrails import BaseGuardrail
from agno.models.openai import OpenAIChat
from agno.run.agent import RunInput

class ContentFilter(BaseGuardrail):
    def check(self, run_input: RunInput) -> None:
        if self._is_inappropriate(run_input.message):
            raise ValueError("Content blocked by filter")

    async def async_check(self, run_input: RunInput) -> None:
        if self._is_inappropriate(run_input.message):
            raise ValueError("Content blocked by filter")

    def _is_inappropriate(self, text: str) -> bool:
        # Your filtering logic
        return False

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    pre_hooks=[ContentFilter()],  # Runs before the agent processes input
)

Example Guardrails

A guardrail that rejects overly long inputs, delegating the async path to the sync check:
from agno.agent import Agent
from agno.guardrails import BaseGuardrail
from agno.models.openai import OpenAIChat
from agno.run.agent import RunInput

class LengthGuardrail(BaseGuardrail):
    def __init__(self, max_length: int = 1000):
        self.max_length = max_length

    def check(self, run_input: RunInput) -> None:
        if len(run_input.message) > self.max_length:
            raise ValueError(
                f"Input too long: {len(run_input.message)} > {self.max_length}"
            )

    async def async_check(self, run_input: RunInput) -> None:
        # No I/O involved, so reuse the synchronous check
        self.check(run_input)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    pre_hooks=[LengthGuardrail(max_length=500)],
)

RunInput Structure

The RunInput object passed to guardrails contains:
  • message: The input message
  • user_id: User ID (if provided)
  • session_id: Session ID (if provided)
  • run_response: The response (for post_hooks)
  • Other run parameters
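Fields like user_id make per-user policies possible. Below is a framework-free sketch of a sliding-window rate limiter keyed on user_id; `FakeRunInput` and `RateLimitGuardrail` are illustrative stand-ins, not agno classes.

```python
import time
from collections import defaultdict, deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeRunInput:  # stand-in for agno.run.agent.RunInput
    message: str
    user_id: Optional[str] = None


class RateLimitGuardrail:
    """Blocks a user after max_requests checks within window_seconds."""

    def __init__(self, max_requests: int = 5, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits = defaultdict(deque)  # user key -> timestamps of recent requests

    def check(self, run_input: FakeRunInput) -> None:
        key = run_input.user_id or "anonymous"
        now = time.monotonic()
        hits = self._hits[key]
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()  # drop requests that fell out of the window
        if len(hits) >= self.max_requests:
            raise ValueError(f"Rate limit exceeded for user {key!r}")
        hits.append(now)
```

Because the limiter keeps state per user key, one user hitting the limit does not block others.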

Best Practices

  1. Fast checks: Keep guardrail checks fast to avoid latency
  2. Clear errors: Provide clear error messages when blocking
  3. Logging: Log blocked requests for monitoring
  4. Async: Implement async_check for I/O operations
  5. Order matters: Cheaper checks first (length before API calls)
  6. Testing: Thoroughly test guardrails with edge cases
  7. Monitoring: Track how often guardrails trigger
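The ordering point can be demonstrated with a framework-free sketch of a hook runner: guardrails run in list order, and the first exception aborts the run, so a failing cheap check stops the expensive one from ever executing. `run_guardrails` and the stub classes below are illustrative, not part of agno.

```python
from dataclasses import dataclass


@dataclass
class FakeRunInput:  # stand-in for agno's RunInput
    message: str


class LengthGuardrail:
    def __init__(self, max_length: int = 100):
        self.max_length = max_length

    def check(self, run_input: FakeRunInput) -> None:
        if len(run_input.message) > self.max_length:
            raise ValueError("Input too long")


class ExpensiveModerationGuardrail:
    """Pretend moderation-API call; counts how often it actually runs."""

    def __init__(self):
        self.calls = 0

    def check(self, run_input: FakeRunInput) -> None:
        self.calls += 1  # a real implementation would call a paid API here


def run_guardrails(guardrails, run_input):
    # Hooks run in order; the first exception aborts the run
    for guardrail in guardrails:
        guardrail.check(run_input)


expensive = ExpensiveModerationGuardrail()
hooks = [LengthGuardrail(max_length=10), expensive]  # cheap check first
try:
    run_guardrails(hooks, FakeRunInput(message="x" * 50))
except ValueError:
    pass
print(expensive.calls)  # 0: the length check blocked before the API call
```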

Use Cases

  • Content moderation
  • PII detection and blocking
  • Rate limiting
  • Input validation
  • Output sanitization
  • Compliance enforcement
  • Cost control (token limits)
  • Policy enforcement
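As one example from the list, PII detection can be sketched with regular expressions. This is a framework-free illustration (`FakeRunInput`, `PIIGuardrail`, and the two patterns are stand-ins); production PII detection needs far broader coverage than two regexes.

```python
import re
from dataclasses import dataclass


@dataclass
class FakeRunInput:  # stand-in for agno's RunInput
    message: str


# Illustrative patterns only; real PII detection requires much more coverage
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


class PIIGuardrail:
    def check(self, run_input: FakeRunInput) -> None:
        found = [
            kind
            for kind, pattern in PII_PATTERNS.items()
            if pattern.search(run_input.message)
        ]
        if found:
            raise ValueError(f"Blocked: input contains PII ({', '.join(found)})")


guard = PIIGuardrail()
guard.check(FakeRunInput(message="what is the weather"))  # passes
```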
