Veto supports multiple validation backends, each with different tradeoffs for latency, complexity, and capabilities.

Validation mode overview

Local (default): Deterministic YAML rules evaluated locally with near-zero latency

API: External HTTP API for custom validation logic

Kernel: Local LLM via Ollama for semantic validation

Custom: OpenAI, Anthropic, Gemini, or OpenRouter for LLM validation

Cloud: Veto Cloud with team sync and approval workflows

Local mode

The default mode evaluates YAML rules locally using deterministic conditions. No API calls, no network latency.

Configuration

veto/veto.config.yaml
version: "1.0"
mode: "strict"
validation:
  mode: "local"

rules:
  directory: "./rules"

How it works

Local mode evaluates conditions using:
  1. Field-based conditions — Direct comparison of arguments
  2. AST expressions — Compiled policy expressions
  3. Sequential rules — Call history validation
veto/rules/transfers.yaml
rules:
  - id: block-large-transfers
    action: block
    tools: [transfer_funds]
    conditions:
      - field: arguments.amount
        operator: greater_than
        value: 10000
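Conceptually, a field-based condition like the one above resolves a dot-path (such as arguments.amount) into the tool call and applies the operator to the resolved value. A minimal sketch in TypeScript (the Condition type and both helpers are hypothetical, not part of Veto's public API):

```typescript
// Hypothetical types mirroring the YAML rule format above.
type Condition = {
  field: string; // dot-path into the tool call, e.g. "arguments.amount"
  operator: "equals" | "greater_than" | "less_than";
  value: unknown;
};

// Resolve a dot-path like "arguments.amount" against a tool call object.
function resolveField(obj: Record<string, unknown>, path: string): unknown {
  return path
    .split(".")
    .reduce<unknown>(
      (cur, key) => (cur as Record<string, unknown> | undefined)?.[key],
      obj,
    );
}

// Evaluate one condition against a tool call.
function evaluateCondition(
  call: Record<string, unknown>,
  cond: Condition,
): boolean {
  const actual = resolveField(call, cond.field);
  switch (cond.operator) {
    case "equals":
      return actual === cond.value;
    case "greater_than":
      return typeof actual === "number" && actual > (cond.value as number);
    case "less_than":
      return typeof actual === "number" && actual < (cond.value as number);
  }
}

// A $15,000 transfer trips the block-large-transfers condition.
const call = { tool_name: "transfer_funds", arguments: { amount: 15000 } };
const blocked = evaluateCondition(call, {
  field: "arguments.amount",
  operator: "greater_than",
  value: 10000,
});
```

Because the evaluation is a pure function of the call and the rule, decisions are fully deterministic and repeatable.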

Performance

  • ~0.1ms overhead per tool call
  • No network requests
  • Fully offline

Use cases

  • High-frequency tool calls
  • Deterministic validation rules
  • Offline or air-gapped environments
  • Production systems requiring predictable latency

API mode

Send validation requests to an external HTTP endpoint. Useful for custom validation logic or integration with existing policy engines.

Configuration

veto/veto.config.yaml
version: "1.0"
mode: "strict"
validation:
  mode: "api"

api:
  baseUrl: "https://policy-engine.example.com"
  endpoint: "/validate"
  timeout: 5000
  retries: 2
  retryDelay: 1000
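The timeout and retry settings above imply behavior along these lines: up to retries additional attempts after the first, with retryDelay milliseconds between tries. A sketch of that pattern (the withRetries helper is illustrative, not Veto's actual implementation):

```typescript
// Retry an async operation: 1 initial attempt plus up to `retries` more,
// waiting `retryDelay` ms between attempts. Re-throws the last error.
async function withRetries<T>(
  fn: () => Promise<T>,
  retries: number,
  retryDelay: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, retryDelay));
      }
    }
  }
  throw lastError;
}
```

With retries: 2, a failing endpoint is attempted three times in total before the error propagates.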

Request format

Veto sends a POST request with this payload:
{
  "context": {
    "call_id": "call_abc123",
    "tool_name": "transfer_funds",
    "arguments": {
      "amount": 15000,
      "recipient": "ACME Corp"
    },
    "timestamp": "2024-03-04T10:30:00Z",
    "session_id": "session_xyz",
    "agent_id": "agent_001"
  },
  "rules": [
    {
      "id": "block-large-transfers",
      "action": "block",
      "conditions": [...]
    }
  ]
}

Response format

Your API should return:
{
  "decision": "deny",
  "reason": "Transfer amount exceeds limit",
  "rule_id": "block-large-transfers",
  "severity": "high"
}
Decision values:
  • allow — Allow the tool call
  • deny — Block the tool call
  • require_approval — Route to human approval
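As an illustration, a minimal handler for the endpoint could look like the following. The payload shapes are taken from the request and response formats documented above; the handleValidate function and its hard-coded rule are hypothetical, and a real endpoint would evaluate the rules included in the payload:

```typescript
// Shapes mirroring the documented request/response formats.
type ValidationRequest = {
  context: {
    call_id: string;
    tool_name: string;
    arguments: Record<string, unknown>;
  };
  rules: { id: string; action: string }[];
};

type ValidationResponse = {
  decision: "allow" | "deny" | "require_approval";
  reason?: string;
  rule_id?: string;
  severity?: string;
};

// Hypothetical handler body for POST /validate: denies large transfers,
// allows everything else.
function handleValidate(req: ValidationRequest): ValidationResponse {
  const amount = req.context.arguments["amount"];
  if (
    req.context.tool_name === "transfer_funds" &&
    typeof amount === "number" &&
    amount > 10000
  ) {
    return {
      decision: "deny",
      reason: "Transfer amount exceeds limit",
      rule_id: "block-large-transfers",
      severity: "high",
    };
  }
  return { decision: "allow" };
}
```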

Use cases

  • Integration with existing policy engines
  • Custom validation logic beyond YAML rules
  • Centralized policy management
  • Database lookups during validation

Kernel mode

Use a local LLM via Ollama for semantic validation. Rules are evaluated by the model instead of deterministic conditions.

Configuration

veto/veto.config.yaml
version: "1.0"
mode: "strict"
validation:
  mode: "kernel"

kernel:
  baseUrl: "http://localhost:11434"
  model: "llama3.2:3b"
  temperature: 0.0
  maxTokens: 500
  timeout: 10000

Setup

  1. Install Ollama: ollama.com
  2. Pull a model:
    ollama pull llama3.2:3b
    
  3. Start Ollama (runs on port 11434 by default)

How it works

Veto constructs a prompt for the LLM:
You are a policy validation system. Validate this tool call:

Tool: transfer_funds
Arguments: {"amount": 15000, "recipient": "ACME Corp"}

Rules:
- block-large-transfers: Transfers over $10,000 require manual approval

Respond with: {"decision": "allow" | "deny", "reason": "..."}
The model responds with a validation decision.
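A sketch of the prompt construction and response parsing around that exchange (the buildKernelPrompt and parseDecision helpers are hypothetical; failing closed to "deny" on malformed model output is one possible design choice, not necessarily Veto's):

```typescript
type ToolCall = { tool_name: string; arguments: Record<string, unknown> };
type RuleSummary = { id: string; description: string };

// Build the validation prompt shown above.
function buildKernelPrompt(call: ToolCall, rules: RuleSummary[]): string {
  const ruleLines = rules.map((r) => `- ${r.id}: ${r.description}`).join("\n");
  return [
    "You are a policy validation system. Validate this tool call:",
    "",
    `Tool: ${call.tool_name}`,
    `Arguments: ${JSON.stringify(call.arguments)}`,
    "",
    "Rules:",
    ruleLines,
    "",
    'Respond with: {"decision": "allow" | "deny", "reason": "..."}',
  ].join("\n");
}

// Parse the model's JSON reply, failing closed to "deny" on bad output.
function parseDecision(raw: string): { decision: "allow" | "deny"; reason: string } {
  try {
    const parsed = JSON.parse(raw);
    if (parsed.decision === "allow" || parsed.decision === "deny") {
      return { decision: parsed.decision, reason: String(parsed.reason ?? "") };
    }
  } catch {
    // fall through to the fail-closed default
  }
  return { decision: "deny", reason: "Unparseable model response" };
}
```

Because LLM output is free-form text, the parsing step is where non-determinism leaks in; a fail-closed default keeps malformed responses from silently allowing calls.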

Use cases

  • Semantic validation (“is this a reasonable request?”)
  • Natural language policy rules
  • Privacy-sensitive environments (local LLM)
  • Prototype validation logic quickly
Kernel mode has higher latency (~500ms-2s) and non-deterministic behavior. Use for prototyping or low-frequency calls.

Custom mode

Use cloud LLM providers (OpenAI, Anthropic, Gemini, OpenRouter) for validation.

Configuration

veto/veto.config.yaml
version: "1.0"
mode: "strict"
validation:
  mode: "custom"

custom:
  provider: "openai"
  model: "gpt-4o-mini"
  apiKey: "sk-..."
  temperature: 0.0
  maxTokens: 500
  timeout: 10000

Environment variables

Store API keys in environment variables:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AIza..."
export OPENROUTER_API_KEY="sk-or-..."
Omit apiKey from the config and Veto will read the key from the environment instead.
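One way that provider-to-variable mapping could be resolved is sketched below (the resolveApiKey helper is illustrative; the environment is passed in explicitly so the logic stays testable):

```typescript
// Map each provider to its conventional API key environment variable.
const PROVIDER_ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GOOGLE_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
};

// An explicit apiKey in the config wins; otherwise fall back to the
// provider's environment variable, and fail loudly if neither is set.
function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined>,
  configKey?: string,
): string {
  if (configKey) return configKey;
  const envVar = PROVIDER_ENV_VARS[provider];
  const value = envVar ? env[envVar] : undefined;
  if (!value) {
    throw new Error(
      `No API key for provider "${provider}"; set ${envVar ?? "a key"} in the environment`,
    );
  }
  return value;
}
```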

Use cases

  • Semantic validation with high accuracy
  • Natural language policy rules
  • Quick prototyping without local LLM setup
Custom mode sends tool call data to third-party APIs. Ensure compliance with your data handling policies.

Cloud mode

Use Veto Cloud for team policy sync, centralized management, and approval workflows.

Configuration

veto/veto.config.yaml
version: "1.0"
mode: "strict"
validation:
  mode: "cloud"

cloud:
  apiKey: "veto_..."
  baseUrl: "https://api.veto.so"
  timeout: 5000

Features

Policy sync: Centrally manage rules across all team repos

Approval workflows: Human-in-the-loop with approval dashboard

Dashboard: View decisions, blocked calls, and pending approvals

Audit export: Compliance reporting and audit trails

Setup

  1. Sign up at veto.so
  2. Create an API key
  3. Add to your config or environment:
    export VETO_API_KEY="veto_..."
    

Use cases

  • Team collaboration on policies
  • Centralized policy management
  • Human approval workflows at scale
  • Compliance reporting and audit trails

Operating modes

Independent of validation mode, you can set the operating mode:
mode: "strict"  # or "log" or "shadow"

Strict mode (default)

Block tool calls when validation fails:
mode: "strict"
When a rule matches with action: block, Veto throws ToolCallDeniedError.

Log mode

Log validation failures but allow all tool calls:
mode: "log"
Useful for:
  • Testing rules without blocking production
  • Gradual rollout of new policies
  • Observability without enforcement

Shadow mode

Compute real decisions but never block execution:
mode: "shadow"
All calls are allowed, but decisions are logged and exported. Use for A/B testing policies.
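The three operating modes differ only in whether a deny decision actually blocks execution. A sketch of that gating logic (the enforce helper is hypothetical; ToolCallDeniedError is the error strict mode is documented to throw):

```typescript
type OperatingMode = "strict" | "log" | "shadow";
type Decision = { decision: "allow" | "deny"; reason?: string };

class ToolCallDeniedError extends Error {}

// Returns true when the call may proceed. Deny decisions are always
// logged, but only strict mode escalates them to a thrown error.
function enforce(
  mode: OperatingMode,
  result: Decision,
  log: (msg: string) => void,
): boolean {
  if (result.decision === "deny") {
    log(`denied: ${result.reason ?? "no reason"}`);
    if (mode === "strict") {
      throw new ToolCallDeniedError(result.reason);
    }
  }
  return true;
}
```

In log and shadow mode the deny path still records the decision, which is what makes side-by-side policy comparison possible.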

Choosing a validation mode

1. Start with local mode: use deterministic YAML rules for predictable, low-latency validation.
2. Add semantic validation if needed: use kernel or custom mode for natural language policies.
3. Scale with cloud mode: move to Veto Cloud for team collaboration and approval workflows.

Comparison table

| Mode   | Latency     | Cost         | Deterministic  | Offline | Semantic       |
| ------ | ----------- | ------------ | -------------- | ------- | -------------- |
| Local  | ~0.1ms      | Free         | Yes            | Yes     | No             |
| API    | ~50-200ms   | Variable     | Depends on API | No      | Depends on API |
| Kernel | ~500ms-2s   | Free         | No             | Yes     | Yes            |
| Custom | ~200-500ms  | LLM pricing  | No             | No      | Yes            |
| Cloud  | ~100-300ms  | Veto pricing | Yes            | No      | Optional       |

Next steps

How it works: Understand the validation flow

Rules: Learn the YAML rule format

Human-in-the-loop: Set up approval workflows

Writing rules: Best practices for rule design
