Validation mode overview
Local (default)
Deterministic YAML rules evaluated locally with near-zero latency
API
External HTTP API for custom validation logic
Kernel
Local LLM via Ollama for semantic validation
Custom
OpenAI, Anthropic, Gemini, or OpenRouter for LLM validation
Cloud
Veto Cloud with team sync and approval workflows
Local mode
The default mode evaluates YAML rules locally using deterministic conditions. No API calls, no network latency.
Configuration
veto/veto.config.yaml
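A minimal sketch of what the local-mode config might contain; the key names (`mode`, `rulesDir`) are assumptions for illustration, not the confirmed schema:

```yaml
# Illustrative sketch only; key names are assumptions, not confirmed schema
mode: local          # default: evaluate YAML rules on-device, no network calls
rulesDir: veto/rules # hypothetical key pointing at your rule files
```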
How it works
Local mode evaluates conditions using:
- Field-based conditions — Direct comparison of arguments
- AST expressions — Compiled policy expressions
- Sequential rules — Call history validation
veto/rules/transfers.yaml
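As a concrete illustration of a field-based rule, a transfers rule file might look like the sketch below. The field names (`tool`, `when`, `gt`) are assumptions about the rule schema; the actions mirror the decisions this page describes (`block`, `require_approval`):

```yaml
# Illustrative sketch only; field names are assumptions
rules:
  - id: block-large-transfers
    tool: transfer_funds            # hypothetical tool name
    when:
      amount: { gt: 10000 }         # field-based condition on an argument
    action: block
  - id: approve-unknown-recipients
    tool: transfer_funds
    when:
      recipient_verified: false
    action: require_approval        # route to human approval
```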
Performance
- ~0.1ms overhead per tool call
- No network requests
- Fully offline
Use cases
High-frequency tool calls
Deterministic validation rules
Offline or air-gapped environments
Production systems requiring predictable latency
API mode
Send validation requests to an external HTTP endpoint. Useful for custom validation logic or integration with existing policy engines.
Configuration
veto/veto.config.yaml
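An illustrative sketch of an API-mode config; the key names (`api.url`, `timeoutMs`) are assumptions, not the confirmed schema:

```yaml
# Illustrative sketch only; key names are assumptions
mode: api
api:
  url: https://policy.example.com/validate  # your validation endpoint
  timeoutMs: 2000                           # hypothetical timeout key
```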
Request format
Veto sends a POST request with this payload:
Response format
Your API should return one of three decisions:
- allow — Allow the tool call
- deny — Block the tool call
- require_approval — Route to human approval
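Sketching both sides of the exchange (all field names here are assumptions, not the documented contract), Veto might POST a payload like:

```json
{
  "tool": "transfer_funds",
  "arguments": { "amount": 25000, "recipient": "acct-42" }
}
```

and your endpoint would respond with one of the three decisions, for example:

```json
{ "decision": "deny", "reason": "amount exceeds transfer limit" }
```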
Use cases
Integration with existing policy engines
Custom validation logic beyond YAML rules
Centralized policy management
Database lookups during validation
Kernel mode
Use a local LLM via Ollama for semantic validation. Rules are evaluated by the model instead of deterministic conditions.
Configuration
veto/veto.config.yaml
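A sketch of what a kernel-mode config might look like; the key names are assumptions, though the port matches Ollama's documented default:

```yaml
# Illustrative sketch only; key names are assumptions
mode: kernel
kernel:
  host: http://localhost:11434  # Ollama's default port
  model: llama3                 # whichever model you pulled with `ollama pull`
```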
Setup
- Install Ollama: ollama.com
- Pull a model with `ollama pull` (any chat-capable model works)
- Start Ollama (runs on port 11434 by default)
How it works
Veto constructs a prompt for the LLM.
Use cases
Semantic validation (“is this a reasonable request?”)
Natural language policy rules
Privacy-sensitive environments (local LLM)
Prototype validation logic quickly
Custom mode
Use cloud LLM providers (OpenAI, Anthropic, Gemini, OpenRouter) for validation.
Configuration
- OpenAI
- Anthropic
- Google Gemini
- OpenRouter
veto/veto.config.yaml
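An illustrative sketch of a custom-mode config; the key names and model id are assumptions, not the confirmed schema:

```yaml
# Illustrative sketch only; key names are assumptions
mode: custom
custom:
  provider: openai   # or anthropic | gemini | openrouter
  model: gpt-4o      # hypothetical model id
  # apiKey omitted here; read from the environment instead
```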
Environment variables
Store API keys in environment variables: omit apiKey from your config and Veto will read the key from the environment.
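For example, using the providers' conventional variable names (whether Veto reads these exact names is an assumption; check the provider setup docs):

```shell
# Conventional provider key variables; the exact names Veto expects
# are an assumption, so verify against the config reference.
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
```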
Use cases
Semantic validation with high accuracy
Natural language policy rules
Quick prototyping without local LLM setup
Cloud mode
Use Veto Cloud for team policy sync, centralized management, and approval workflows.
Configuration
veto/veto.config.yaml
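A sketch of what a cloud-mode config might look like; both key names and the env-var reference syntax are assumptions:

```yaml
# Illustrative sketch only; key names are assumptions
mode: cloud
cloud:
  apiKey: ${VETO_API_KEY}   # hypothetical env-var reference
```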
Features
Policy sync
Centrally manage rules across all team repos
Approval workflows
Human-in-the-loop with approval dashboard
Dashboard
View decisions, blocked calls, and pending approvals
Audit export
Compliance reporting and audit trails
Setup
- Sign up at veto.so
- Create an API key
- Add to your config or environment:
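For instance, via the environment (the variable name VETO_API_KEY is a guess; use whatever name the dashboard or config schema specifies):

```shell
# Hypothetical variable name; confirm the exact name in the Veto docs
export VETO_API_KEY="your-veto-cloud-key"
```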
Use cases
Team collaboration on policies
Centralized policy management
Human approval workflows at scale
Compliance reporting and audit trails
Operating modes
Independent of the validation mode, you can set the operating mode:
Strict mode (default)
Block tool calls when validation fails: when a rule matches with `action: block`, Veto throws `ToolCallDeniedError`.
Log mode
Log validation failures but allow all tool calls. Useful for:
- Testing rules without blocking production
- Gradual rollout of new policies
- Observability without enforcement
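Switching operating modes might look like the following sketch (the `operatingMode` key name is an assumption; only the three mode names come from this page):

```yaml
# Illustrative sketch only; the key name is an assumption
operatingMode: log   # strict (default) | log | shadow
```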
Shadow mode
Compute real decisions but never block execution.
Choosing a validation mode
Comparison table
| Mode | Latency | Cost | Deterministic | Offline | Semantic |
|---|---|---|---|---|---|
| Local | ~0.1ms | Free | ✅ | ✅ | ❌ |
| API | ~50-200ms | Variable | Depends on API | ❌ | Depends on API |
| Kernel | ~500ms-2s | Free | ❌ | ✅ | ✅ |
| Custom | ~200-500ms | LLM pricing | ❌ | ❌ | ✅ |
| Cloud | ~100-300ms | Veto pricing | ✅ | ❌ | Optional |
Next steps
How it works
Understand the validation flow
Rules
Learn the YAML rule format
Human-in-the-loop
Set up approval workflows
Writing rules
Best practices for rule design

