Monty

A minimal, secure Python interpreter written in Rust for use by AI agents

Why Monty?

Monty lets you safely run LLM-generated Python code embedded in your agent, with startup times measured in single-digit microseconds instead of hundreds of milliseconds. No containers, no complex sandboxing infrastructure — just fast, secure execution.

Blazing Fast

Microsecond-scale startup time and performance comparable to CPython

Completely Isolated

Zero access to filesystem, environment, or network without explicit permission

Snapshot & Resume

Serialize interpreter state at any point and resume execution later

Multi-Language

Use from Python, JavaScript/TypeScript, or Rust

Get Started

Installation

Install Monty for Python, JavaScript, or Rust

Quickstart

Run your first sandboxed Python code in minutes

Core Concepts

Understand Monty’s security model and design

Key Features

Run untrusted Python code with strict security guarantees. Filesystem, network, and environment access are all mediated through external function calls you control.
Call host functions from sandboxed code. Only functions you explicitly provide are accessible, giving you complete control over I/O operations.
Built-in type checking with ty (from Astral, the makers of Ruff), included in a single binary. Catch type errors before execution.
Track and limit memory usage, allocations, stack depth, and execution time. Cancel execution if it exceeds preset limits.
Call async or sync host functions from async or sync code in the sandbox. Full asyncio support included.

Use Cases

Monty is designed for one specific use case: running code written by AI agents.

Agent Code Mode

Let LLMs write Python code instead of using traditional tool calling

Pydantic AI Integration

Use Monty with Pydantic AI for code-mode execution

Web Scraping

Safely run LLM-generated web scraping scripts

Data Analysis

Execute data analysis code written by AI agents

Example

import asyncio
from typing import Any

import pydantic_monty

code = """
async def agent(prompt: str, messages: Messages):
    while True:
        output = await call_llm(prompt, messages)
        if isinstance(output, str):
            return output
        messages.extend(output)

await agent(prompt, [])
"""

m = pydantic_monty.Monty(code, inputs=['prompt'], type_check=True)

async def call_llm(prompt: str, messages: list[dict[str, Any]]) -> str | list[dict[str, Any]]:
    if len(messages) < 2:
        return [{'role': 'system', 'content': 'example response'}]
    else:
        return f'example output, message count {len(messages)}'

async def main():
    output = await pydantic_monty.run_monty_async(
        m,
        inputs={'prompt': 'testing'},
        external_functions={'call_llm': call_llm},
    )
    print(output)  # "example output, message count 2"

asyncio.run(main())

View Full API Reference

Explore the complete API for Python, JavaScript, and Rust