Workspaces for agents

Isolation, persistence, and governance — one command, no config.

Why Superserve

Agents execute code, make HTTP requests, and manage credentials. In production, every session needs its own isolated environment with persistent state and governance. Building that yourself means stitching together containers, proxies, secret managers, and logging. Superserve gives every agent a governed workspace out of the box.

Key Features

Isolated by Default

Every session runs in its own Firecracker microVM. Nothing leaks between sessions or touches your infrastructure.

Nothing Disappears

The /workspace filesystem persists across turns, restarts, and days. Resume where you left off.

Credentials Stay Hidden

A credential proxy injects API keys at the network level. The agent never sees them — they never appear in LLM context, logs, or tool outputs.
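The shape of that idea can be sketched in a few lines. This is illustrative only — the header name, host-matching rule, and function names below are assumptions, not Superserve's actual proxy implementation; the point is that the agent side constructs requests with no credential at all, and the key is attached outside the sandbox:

```python
# Sketch of network-level credential injection (illustrative, not
# Superserve's implementation: header name and host matching are assumed).

def build_agent_request(url: str) -> dict:
    # Agent side: note there is no API key anywhere in this code,
    # so none can leak into LLM context, logs, or tool outputs.
    return {"method": "GET", "url": url, "headers": {"Accept": "application/json"}}

def proxy_inject(request: dict, secrets: dict) -> dict:
    # Proxy side (outside the sandbox): attach the real key by host
    # before forwarding the request upstream.
    host = request["url"].split("/")[2]
    if host in secrets:
        headers = {**request["headers"], "Authorization": f"Bearer {secrets[host]}"}
        request = {**request, "headers": headers}
    return request
```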

Any Framework

Claude Agent SDK, OpenAI Agents SDK, LangChain, Mastra, Pydantic AI, or plain stdin/stdout.
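The plain stdin/stdout option can be as small as a loop that reads one message per line and prints one reply per line. The exact wire format is an assumption here — a minimal sketch of what such an `agent.py` might look like:

```python
import sys

def handle(message: str) -> str:
    # Replace with your own logic or a call into any framework.
    return f"You said: {message}"

def main() -> None:
    # Assumed protocol: one user message per stdin line, one reply per
    # stdout line; flush so replies reach the client immediately.
    for line in sys.stdin:
        message = line.strip()
        if not message:
            continue
        print(handle(message), flush=True)

if __name__ == "__main__":
    main()
```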

One Command

superserve deploy agent.py. No Dockerfile, no server code, no config files.

Real-time Streaming

Stream tokens and tool calls as they happen with Server-Sent Events.

Sub-second Cold Starts

Pre-provisioned containers mean your agent starts almost instantly.

Encrypted Secrets

Set environment variables with superserve secrets set — they’re encrypted at rest and injected securely.

Quick Example

Install the CLI:
curl -fsSL https://superserve.ai/install | sh
Deploy your agent:
superserve login
superserve deploy agent.py
Set secrets and run:
superserve secrets set my-agent ANTHROPIC_API_KEY=sk-ant-...
superserve run my-agent
You > What is the capital of France?

Agent > The capital of France is Paris.

Completed in 1.2s

You > And what's its population?

Agent > Paris has approximately 2.1 million people in the city proper.

Completed in 0.8s

How It Works

1. Deploy your agent

Run superserve deploy agent.py to package and upload your agent code. Superserve analyzes dependencies and builds a container image.
2. Start a session

When you run superserve run my-agent, Superserve spins up a new Firecracker microVM with a persistent /workspace filesystem.
3. Send messages

Your messages are sent to the agent running in the isolated environment. The agent can execute code, make HTTP requests, and use tools.
4. Stream responses

Agent responses stream back in real-time via Server-Sent Events. You see tokens and tool calls as they happen.
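Server-Sent Events are plain text: each event is one or more `data:` lines followed by a blank line. A minimal parser shows the mechanics (the payloads are illustrative — Superserve's actual event schema is not shown here):

```python
from typing import Iterable, Iterator

def parse_sse(lines: Iterable[str]) -> Iterator[str]:
    """Yield the data payload of each SSE event from an iterable of text lines."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            # A blank line terminates the event; multi-line data fields
            # are joined with newlines per the SSE format.
            yield "\n".join(buffer)
            buffer = []

# Hypothetical token events as they might arrive over the stream:
stream = ['data: {"token": "Par"}', "", 'data: {"token": "is"}', ""]
```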
5. Resume anytime

Session state persists across turns. Close the CLI and reconnect later — your workspace and conversation history are still there.

Use Cases

Deploy agents to production without managing infrastructure. Superserve handles isolation, scaling, and monitoring.
Each user gets their own isolated session with persistent state. Perfect for SaaS products with agent-powered features.
Run untrusted code safely in Firecracker microVMs. The agent can’t access your infrastructure or other sessions.
Agents can work on tasks across multiple turns and days. The persistent workspace means nothing gets lost.
Use any agent framework or write your own. Superserve works with Claude Agent SDK, OpenAI Agents SDK, LangChain, Mastra, Pydantic AI, and custom implementations.

Next Steps

Quickstart

Get up and running in 5 minutes

Core Concepts

Learn about isolation, persistence, and credentials

CLI Reference

Explore all CLI commands

TypeScript SDK

Integrate agents into your application
