The credential exposure problem
Agents make API calls to services like Anthropic, OpenAI, Stripe, or your own backend. Without proper credential management:
- API keys appear in LLM context - The model sees the key in environment variables or code, increasing the risk of accidental leakage
- Keys leak into logs - Tool outputs and error messages expose credentials in plaintext
- Prompt injection attacks - A malicious user tricks the agent into printing `os.environ["ANTHROPIC_API_KEY"]`
- No audit trail - You can’t track which agent made which API call or revoke access per-session
Superserve’s credential proxy
Superserve injects API keys at the network level, not as environment variables. When your agent makes an HTTP request, the platform intercepts it, adds the appropriate authentication header, and forwards it to the destination.
The agent never sees the API key. It doesn’t appear in environment variables, tool outputs, or LLM context.
How it works
- You set secrets with `superserve secrets set my-agent ANTHROPIC_API_KEY=sk-ant-...`
- Superserve stores the key encrypted at rest
- When your agent makes a request to `api.anthropic.com`, the proxy intercepts it
- The proxy injects the `x-api-key: sk-ant-...` header
- The request reaches the API with credentials attached
- The agent receives the response - no key in sight
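The flow above can be pictured with a small, self-contained simulation. This is illustrative only - the real interception happens at the network layer, outside your agent's code:

```python
# Illustrative model of the credential proxy flow; not Superserve's actual code.

def agent_request(url: str) -> dict:
    # What the agent sends: an ordinary request, no credentials anywhere.
    return {"url": url, "headers": {}}

def proxy_forward(request: dict, secrets: dict) -> dict:
    # What the platform does: inject the key before forwarding upstream.
    forwarded = {"url": request["url"], "headers": dict(request["headers"])}
    if "api.anthropic.com" in request["url"]:
        forwarded["headers"]["x-api-key"] = secrets["ANTHROPIC_API_KEY"]
    return forwarded

outbound = agent_request("https://api.anthropic.com/v1/messages")
wire = proxy_forward(outbound, {"ANTHROPIC_API_KEY": "sk-ant-example"})

assert "x-api-key" not in outbound["headers"]  # the agent never held the key
assert wire["headers"]["x-api-key"] == "sk-ant-example"  # upstream request has it
```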
Supported APIs
The proxy automatically handles authentication for common services:

| Service | Environment Variable | Header Injected |
|---|---|---|
| Anthropic | ANTHROPIC_API_KEY | x-api-key: <key> |
| OpenAI | OPENAI_API_KEY | Authorization: Bearer <key> |
| Custom | <NAME>_API_KEY | Configured per secret |
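The table's rules amount to a small per-service mapping. A sketch of the idea, not Superserve's actual configuration format:

```python
# Build the auth header a proxy would inject per service (illustrative).
def build_auth_header(service: str, key: str) -> tuple[str, str]:
    if service == "anthropic":
        return ("x-api-key", key)  # Anthropic uses a bare x-api-key header
    if service == "openai":
        return ("Authorization", f"Bearer {key}")  # OpenAI uses a bearer token
    raise ValueError(f"no injection rule for {service!r}")
```

A custom service would add its own rule here, matching the "Configured per secret" row above.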
You can configure custom header injection for any API. Contact support for advanced use cases.
Setting secrets
Use the `superserve secrets` command to manage credentials:
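For example, using the `set` syntax shown in "How it works" (the key value is a placeholder):

```shell
superserve secrets set my-agent ANTHROPIC_API_KEY=sk-ant-...
```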
Environment variables for non-API secrets
Some credentials aren’t API keys sent in HTTP headers - database URLs, signing secrets, or configuration tokens. For these, Superserve injects them as environment variables inside the microVM:
- API keys (e.g., `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) are intercepted at the network level
- Non-API secrets (e.g., `DATABASE_URL`, `JWT_SECRET`) are injected as environment variables
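Inside the microVM, a non-API secret is read like any ordinary environment variable. A minimal sketch, using `DATABASE_URL` from the list above (the fallback value is only there to keep the snippet runnable outside Superserve):

```python
import os

# Non-API secrets arrive as plain environment variables inside the microVM.
database_url = os.environ.get("DATABASE_URL", "postgres://localhost/dev")

# Hand the URL to your database client as usual; no proxy is involved here.
print(database_url)
```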
What the agent sees
Let’s look at exactly what’s visible inside the agent: API keys such as `ANTHROPIC_API_KEY` are absent entirely, while non-API secrets such as `DATABASE_URL` appear as ordinary environment variables.
SDK usage
Most agent SDKs automatically use environment variables for API keys. Since Superserve intercepts network requests, the SDK works without changes.
Security guarantees
API keys never appear in LLM context
The model doesn’t see the key in environment variables, code, or tool outputs. It can’t accidentally leak them in responses.
Keys don't leak into logs
Tool outputs, error messages, and debug logs don’t contain credentials:
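The reason is structural: the key never enters the microVM, so even a worst-case debug dump has nothing to leak. A simulated example (the request dict stands in for whatever your agent might log):

```python
# The request as the agent sees it, before the proxy adds credentials.
request_seen_by_agent = {
    "url": "https://api.anthropic.com/v1/messages",
    "headers": {"content-type": "application/json"},
}

# Even logging the full request on error cannot expose a key:
log_line = f"request failed: {request_seen_by_agent!r}"
assert "sk-ant" not in log_line  # no credential material present to leak
```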
Per-agent credential scoping
Each agent has its own set of secrets. You can’t accidentally use Agent A’s API key in Agent B: sessions for `agent-a` will only see alice’s key injected, and sessions for `agent-b` will only see bob’s key.
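For example, giving each agent its own key (the key values are hypothetical placeholders):

```shell
superserve secrets set agent-a ANTHROPIC_API_KEY=sk-ant-alice-...
superserve secrets set agent-b ANTHROPIC_API_KEY=sk-ant-bob-...
```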
Revocation and rotation
Change or revoke credentials without redeploying your agent. This is faster and safer than redeploying with a new environment variable.
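Rotating a key is just overwriting the secret with a new value (placeholder shown):

```shell
superserve secrets set my-agent ANTHROPIC_API_KEY=sk-ant-...
```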
Audit trail
Every HTTP request the agent makes is logged with:
- Destination URL
- Request method and headers (except credentials)
- Response status and size
- Session and agent ID
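An entry might look like this (the field names and format here are illustrative, not Superserve’s exact schema):

```json
{
  "agent_id": "my-agent",
  "session_id": "sess_123",
  "method": "POST",
  "url": "https://api.anthropic.com/v1/messages",
  "request_headers": {"content-type": "application/json"},
  "status": 200,
  "response_bytes": 2048
}
```

Note that `request_headers` omits credentials, matching the guarantee above.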
Limitations and edge cases
Non-HTTP secrets
For credentials that aren’t HTTP API keys (e.g., database URLs, SSH keys), use environment variables, as described under “Environment variables for non-API secrets” above.
Custom authentication schemes
Some APIs use non-standard authentication (HMAC signatures, custom headers, OAuth flows). The proxy supports custom configuration for these; contact support for advanced use cases.
Framework-specific considerations
Some frameworks cache API clients at import time. Ensure your agent creates its clients after the proxy is initialized rather than at import time.
Isolation
How sessions are isolated at the VM level
Deployment
Setting secrets during and after deployment
CLI Reference
All `superserve secrets` commands
Quickstart
End-to-end example with secrets