Mastra is a TypeScript framework for building AI agents. Superserve provides a production-ready deployment platform with isolation, persistence, and governance.

Quick Start

1. Install the CLI

curl -fsSL https://superserve.ai/install | sh
2. Create your agent

Create a file called agent.ts with your Mastra agent:
agent.ts
/**
 * Minimal chatbot built with Mastra deployed on Superserve.
 */

import { Agent } from "@mastra/core/agent"
import { createInterface } from "readline"

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-4o",
})

const rl = createInterface({ input: process.stdin })
rl.on("line", async (line) => {
  const result = await agent.generate(line)
  console.log(result.text)
})
Create a package.json with dependencies:
package.json
{
  "dependencies": {
    "@mastra/core": "latest"
  }
}
3. Deploy your agent

Log in and deploy your agent:
superserve login
superserve deploy agent.ts --name chatbot
4. Set your API key

Configure your OpenAI API key as a secret:
superserve secrets set chatbot OPENAI_API_KEY=sk-...
Secrets are encrypted at rest and injected at the network level. The agent never sees them in logs or LLM context.
5. Run your agent

Start an interactive session:
superserve run chatbot
You > What is the capital of France?

Agent > The capital of France is Paris.

Completed in 1.2s

Configuration Options

You can configure an Agent at creation time, and pass sampling options such as temperature and max tokens per call to generate:
agent.ts
import { Agent } from "@mastra/core/agent"

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-4o", // Model to use
})

const result = await agent.generate("Hello!", {
  temperature: 0.7, // Sampling temperature
  maxTokens: 4096,  // Maximum tokens per response
})

Model Selection

Mastra supports multiple LLM providers:
  • openai/gpt-4o - GPT-4 Omni (recommended)
  • openai/gpt-4-turbo - GPT-4 Turbo
  • anthropic/claude-3-5-sonnet-20241022 - Claude 3.5 Sonnet
  • google/gemini-pro - Google Gemini Pro
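Because the model is just a string identifier, it is easy to make it configurable instead of hard-coding it. A minimal sketch, assuming a hypothetical MODEL environment variable (not something Superserve sets for you):

```typescript
// Pick the model from an environment variable, falling back to a default.
// The allow-list guards against typos in the configured value.
const SUPPORTED_MODELS = [
  "openai/gpt-4o",
  "openai/gpt-4-turbo",
  "anthropic/claude-3-5-sonnet-20241022",
  "google/gemini-pro",
]

function resolveModel(envValue: string | undefined): string {
  if (envValue && SUPPORTED_MODELS.includes(envValue)) {
    return envValue
  }
  return "openai/gpt-4o" // default when unset or unrecognized
}

// Usage: new Agent({ ..., model: resolveModel(process.env.MODEL) })
```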

Adding Tools

Mastra supports tools for extending agent capabilities:
agent.ts
import { Agent } from "@mastra/core/agent"
import { createTool } from "@mastra/core/tools"
import { createInterface } from "readline"
import { z } from "zod"

const getWeather = createTool({
  id: "getWeather",
  description: "Get the current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("The city name"),
  }),
  execute: async ({ context }) => {
    // Your weather API logic here
    return { weather: `The weather in ${context.location} is sunny.` }
  },
})

const agent = new Agent({
  name: "weather-assistant",
  instructions: "You are a helpful weather assistant.",
  model: "openai/gpt-4o",
  tools: { getWeather },
})

const rl = createInterface({ input: process.stdin })
rl.on("line", async (line) => {
  const result = await agent.generate(line)
  console.log(result.text)
})
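Since execute is ordinary async TypeScript, you can unit-test the tool's logic without involving the agent or a model at all. A minimal sketch of the canned logic above (the "sunny" response is a placeholder, not a real weather API call):

```typescript
// Stand-alone version of the tool's execute logic, testable in isolation.
// A real implementation would call a weather API here.
async function getWeatherLogic(location: string): Promise<{ weather: string }> {
  return { weather: `The weather in ${location} is sunny.` }
}
```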

Streaming Responses

Stream responses token by token for better UX:
agent.ts
import { Agent } from "@mastra/core/agent"
import { createInterface } from "readline"

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-4o",
})

const rl = createInterface({ input: process.stdin })
rl.on("line", async (line) => {
  const stream = await agent.stream(line)

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk)
  }

  console.log() // New line after streaming completes
})
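The consumption pattern is just async iteration over string chunks, so you can exercise it with any async iterable. A sketch where a generator stands in for the real model stream:

```typescript
// Collect streamed chunks into the full response while rendering
// each chunk incrementally, as in the agent loop above.
async function collect(chunks: AsyncIterable<string>): Promise<string> {
  let full = ""
  for await (const chunk of chunks) {
    process.stdout.write(chunk)
    full += chunk
  }
  return full
}

// A fake stream standing in for stream.textStream:
async function* fakeStream() {
  yield "Hello, "
  yield "world!"
}
```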

Deployment Configuration

Create a superserve.yaml file for advanced deployment options:
superserve.yaml
name: chatbot
command: bun run agent.ts
secrets:
  - OPENAI_API_KEY
ignore:
  - node_modules
  - "*.log"
  - .git
Then deploy with:
superserve deploy

Dependencies

Manage dependencies in your package.json:
package.json
{
  "name": "my-chatbot",
  "version": "0.1.0",
  "dependencies": {
    "@mastra/core": "latest",
    "zod": "^3.22.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "typescript": "^5.0.0"
  }
}
Superserve automatically runs npm install or bun install during deployment.

Session Persistence

The /workspace directory persists across turns and restarts. Here’s an example that saves conversation history:
agent.ts
import { Agent } from "@mastra/core/agent"
import { createInterface } from "readline"
import { readFileSync, writeFileSync, existsSync } from "fs"
import { join } from "path"

const WORKSPACE = "/workspace"
const HISTORY_FILE = join(WORKSPACE, "conversation_history.json")

interface Message {
  role: "user" | "assistant"
  content: string
}

function saveMessage(role: Message["role"], content: string) {
  let history: Message[] = []
  if (existsSync(HISTORY_FILE)) {
    history = JSON.parse(readFileSync(HISTORY_FILE, "utf-8"))
  }
  history.push({ role, content })
  writeFileSync(HISTORY_FILE, JSON.stringify(history, null, 2))
}

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant with memory.",
  model: "openai/gpt-4o",
})

// Load conversation history from the workspace
function loadHistory(): Message[] {
  return existsSync(HISTORY_FILE)
    ? JSON.parse(readFileSync(HISTORY_FILE, "utf-8"))
    : []
}

if (existsSync(HISTORY_FILE)) {
  console.log("Resuming previous conversation...")
}

const rl = createInterface({ input: process.stdin })
rl.on("line", async (line) => {
  // Replay prior turns so the agent actually remembers the conversation
  const messages = [...loadHistory(), { role: "user", content: line }]
  saveMessage("user", line)

  const result = await agent.generate(messages)
  console.log(result.text)

  saveMessage("assistant", result.text)
})
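As the history file grows, you will eventually want to cap how much of it is replayed into the model's context. A minimal pruning helper (the limit of 20 messages is an arbitrary choice, not a Mastra or Superserve constraint):

```typescript
interface Message {
  role: string
  content: string
}

// Keep only the most recent messages so the prompt stays within the
// model's context window.
function pruneHistory(history: Message[], maxMessages = 20): Message[] {
  return history.length <= maxMessages
    ? history
    : history.slice(history.length - maxMessages)
}
```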

Multi-Agent Workflows

Build multi-agent systems with Mastra:
agent.ts
import { Agent } from "@mastra/core/agent"
import { createInterface } from "readline"

const researcher = new Agent({
  name: "researcher",
  instructions: "You research topics and provide detailed information.",
  model: "openai/gpt-4o",
})

const writer = new Agent({
  name: "writer",
  instructions: "You write clear, engaging content based on research.",
  model: "openai/gpt-4o",
})

const rl = createInterface({ input: process.stdin })
rl.on("line", async (line) => {
  // Research phase
  const researchResult = await researcher.generate(line)
  
  // Writing phase
  const writePrompt = `Based on this research: ${researchResult.text}\n\nWrite a summary.`
  const writeResult = await writer.generate(writePrompt)
  
  console.log(writeResult.text)
})
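The research-then-write pattern generalizes to any sequence of agents. A small helper that pipes each result into the next step, assuming only that every step is an async text-to-text function (which agent.generate calls can be wrapped as):

```typescript
type Step = (input: string) => Promise<string>

// Run steps left to right, feeding each result into the next step.
async function pipeline(steps: Step[], input: string): Promise<string> {
  let current = input
  for (const step of steps) {
    current = await step(current)
  }
  return current
}

// Hypothetical wiring with the agents above:
// const summary = await pipeline(
//   [
//     async (q) => (await researcher.generate(q)).text,
//     async (r) => (await writer.generate(`Based on this research: ${r}\n\nWrite a summary.`)).text,
//   ],
//   line,
// )
```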

Using Bun vs Node.js

Mastra works with both Bun and Node.js. Superserve automatically detects which runtime to use:
superserve.yaml
name: chatbot
command: bun run agent.ts
Bun is recommended for faster startup times and better performance.

Troubleshooting

Missing dependency errors
Make sure you have a package.json with @mastra/core listed in dependencies, then redeploy your agent:
superserve deploy agent.ts --name chatbot
Model provider authentication errors
Set your API key as a secret:
superserve secrets set chatbot OPENAI_API_KEY=sk-...
TypeScript errors
Make sure you have TypeScript as a dev dependency:
{
  "devDependencies": {
    "typescript": "^5.0.0",
    "@types/node": "^20.0.0"
  }
}
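If type errors persist, check your tsconfig.json as well. A minimal configuration that works for this kind of agent code (these options are a reasonable starting point, not something Superserve requires):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "strict": true,
    "types": ["node"]
  }
}
```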

Next Steps

Core Concepts

Learn about isolation, persistence, and credentials

CLI Reference

Explore deployment options and CLI commands

Secrets Management

Manage API keys and environment variables

Session Management

Work with persistent sessions
