While PromptSmith provides native integration with Vercel AI SDK and Mastra, you can use it with any AI framework or custom implementation. At its core, PromptSmith generates structured system prompts that work with any LLM API.

Framework-Agnostic Usage

The fundamental output of PromptSmith is a formatted string: your system prompt. This works with any framework that accepts text prompts:
import { createPromptBuilder } from "promptsmith-ts/builder";

const agent = createPromptBuilder()
  .withIdentity("You are a helpful assistant")
  .withCapabilities(["Answer questions", "Provide information"])
  .withTone("Friendly and professional");

// Get the system prompt as a string
const systemPrompt = agent.build();

// Use with any framework that accepts a system prompt
console.log(systemPrompt);

LangChain Integration

Use PromptSmith to generate system prompts for LangChain:
1. Install dependencies

npm install promptsmith-ts langchain @langchain/openai
2. Create agent with PromptSmith

import { createPromptBuilder } from "promptsmith-ts/builder";
import { ChatOpenAI } from "@langchain/openai";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

// Build system prompt
const promptBuilder = createPromptBuilder()
  .withIdentity("You are a helpful research assistant")
  .withCapabilities([
    "Search academic papers",
    "Summarize research findings",
    "Provide citations",
  ])
  .withConstraint("must", "Always cite sources for factual claims")
  .withTone("Professional and academic");

const systemPrompt = promptBuilder.build();

// Use with LangChain
const model = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0.7,
});

const messages = [
  new SystemMessage(systemPrompt),
  new HumanMessage("Tell me about recent AI safety research"),
];

const response = await model.invoke(messages);
console.log(response.content);

LangChain with Tools

For tool usage with LangChain, define tools separately and reference them in your prompt:
import { createPromptBuilder } from "promptsmith-ts/builder";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// Define LangChain tools
const searchTool = new DynamicStructuredTool({
  name: "search_papers",
  description: "Search academic papers by topic",
  schema: z.object({
    topic: z.string().describe("Research topic"),
    limit: z.number().optional().describe("Max results"),
  }),
  func: async ({ topic, limit = 5 }) => {
    return `Found ${limit} papers on ${topic}`;
  },
});

// Build prompt with tool documentation
const promptBuilder = createPromptBuilder()
  .withIdentity("Research assistant with tool access")
  .withCapabilities(["Search papers using available tools"])
  .withTool({
    name: searchTool.name,
    description: searchTool.description,
    schema: searchTool.schema,
  });

const systemPrompt = promptBuilder.build();

// Create agent with tools
const model = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0,
}).bind({
  tools: [searchTool],
});
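PromptSmith documents the tool in the prompt but does not execute it; after binding, you still have to dispatch any tool calls the model returns (in LangChain, an AI message's `tool_calls` entries carry `name` and `args`). A minimal sketch of such a dispatcher — the executor map and the mock call below are illustrative, not part of either library's API:

```typescript
// Hypothetical dispatcher for tool calls returned by the model
type ToolCall = { name: string; args: Record<string, unknown> };

const executors: Record<string, (args: any) => Promise<string>> = {
  // Mirrors the search_papers tool defined above
  search_papers: async ({ topic, limit = 5 }) => `Found ${limit} papers on ${topic}`,
};

async function dispatch(call: ToolCall): Promise<string> {
  const exec = executors[call.name];
  if (!exec) throw new Error(`Unknown tool: ${call.name}`);
  return exec(call.args);
}

// Example with a mock tool call, shaped like what the model would return
const result = await dispatch({ name: "search_papers", args: { topic: "AI safety" } });
console.log(result); // "Found 5 papers on AI safety"
```

In a real loop you would feed each tool result back to the model as a tool message before asking for the final answer.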

OpenAI SDK Integration

Use directly with OpenAI’s official SDK:
1. Install dependencies

npm install promptsmith-ts openai
2. Use with OpenAI SDK

import { createPromptBuilder } from "promptsmith-ts/builder";
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = createPromptBuilder()
  .withIdentity("You are a coding assistant")
  .withCapabilities(["Write code", "Debug issues", "Explain concepts"])
  .withTone("Patient and educational");

const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: agent.build(),
    },
    {
      role: "user",
      content: "Explain how async/await works in JavaScript",
    },
  ],
});

console.log(completion.choices[0].message.content);

OpenAI Function Calling

For function calling with the OpenAI SDK:
import { createPromptBuilder } from "promptsmith-ts/builder";
import OpenAI from "openai";
import { zodToJsonSchema } from "zod-to-json-schema";
import { z } from "zod";

const openai = new OpenAI();

// Define tool with PromptSmith
const weatherSchema = z.object({
  location: z.string().describe("City name"),
  units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
});

const agent = createPromptBuilder()
  .withIdentity("Weather assistant")
  .withTool({
    name: "get_weather",
    description: "Get current weather for a location",
    schema: weatherSchema,
  });

// Convert to OpenAI function format
// (note: `functions`/`function_call` are deprecated in favor of `tools`/`tool_choice`)
const functions = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    parameters: zodToJsonSchema(weatherSchema),
  },
];

const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: agent.build() },
    { role: "user", content: "What's the weather in Tokyo?" },
  ],
  functions,
  function_call: "auto",
});
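When the model decides to call the function, the response carries `message.function_call` with `arguments` as a JSON string, which must be parsed before executing. A minimal sketch — the weather lookup itself is a stand-in, not a real API:

```typescript
// Sketch: executing a function call from the completion.
// `arguments` arrives as a JSON string; parse and apply defaults before use.
type FunctionCall = { name: string; arguments: string };

function handleFunctionCall(call: FunctionCall): string {
  if (call.name !== "get_weather") {
    throw new Error(`Unexpected function: ${call.name}`);
  }
  const args = JSON.parse(call.arguments) as { location: string; units?: string };
  const units = args.units ?? "celsius"; // same default as the zod schema above
  return `Weather in ${args.location} (${units}): [lookup result]`; // stand-in lookup
}

// Shaped like completion.choices[0].message.function_call
console.log(handleFunctionCall({ name: "get_weather", arguments: '{"location":"Tokyo"}' }));
// "Weather in Tokyo (celsius): [lookup result]"
```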

Anthropic SDK Integration

Use with Anthropic’s Claude:
import { createPromptBuilder } from "promptsmith-ts/builder";
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const agent = createPromptBuilder()
  .withIdentity("You are a creative writing assistant")
  .withCapabilities(["Generate stories", "Provide feedback", "Edit text"])
  .withTone("Creative and encouraging");

const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [
    {
      role: "user",
      content: "Help me write a short story about time travel",
    },
  ],
  system: agent.build(),
});

console.log(message.content);

Custom Framework Integration

For custom implementations or frameworks not listed here:
import { createPromptBuilder } from "promptsmith-ts/builder";
import { z } from "zod";

// Build your agent configuration
const agent = createPromptBuilder()
  .withIdentity("Custom assistant")
  .withCapabilities(["Capability 1", "Capability 2"])
  .withTool({
    name: "custom_tool",
    description: "A custom tool",
    schema: z.object({
      param: z.string(),
    }),
    execute: async ({ param }) => {
      return `Processed: ${param}`;
    },
  });

// Export as needed for your framework
const config = {
  systemPrompt: agent.build(),
  tools: agent.getTools(),
  // Add any other framework-specific config
};

// Use with your custom implementation
async function callYourLLM(userMessage: string) {
  const response = await yourCustomLLMAPI({
    system: config.systemPrompt,
    messages: [{ role: "user", content: userMessage }],
    // Your framework's specific parameters
  });

  return response;
}

Exporting Configuration

As JSON

Export your complete configuration:
import { createPromptBuilder } from "promptsmith-ts/builder";
import { z } from "zod";

const agent = createPromptBuilder()
  .withIdentity("Assistant")
  .withCapabilities(["Task 1", "Task 2"])
  .withTool({
    name: "tool1",
    description: "Tool description",
    schema: z.object({ param: z.string() }),
  });

// Export as JSON (tools may not serialize perfectly)
const config = agent.toJSON();

console.log(JSON.stringify(config, null, 2));
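The exported JSON is plain data, so it can be written to disk and checked into version control so prompt changes show up in code review. A minimal sketch using Node's `fs` — the config shape here is illustrative; use whatever `agent.toJSON()` actually returns:

```typescript
import { writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Illustrative config shape; substitute the object returned by agent.toJSON()
const config = {
  identity: "Assistant",
  capabilities: ["Task 1", "Task 2"],
};

// Persist the configuration so prompt changes are diffable
const path = join(tmpdir(), "agent-config.json");
writeFileSync(path, JSON.stringify(config, null, 2));

// Load it back later (e.g. in CI, to validate or rebuild the prompt)
const loaded = JSON.parse(readFileSync(path, "utf8"));
console.log(loaded.identity); // "Assistant"
```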

As Separate Components

import { createPromptBuilder } from "promptsmith-ts/builder";
import { z } from "zod";

const agent = createPromptBuilder()
  .withIdentity("Assistant")
  .withCapabilities(["Capability"])
  .withTool({ name: "tool", description: "Tool", schema: z.object({}) });

// Get individual components
const systemPrompt = agent.build();
const tools = agent.getTools();
const summary = agent.getSummary();

console.log("System Prompt:", systemPrompt);
console.log("Tools:", tools.length);
console.log("Summary:", summary);

Format Optimization

Optimize token usage with different formats:
import { createPromptBuilder } from "promptsmith-ts/builder";

const agent = createPromptBuilder()
  .withIdentity("Assistant")
  .withCapabilities(["Task 1", "Task 2", "Task 3"])
  .withFormat("toon"); // Use TOON format for 30-60% token reduction

const optimizedPrompt = agent.build();

// Or temporarily override format
const markdownPrompt = agent.build("markdown");
const compactPrompt = agent.build("compact");
const toonPrompt = agent.build("toon");

console.log(`Markdown: ${markdownPrompt.length} chars`);
console.log(`Compact: ${compactPrompt.length} chars`);
console.log(`TOON: ${toonPrompt.length} chars`);
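Character counts are only a proxy for cost; a common rough heuristic for English text is about four characters per token. A hedged sketch of that estimate — use your provider's real tokenizer (e.g. tiktoken) for accurate counts:

```typescript
// Rough heuristic, not a real tokenizer: ~4 characters per token for English
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Stand-in prompt strings; in practice, pass the outputs of agent.build(...)
const markdownPrompt = "# Identity\nYou are an assistant.\n## Capabilities\n- Task 1\n";
const compactPrompt = "Identity: assistant. Capabilities: Task 1.";

console.log(`Markdown: ~${estimateTokens(markdownPrompt)} tokens`);
console.log(`Compact: ~${estimateTokens(compactPrompt)} tokens`);
```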

Best Practices

Let PromptSmith handle prompt structure while you focus on agent behavior:
const agent = createPromptBuilder()
  .withIdentity("Your agent's role")
  .withCapabilities(["What it can do"])
  .withConstraints(["How it should behave"]);
Use built-in validation to catch issues:
const validation = agent.validate();

if (!validation.isValid) {
  console.error("Issues:", validation.errors);
  // Fix issues before using
}
Use debug mode to inspect your configuration:
const agent = createPromptBuilder()
  .withIdentity("Assistant")
  .withCapabilities(["Task"])
  .debug(); // Prints detailed info
Use the testing framework:
import { createTester } from "promptsmith-ts/tester";

const tester = createTester();
const results = await tester.test({
  prompt: agent,
  provider: yourLLMProvider,
  testCases: [
    {
      query: "Test query",
      expectedBehavior: "Expected response pattern",
    },
  ],
});

Framework-Specific Tips

For REST API Integrations

import { createPromptBuilder } from "promptsmith-ts/builder";

const agent = createPromptBuilder()
  .withIdentity("API assistant")
  .withCapabilities(["Process requests"]);

// Use in API handler
app.post("/chat", async (req, res) => {
  const { message } = req.body;

  const response = await fetch("https://api.your-llm-provider.com/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      system: agent.build(),
      messages: [{ role: "user", content: message }],
    }),
  });

  const data = await response.json();
  res.json(data);
});

For Streaming Responses

import { createPromptBuilder } from "promptsmith-ts/builder";

const agent = createPromptBuilder()
  .withIdentity("Streaming assistant")
  .withCapabilities(["Stream responses"]);

async function* streamResponse(userMessage: string) {
  const stream = await yourLLMProvider.stream({
    system: agent.build(),
    messages: [{ role: "user", content: userMessage }],
  });

  for await (const chunk of stream) {
    yield chunk;
  }
}
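The generator above can be consumed with `for await`. A self-contained sketch, with a mocked chunk stream standing in for the real provider:

```typescript
// Mocked chunk stream standing in for yourLLMProvider.stream(...)
async function* mockStream(): AsyncGenerator<string> {
  for (const chunk of ["Hello", ", ", "world"]) yield chunk;
}

// Accumulate the full response while rendering chunks as they arrive
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    process.stdout.write(chunk); // incremental render
    full += chunk;
  }
  return full;
}

const text = await collect(mockStream());
console.log(`\nFull response: ${text}`); // "Hello, world"
```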

Next Steps

Vercel AI SDK

Native integration with Vercel AI SDK

Mastra Integration

Full-featured agent framework integration
