Let’s build a simple travel assistant agent that can recommend destinations and plan itineraries. This guide will walk you through the core concepts of PromptSmith.
1. Import the Builder

Start by importing the `createPromptBuilder` function from PromptSmith:

```ts
import { createPromptBuilder } from "promptsmith-ts/builder";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
```
Make sure you’ve already installed the required packages. See the Installation guide if you haven’t.
2. Create the Agent

Use the builder's fluent API to configure your agent:

```ts
const agent = createPromptBuilder()
  .withIdentity("You are a helpful travel assistant")
  .withCapabilities([
    "Recommend destinations based on user preferences",
    "Plan detailed itineraries",
    "Provide travel tips and advice",
  ])
  .withTone("Enthusiastic, knowledgeable, and helpful");
```
What's happening here?

- `withIdentity()` defines who or what the agent is
- `withCapabilities()` lists what the agent can do
- `withTone()` sets the communication style
3. Generate a Response

Use the agent with the Vercel AI SDK:

```ts
const { text } = await generateText({
  model: openai("gpt-4"),
  ...agent.toAiSdk(), // Spreads { system, tools }
  prompt: "I want to visit Japan for 2 weeks. What should I see?",
});

console.log(text);
```
The `.toAiSdk()` method exports the complete configuration for the Vercel AI SDK, including the system prompt and any tools.
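To make the spread concrete, here is a standalone sketch of the shape `.toAiSdk()` is assumed to return, based on the `{ system, tools }` comment in the example above (the actual object PromptSmith produces may carry additional fields):

```typescript
// Assumed shape of the .toAiSdk() result -- a sketch, not PromptSmith's source.
type AiSdkConfig = {
  system: string;                 // the assembled system prompt
  tools: Record<string, unknown>; // tool definitions, keyed by name
};

const exported: AiSdkConfig = {
  system: "You are a helpful travel assistant",
  tools: {}, // empty here; populated once tools are registered via .withTool()
};

// Spreading merges both fields into the generateText() options:
const callOptions = {
  model: "gpt-4", // stand-in string; the real call passes openai("gpt-4")
  ...exported,
  prompt: "I want to visit Japan for 2 weeks. What should I see?",
};

console.log(Object.keys(callOptions)); // [ 'model', 'system', 'tools', 'prompt' ]
```

Because the result is a plain object, you can also inspect or log it before passing it to the SDK.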
Full example:

```ts
import { createPromptBuilder } from "promptsmith-ts/builder";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

async function main() {
  // Create a basic travel assistant
  const agent = createPromptBuilder()
    .withIdentity("You are a helpful travel assistant")
    .withCapabilities([
      "Recommend destinations based on user preferences",
      "Plan detailed itineraries",
      "Provide travel tips and advice",
    ])
    .withTone("Enthusiastic, knowledgeable, and helpful");

  // Generate response
  const { text } = await generateText({
    model: openai("gpt-4"),
    ...agent.toAiSdk(),
    prompt: "I want to visit Japan for 2 weeks. What should I see?",
  });

  console.log(text);
}

main().catch(console.error);
```
Now let’s enhance our agent with a tool to fetch real-time weather data:
1. Define the Tool

Import Zod and create the function your tool will execute (the schema comes in the next step):

```ts
import { z } from "zod";

// Mock weather API (replace with a real API in production)
async function fetchWeather(location: string, units: string) {
  return {
    location,
    temperature: units === "celsius" ? "22°C" : "72°F",
    conditions: "Partly cloudy",
    humidity: "65%",
  };
}
```
2. Register the Tool

Add the tool to your agent with `.withTool()`:

```ts
const weatherAgent = createPromptBuilder()
  .withIdentity("You are a weather information assistant")
  .withCapabilities([
    "Provide current weather conditions",
    "Answer weather-related questions",
  ])
  .withTool({
    name: "get_weather",
    description: "Get current weather for a location",
    schema: z.object({
      location: z.string().describe("City name or coordinates"),
      units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
    }),
    execute: async ({ location, units }) => {
      return await fetchWeather(location, units);
    },
  })
  .withConstraint("must", "Always use the weather tool for current conditions")
  .withTone("Friendly and informative");
```
The Zod schema provides full type inference for the `execute` function parameters. TypeScript knows that `location` is a `string` and `units` is `"celsius" | "fahrenheit"`.
3. Use the Agent

The tool is automatically included when you export to the AI SDK:

```ts
const { text } = await generateText({
  model: openai("gpt-4"),
  ...weatherAgent.toAiSdk(), // Includes both system prompt and tools
  prompt: "What's the weather like in Tokyo?",
});
```
Full example:

```ts
import { createPromptBuilder } from "promptsmith-ts/builder";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Mock weather API
async function fetchWeather(location: string, units: string) {
  return {
    location,
    temperature: units === "celsius" ? "22°C" : "72°F",
    conditions: "Partly cloudy",
    humidity: "65%",
  };
}

async function main() {
  const weatherAgent = createPromptBuilder()
    .withIdentity("You are a weather information assistant")
    .withCapabilities([
      "Provide current weather conditions",
      "Answer weather-related questions",
    ])
    .withTool({
      name: "get_weather",
      description: "Get current weather for a location",
      schema: z.object({
        location: z.string().describe("City name or coordinates"),
        units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
      }),
      execute: async ({ location, units }) => {
        return await fetchWeather(location, units);
      },
    })
    .withConstraint("must", "Always use the weather tool for current conditions")
    .withConstraint("must_not", "Provide weather information without checking the tool")
    .withTone("Friendly and informative");

  const { text } = await generateText({
    model: openai("gpt-4"),
    ...weatherAgent.toAiSdk(),
    prompt: "What's the weather like in Tokyo?",
  });

  console.log(text);
}

main().catch(console.error);
```
For production agents, always enable security guardrails:

```ts
const secureAgent = createPromptBuilder()
  .withIdentity("You are a customer service assistant")
  .withCapabilities(["Answer questions", "Process returns"])
  .withGuardrails() // Enable anti-prompt-injection security
  .withForbiddenTopics([
    "Medical diagnosis or treatment advice",
    "Legal advice or interpretation of laws",
    "Financial investment recommendations",
  ])
  .withConstraint("must", "Always verify user authentication before accessing personal data")
  .withConstraint("must_not", "Store or log sensitive information");
```
Always enable guardrails for production agents that handle user input. Use `.withGuardrails()` to protect against prompt injection attacks.