Overview
ZeroEval provides type-safe prompt management with support for versioning, template variables, content hashing, and metadata. These types enable you to fetch, version, and track prompts throughout your application.
Type Definitions
PromptOptions
interface PromptOptions {
  name: string;
  content?: string;
  variables?: Record<string, string>;
  from?: 'latest' | 'explicit' | string;
}
Prompt
interface Prompt {
  content: string;
  version: number | null;
  versionId: string | null;
  taskId: string | null;
  promptSlug: string | null;
  tag: string | null;
  isLatest: boolean;
  model: string | null;
  contentHash: string | null;
  metadata: Record<string, unknown>;
  source: 'server' | 'fallback';
}
PromptMetadata
interface PromptMetadata {
  task: string;
  variables?: Record<string, string>;
  prompt_slug?: string;
  prompt_version?: number;
  prompt_version_id?: string;
  content_hash?: string;
}
PromptResponse
interface PromptResponse {
  prompt: string;
  task_id: string | null;
  version_id: string;
  version: number;
  tag: string | null;
  is_latest: boolean;
  content: string;
  metadata: Record<string, unknown>;
  model_id: string | null;
  content_hash: string | null;
  evaluation_type: 'binary' | 'scored' | null;
  score_min: number | null;
  score_max: number | null;
  pass_threshold: number | null;
  temperature: number | null;
  created_by: string;
  updated_by: string;
  created_at: string;
  updated_at: string;
}
PromptOptions Properties
name
string
required
The task name associated with the prompt. This identifies which prompt to fetch from ZeroEval.
content
string
Raw prompt content. Used as fallback content when the server version is unavailable, or as the explicit content when from: 'explicit'.
variables
Record<string, string>
Template variables to interpolate into the prompt. Variables are replaced using {{variable}} syntax in the prompt content.
from
'latest' | 'explicit' | string
Version control mode:
'latest': Fetch the latest version from the server (fails if none exists)
'explicit': Always use the provided content parameter (bypasses auto-optimization)
'<hash>': Fetch a specific version by its 64-character SHA-256 content hash
Default: 'latest'
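The {{variable}} substitution described above can be sketched as a single replacement pass. Note that `interpolate` is a hypothetical helper written for illustration, not an SDK export, and the SDK's handling of unknown variables may differ:

```typescript
// Hypothetical helper illustrating {{variable}} substitution.
// Unknown variables are left in place rather than throwing.
function interpolate(
  template: string,
  variables: Record<string, string> = {}
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in variables ? variables[key] : match
  );
}
```

For example, interpolate('Hi {{userName}}!', { userName: 'Alice' }) yields 'Hi Alice!'.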
Prompt Properties
content
string
required
The prompt content with variables interpolated.
version
number | null
The version number of the prompt.
versionId
string | null
The unique identifier for this prompt version.
taskId
string | null
The task UUID from the backend.
promptSlug
string | null
The prompt slug identifier.
tag
string | null
Optional tag associated with this prompt version.
isLatest
boolean
required
Whether this is the latest version of the prompt.
model
string | null
The model ID associated with this prompt.
contentHash
string | null
SHA-256 hash of the prompt content for content-addressable versioning.
metadata
Record<string, unknown>
required
Arbitrary metadata associated with the prompt.
source
'server' | 'fallback'
required
Indicates whether the prompt was fetched from the server or is using fallback content.
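A hash like contentHash can be reproduced locally. This sketch assumes the hash is a plain SHA-256 over the raw UTF-8 prompt text; any canonicalization the backend applies before hashing is not specified here:

```typescript
import { createHash } from 'node:crypto';

// Sketch only: SHA-256 over the raw prompt text, hex-encoded.
// The backend may normalize content differently before hashing.
function contentHash(content: string): string {
  return createHash('sha256').update(content, 'utf8').digest('hex');
}

// Produces a 64-character lowercase hex string, the same shape
// accepted by the from: '<hash>' version-control mode.
```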
Usage Examples
Basic Prompt Fetch
import { ze } from 'zeroeval';
const prompt = await ze.prompt({
  name: 'customer-support-greeting'
});
console.log(prompt.content);
Prompt with Variables
const prompt = await ze.prompt({
  name: 'personalized-email',
  variables: {
    userName: 'Alice',
    productName: 'ZeroEval',
    discountCode: 'SAVE20'
  }
});
// Prompt content: "Hi {{userName}}, try {{productName}} with code {{discountCode}}!"
// Result: "Hi Alice, try ZeroEval with code SAVE20!"
Explicit Mode (Bypass Versioning)
const prompt = await ze.prompt({
  name: 'test-prompt',
  content: 'This is my explicit prompt content',
  from: 'explicit'
});
// Always uses the provided content, no server fetch
Fallback Content
const prompt = await ze.prompt({
  name: 'chat-greeting',
  content: 'Hello! How can I help you today?',
  from: 'latest'
});

// Tries to fetch from the server.
// If the server is unavailable or the prompt doesn't exist, uses the fallback content.
if (prompt.source === 'fallback') {
  console.log('Using fallback prompt');
}
Fetch Specific Version by Hash
const prompt = await ze.prompt({
  name: 'production-prompt',
  from: 'a3b2c1d4e5f6789012345678901234567890123456789012345678901234abcd'
});
// Fetches the exact version with this content hash
Inspecting Version Information
const prompt = await ze.prompt({
  name: 'sentiment-analysis',
  variables: { text: 'This product is amazing!' }
});
console.log('Version:', prompt.version);
console.log('Is Latest:', prompt.isLatest);
console.log('Content Hash:', prompt.contentHash);
console.log('Metadata:', prompt.metadata);
Template Variable Interpolation
const prompt = await ze.prompt({
  name: 'sql-query-generator',
  variables: {
    tableName: 'users',
    column: 'email',
    condition: 'active = true'
  }
});
// Prompt template: "SELECT {{column}} FROM {{tableName}} WHERE {{condition}}"
// Result: "SELECT email FROM users WHERE active = true"
Working with Prompt Versions
// Fetch the latest version
const latestPrompt = await ze.prompt({
  name: 'classification',
  from: 'latest'
});

// Store the content hash for reproducibility
const hash = latestPrompt.contentHash;

// Later, fetch the exact same version
const samePrompt = await ze.prompt({
  name: 'classification',
  from: hash!
});

// Both prompts have identical content
console.assert(latestPrompt.content === samePrompt.content);
Embedded Metadata Format
Prompt metadata is embedded in decorated prompts using the format:
<zeroeval>{JSON}</zeroeval>{content}
This metadata tracks version information and template variables:
const metadata: PromptMetadata = {
  task: 'customer-greeting',
  variables: { name: 'Alice' },
  prompt_slug: 'greeting-v2',
  prompt_version: 3,
  prompt_version_id: 'version-uuid',
  content_hash: 'sha256-hash'
};
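Given the <zeroeval>{JSON}</zeroeval>{content} format, splitting a decorated prompt back into its parts can be sketched as below. Here parseDecoratedPrompt is a hypothetical helper for illustration, not an SDK export:

```typescript
interface PromptMetadata {
  task: string;
  variables?: Record<string, string>;
  prompt_slug?: string;
  prompt_version?: number;
  prompt_version_id?: string;
  content_hash?: string;
}

// Hypothetical helper: recover the metadata block and the raw content
// from a decorated prompt string.
function parseDecoratedPrompt(
  decorated: string
): { metadata: PromptMetadata | null; content: string } {
  const match = decorated.match(/^<zeroeval>(.*?)<\/zeroeval>/s);
  if (!match) return { metadata: null, content: decorated };
  return {
    metadata: JSON.parse(match[1]) as PromptMetadata,
    content: decorated.slice(match[0].length),
  };
}
```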
PromptResponse
The PromptResponse type represents the raw response from the ZeroEval backend. The SDK automatically transforms this into the Prompt type for easier consumption.
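The transformation can be pictured as a field-by-field mapping from the snake_case response to the camelCase Prompt. This is an illustrative guess at that mapping over a subset of the fields, not the SDK's actual implementation:

```typescript
// Minimal shapes for the sketch (subset of the full interfaces).
interface PromptResponseSubset {
  content: string;
  version: number;
  version_id: string;
  task_id: string | null;
  tag: string | null;
  is_latest: boolean;
  model_id: string | null;
  content_hash: string | null;
  metadata: Record<string, unknown>;
}

interface PromptSubset {
  content: string;
  version: number | null;
  versionId: string | null;
  taskId: string | null;
  tag: string | null;
  isLatest: boolean;
  model: string | null;
  contentHash: string | null;
  metadata: Record<string, unknown>;
  source: 'server' | 'fallback';
}

// Hypothetical mapping: rename snake_case fields to camelCase and
// mark the result as server-sourced, since it came from the backend.
function toPrompt(res: PromptResponseSubset): PromptSubset {
  return {
    content: res.content,
    version: res.version,
    versionId: res.version_id,
    taskId: res.task_id,
    tag: res.tag,
    isLatest: res.is_latest,
    model: res.model_id,
    contentHash: res.content_hash,
    metadata: res.metadata,
    source: 'server',
  };
}
```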
Key fields:
evaluation_type
'binary' | 'scored' | null
The type of evaluation configured for this prompt:
'binary': Pass/fail evaluation
'scored': Numerical scoring
null: No evaluation configured
score_min
number | null
Minimum score for scored evaluations.
score_max
number | null
Maximum score for scored evaluations.
pass_threshold
number | null
Threshold score for passing scored evaluations.
temperature
number | null
Suggested temperature parameter for LLM calls using this prompt.
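Assuming a scored evaluation passes when the score meets or exceeds pass_threshold, and a binary one passes on true, these fields might be consumed as follows. The didPass helper is hypothetical, shown only to illustrate the field semantics:

```typescript
interface EvalConfig {
  evaluation_type: 'binary' | 'scored' | null;
  score_min: number | null;
  score_max: number | null;
  pass_threshold: number | null;
}

// Hypothetical helper: interpret an evaluation result against the config.
// Assumes scored evaluations pass at-or-above pass_threshold.
function didPass(config: EvalConfig, result: number | boolean): boolean {
  switch (config.evaluation_type) {
    case 'binary':
      return result === true;
    case 'scored':
      return (
        typeof result === 'number' &&
        config.pass_threshold !== null &&
        result >= config.pass_threshold
      );
    default:
      return true; // no evaluation configured
  }
}
```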