Overview

ZeroEval provides type-safe prompt management with support for versioning, template variables, content hashing, and metadata. These types enable you to fetch, version, and track prompts throughout your application.

Type Definitions

PromptOptions

interface PromptOptions {
  name: string;
  content?: string;
  variables?: Record<string, string>;
  from?: 'latest' | 'explicit' | string;
}

Prompt

interface Prompt {
  content: string;
  version: number | null;
  versionId: string | null;
  taskId: string | null;
  promptSlug: string | null;
  tag: string | null;
  isLatest: boolean;
  model: string | null;
  contentHash: string | null;
  metadata: Record<string, unknown>;
  source: 'server' | 'fallback';
}

PromptMetadata

interface PromptMetadata {
  task: string;
  variables?: Record<string, string>;
  prompt_slug?: string;
  prompt_version?: number;
  prompt_version_id?: string;
  content_hash?: string;
}

PromptResponse

interface PromptResponse {
  prompt: string;
  task_id: string | null;
  version_id: string;
  version: number;
  tag: string | null;
  is_latest: boolean;
  content: string;
  metadata: Record<string, unknown>;
  model_id: string | null;
  content_hash: string | null;
  evaluation_type: 'binary' | 'scored' | null;
  score_min: number | null;
  score_max: number | null;
  pass_threshold: number | null;
  temperature: number | null;
  created_by: string;
  updated_by: string;
  created_at: string;
  updated_at: string;
}

PromptOptions Properties

name
string
required
The task name associated with the prompt. This identifies which prompt to fetch from ZeroEval.
content
string
Raw prompt content. Used as fallback when the server version is unavailable, or as the explicit content when from: 'explicit'.
variables
Record<string, string>
Template variables to interpolate into the prompt. Variables are replaced using {{variable}} syntax in the prompt content.
from
'latest' | 'explicit' | string
Version control mode:
  • 'latest': Fetch the latest version from the server (falls back to the content parameter if provided; fails if no version exists and no fallback is given)
  • 'explicit': Always use the provided content parameter (bypasses auto-optimization)
  • '<hash>': Fetch a specific version by its 64-character SHA-256 content hash
Default: 'latest'

Prompt Properties

content
string
required
The prompt content with variables interpolated.
version
number | null
The version number of the prompt.
versionId
string | null
The unique identifier for this prompt version.
taskId
string | null
The task UUID from the backend.
promptSlug
string | null
The prompt slug identifier.
tag
string | null
Optional tag associated with this prompt version.
isLatest
boolean
required
Whether this is the latest version of the prompt.
model
string | null
The model ID associated with this prompt.
contentHash
string | null
SHA-256 hash of the prompt content for content-addressable versioning.
metadata
Record<string, unknown>
required
Arbitrary metadata associated with the prompt.
source
'server' | 'fallback'
required
Indicates whether the prompt was fetched from the server or is using fallback content.

Usage Examples

Basic Prompt Fetch

import { ze } from 'zeroeval';

const prompt = await ze.prompt({
  name: 'customer-support-greeting'
});

console.log(prompt.content);

Prompt with Variables

const prompt = await ze.prompt({
  name: 'personalized-email',
  variables: {
    userName: 'Alice',
    productName: 'ZeroEval',
    discountCode: 'SAVE20'
  }
});

// Prompt content: "Hi {{userName}}, try {{productName}} with code {{discountCode}}!"
// Result: "Hi Alice, try ZeroEval with code SAVE20!"

Explicit Mode (Bypass Versioning)

const prompt = await ze.prompt({
  name: 'test-prompt',
  content: 'This is my explicit prompt content',
  from: 'explicit'
});

// Always uses the provided content, no server fetch

Fallback Content

const prompt = await ze.prompt({
  name: 'chat-greeting',
  content: 'Hello! How can I help you today?',
  from: 'latest'
});

// Tries to fetch from server
// If server is unavailable or prompt doesn't exist, uses fallback content
if (prompt.source === 'fallback') {
  console.log('Using fallback prompt');
}

Fetch Specific Version by Hash

const prompt = await ze.prompt({
  name: 'production-prompt',
  from: 'a3b2c1d4e5f6789012345678901234567890123456789012345678901234abcd'
});

// Fetches the exact version with this content hash

Using Prompt Metadata

const prompt = await ze.prompt({
  name: 'sentiment-analysis',
  variables: { text: 'This product is amazing!' }
});

console.log('Version:', prompt.version);
console.log('Is Latest:', prompt.isLatest);
console.log('Content Hash:', prompt.contentHash);
console.log('Metadata:', prompt.metadata);

Template Variable Interpolation

const prompt = await ze.prompt({
  name: 'sql-query-generator',
  variables: {
    tableName: 'users',
    column: 'email',
    condition: 'active = true'
  }
});

// Prompt template: "SELECT {{column}} FROM {{tableName}} WHERE {{condition}}"
// Result: "SELECT email FROM users WHERE active = true"
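The SDK performs this interpolation internally; a minimal sketch of the {{variable}} replacement described above (illustrative only, not the SDK's actual implementation) looks like this:

```typescript
// Replace each {{name}} placeholder with its value from `variables`.
// Unknown placeholders are left intact rather than removed.
function interpolate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (placeholder, name: string) =>
    name in variables ? variables[name] : placeholder
  );
}

const result = interpolate(
  'SELECT {{column}} FROM {{tableName}} WHERE {{condition}}',
  { tableName: 'users', column: 'email', condition: 'active = true' }
);
// → "SELECT email FROM users WHERE active = true"
```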

Working with Prompt Versions

import assert from 'node:assert';

// Fetch latest version
const latestPrompt = await ze.prompt({
  name: 'classification',
  from: 'latest'
});

// Store the content hash for reproducibility
const hash = latestPrompt.contentHash;

// Later, fetch exact same version
const samePrompt = await ze.prompt({
  name: 'classification',
  from: hash!
});

// Both prompts have identical content
assert(latestPrompt.content === samePrompt.content);

PromptMetadata

Prompt metadata is embedded in decorated prompts using the format:
<zeroeval>{JSON}</zeroeval>{content}
This metadata tracks version information and template variables:
const metadata: PromptMetadata = {
  task: 'customer-greeting',
  variables: { name: 'Alice' },
  prompt_slug: 'greeting-v2',
  prompt_version: 3,
  prompt_version_id: 'version-uuid',
  content_hash: 'sha256-hash'
};
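Given the `<zeroeval>{JSON}</zeroeval>{content}` format above, a decorated prompt can be split back into its metadata and content. The helper below is a hypothetical sketch (not part of the SDK) showing how that parsing might work:

```typescript
interface PromptMetadata {
  task: string;
  variables?: Record<string, string>;
  prompt_slug?: string;
  prompt_version?: number;
  prompt_version_id?: string;
  content_hash?: string;
}

// Hypothetical helper: extract the embedded <zeroeval>{JSON}</zeroeval>
// block from a decorated prompt, returning the metadata and the
// remaining content. Undecorated strings pass through unchanged.
function parseDecoratedPrompt(decorated: string): {
  metadata: PromptMetadata | null;
  content: string;
} {
  const match = decorated.match(/^<zeroeval>(.*?)<\/zeroeval>/s);
  if (!match) return { metadata: null, content: decorated };
  return {
    metadata: JSON.parse(match[1]) as PromptMetadata,
    content: decorated.slice(match[0].length),
  };
}

const { metadata, content } = parseDecoratedPrompt(
  '<zeroeval>{"task":"customer-greeting","prompt_version":3}</zeroeval>Hello, Alice!'
);
// metadata.task → "customer-greeting", content → "Hello, Alice!"
```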

PromptResponse

The PromptResponse type represents the raw response from the ZeroEval backend. The SDK automatically transforms this into the Prompt type for easier consumption. Key fields:
evaluation_type
'binary' | 'scored' | null
The type of evaluation configured for this prompt:
  • 'binary': Pass/fail evaluation
  • 'scored': Numerical scoring
  • null: No evaluation configured
score_min
number | null
Minimum score for scored evaluations.
score_max
number | null
Maximum score for scored evaluations.
pass_threshold
number | null
Threshold score for passing evaluations.
temperature
number | null
Suggested temperature parameter for LLM calls using this prompt.
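The snake_case-to-camelCase transformation the SDK applies could be sketched as follows. This is illustrative only, not the SDK's actual implementation: the types are trimmed to the relevant fields, and mapping promptSlug from the response's prompt field is an assumption.

```typescript
// Trimmed subset of the raw backend response (evaluation and audit
// fields omitted for brevity).
interface PromptResponse {
  prompt: string;
  task_id: string | null;
  version_id: string;
  version: number;
  tag: string | null;
  is_latest: boolean;
  content: string;
  metadata: Record<string, unknown>;
  model_id: string | null;
  content_hash: string | null;
}

interface Prompt {
  content: string;
  version: number | null;
  versionId: string | null;
  taskId: string | null;
  promptSlug: string | null;
  tag: string | null;
  isLatest: boolean;
  model: string | null;
  contentHash: string | null;
  metadata: Record<string, unknown>;
  source: 'server' | 'fallback';
}

// Illustrative mapping from the raw response to the SDK-facing type.
function toPrompt(res: PromptResponse): Prompt {
  return {
    content: res.content,
    version: res.version,
    versionId: res.version_id,
    taskId: res.task_id,
    promptSlug: res.prompt, // assumption: `prompt` carries the slug
    tag: res.tag,
    isLatest: res.is_latest,
    model: res.model_id,
    contentHash: res.content_hash,
    metadata: res.metadata,
    source: 'server', // a raw response always originates from the server
  };
}
```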
