Overview
The prompt() function provides version-aware prompt management integrated with ZeroEval’s Prompt Library. It enables automatic optimization, content-addressed storage, template variable interpolation, and metadata decoration for your prompts.
Function Signature
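The exact signature ships with the SDK; the following is a hedged TypeScript sketch. The content and from fields are described on this page, while name and variables are assumptions inferred from the template-interpolation feature:

```typescript
// Hedged sketch of the prompt() signature; field names beyond
// `content` and `from` are assumptions based on this page.
interface PromptConfig {
  name: string;                 // prompt identifier in the Prompt Library (assumed)
  content?: string;             // fallback / explicit prompt text, may contain {{variables}}
  from?: "explicit" | "latest" | string; // version mode, or a 64-character content hash
  variables?: Record<string, string>;    // values interpolated into {{placeholders}} (assumed)
}

declare function prompt(config: PromptConfig): Promise<string>;

// Example configuration object:
const config: PromptConfig = {
  name: "summarizer",
  content: "Summarize the following text: {{text}}",
  from: "explicit",
  variables: { text: "..." },
};
```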
Parameters
Configuration object for the prompt request
Return Value
Returns a Promise<string> containing the decorated prompt. The prompt includes embedded metadata that ZeroEval’s wrappers recognize and extract.
Version Control Modes
Auto-optimization Mode (Default)
When you provide content without specifying from, ZeroEval automatically tries to fetch the latest optimized version. If no optimized version exists, it falls back to your provided content.
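The fallback behavior can be modeled with a small synchronous sketch. The in-memory Map stands in for the Prompt Library backend and is not the real API; the actual function is asynchronous:

```typescript
// Illustrative model of auto-optimization fallback; the Map stands in
// for ZeroEval's backend and is NOT the real API.
const optimizedStore = new Map<string, string>(); // name -> optimized content

function resolveAuto(name: string, content: string): string {
  // Prefer the latest optimized version; fall back to the provided content.
  return optimizedStore.get(name) ?? content;
}

// No optimized version yet -> the provided content is used:
const first = resolveAuto("summarizer", "Summarize: {{text}}");

// Once optimization has produced a better version, it wins:
optimizedStore.set("summarizer", "You are an expert summarizer. Summarize: {{text}}");
const second = resolveAuto("summarizer", "Summarize: {{text}}");
```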
Explicit Mode
Use from: "explicit" to always use your provided content and bypass auto-optimization:
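A call-site sketch (the import path is an assumption):

```typescript
import { prompt } from "zeroeval"; // hypothetical import path

const p = await prompt({
  name: "support-triage",
  content: "Classify this support ticket: {{ticket}}",
  from: "explicit", // always use the content above; skip auto-optimization
});
```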
Latest Mode
Use from: "latest" to require that an optimized version exists:
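A call-site sketch (the import path is an assumption). Note there is no content fallback here, so the call fails if nothing has been optimized yet:

```typescript
import { prompt } from "zeroeval"; // hypothetical import path

// Throws if no optimized version exists for this prompt name:
const p = await prompt({
  name: "support-triage",
  from: "latest",
});
```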
Hash Mode
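Hash mode pins the prompt to one immutable version. A self-contained sketch of the lookup and hash validation, where the Map and the hash value are illustrative stand-ins for the backend:

```typescript
// Illustrative model of hash-mode lookup; the Map stands in for the backend.
const byHash = new Map<string, string>(); // sha256 hex -> prompt content

const HASH_RE = /^[0-9a-f]{64}$/; // 64-character lowercase hex

function resolveByHash(hash: string): string {
  if (!HASH_RE.test(hash)) throw new Error(`Invalid hash format: ${hash}`);
  const content = byHash.get(hash);
  if (content === undefined) throw new Error(`Prompt not found for hash ${hash}`);
  return content;
}

const demoHash = "a".repeat(64); // illustrative, not a real content hash
byHash.set(demoHash, "Summarize: {{text}}");
const pinned = resolveByHash(demoHash);
```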
Fetch a specific version by its content hash for reproducible deployments.
Template Variables
Use {{variable}} syntax in your prompts for dynamic content interpolation:
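Interpolation can be sketched as simple placeholder replacement. This models the behavior only; whitespace handling and escaping rules here are assumptions, not the SDK's documented semantics:

```typescript
// Minimal {{variable}} interpolation model; whitespace tolerance inside
// braces and the treatment of unknown keys are assumptions.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match // leave unknown placeholders untouched
  );
}

const rendered = interpolate(
  "Translate {{text}} into {{language}}.",
  { text: "hello", language: "French" }
);
// rendered === "Translate hello into French."
```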
Content-Addressed Storage
Prompts are identified by their SHA-256 content hash, enabling:
- Deduplication: Identical prompts share the same hash
- Versioning: Each unique content gets a unique hash
- Reproducibility: Reference exact versions via hash
- Caching: Avoid redundant storage and fetches
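The hashing scheme can be demonstrated with Node's crypto module. This is a sketch of the idea; ZeroEval's exact canonicalization (e.g. whitespace or encoding normalization) is not specified on this page:

```typescript
import { createHash } from "node:crypto";

// SHA-256 hex digest of the prompt content: identical content yields an
// identical hash (deduplication); any change yields a new hash (versioning).
function contentHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

const a = contentHash("Summarize: {{text}}");
const b = contentHash("Summarize: {{text}}");
const c = contentHash("Summarize concisely: {{text}}");
// a === b (deduplication), a !== c (new version), a.length === 64
```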
Usage Examples
Basic Usage with OpenAI
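A hedged sketch, assuming the official openai package and a prompt export from the ZeroEval SDK (the zeroeval import path is an assumption):

```typescript
import OpenAI from "openai";
import { prompt } from "zeroeval"; // hypothetical import path

const openai = new OpenAI();

const system = await prompt({
  name: "concise-assistant",
  content: "You are a concise, accurate assistant.",
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: system },
    { role: "user", content: "Explain content-addressed storage in one sentence." },
  ],
});
console.log(completion.choices[0].message.content);
```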
Explicit Mode for Testing
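In tests, explicit mode keeps runs deterministic by preventing a newly optimized version from being picked up mid-suite. A sketch (import path is an assumption):

```typescript
import { prompt } from "zeroeval"; // hypothetical import path

// Pin the exact text so test runs are reproducible:
const testPrompt = await prompt({
  name: "summarizer",
  content: "Summarize the following text: {{text}}",
  from: "explicit",
});
```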
Dynamic Variables
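A sketch assuming the config accepts a variables map for {{placeholder}} interpolation (both the import path and the variables parameter are assumptions):

```typescript
import { prompt } from "zeroeval"; // hypothetical import path

const p = await prompt({
  name: "translator",
  content: "Translate {{text}} into {{language}}.",
  variables: { text: "hello", language: "French" }, // assumed parameter
});
```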
Multi-Prompt Workflow
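A two-stage sketch, chaining one completion's output into the next prompt (the zeroeval import path and the variables parameter are assumptions):

```typescript
import OpenAI from "openai";
import { prompt } from "zeroeval"; // hypothetical import path

const openai = new OpenAI();

// Stage 1: extract key points.
const extractPrompt = await prompt({
  name: "extract-key-points",
  content: "List the key points of: {{text}}",
  variables: { text: "..." }, // assumed parameter
});
const extraction = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: extractPrompt }],
});

// Stage 2: summarize the extracted points.
const summaryPrompt = await prompt({
  name: "summarize-key-points",
  content: "Write a short summary from these key points: {{points}}",
  variables: { points: extraction.choices[0].message.content ?? "" },
});
const summary = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: summaryPrompt }],
});
```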
Streaming with Prompts
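Decorated prompts work with streaming completions like any other string. A sketch using the openai package's streaming iterator (the zeroeval import path is an assumption):

```typescript
import OpenAI from "openai";
import { prompt } from "zeroeval"; // hypothetical import path

const openai = new OpenAI();

const system = await prompt({
  name: "concise-assistant",
  content: "You are a concise, accurate assistant.",
});

const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  stream: true,
  messages: [
    { role: "system", content: system },
    { role: "user", content: "Write a short haiku about hashes." },
  ],
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```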
Error Handling
The function throws errors in the following cases:
- Missing parameters: neither content nor from is provided
- Invalid explicit mode: from: "explicit" without content
- Prompt not found: from: "latest" or hash mode when the version doesn’t exist
- Invalid hash format: hash is not a 64-character lowercase hex string
- Network errors: Backend is unreachable or returns error status
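The client-side checks can be modeled like this; the validation helper and error messages are illustrative, not the SDK's exact behavior:

```typescript
// Illustrative validation mirroring the error cases above; NOT the SDK's code.
interface PromptConfig {
  name: string;
  content?: string;
  from?: string;
}

function validate(cfg: PromptConfig): void {
  if (cfg.content === undefined && cfg.from === undefined) {
    throw new Error("Missing parameters: provide content and/or from");
  }
  if (cfg.from === "explicit" && cfg.content === undefined) {
    throw new Error('from: "explicit" requires content');
  }
  const isNamedMode =
    cfg.from === undefined || cfg.from === "explicit" || cfg.from === "latest";
  if (!isNamedMode && !/^[0-9a-f]{64}$/.test(cfg.from!)) {
    throw new Error(`Invalid hash format: ${cfg.from}`);
  }
}
```

In application code, wrap the call in try/catch and fall back to a baked-in prompt when the backend is unreachable.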
Metadata Decoration
The returned prompt includes embedded metadata that ZeroEval’s wrappers automatically extract, enabling:
- Tracking which prompt version generated each completion
- Correlating feedback with specific prompt versions
- Analyzing performance across prompt versions
- Variable interpolation in wrapped LLM clients
See Also
- sendFeedback() - Provide feedback on completions for optimization
- wrap() - Wrap OpenAI/Vercel AI clients for automatic tracing
- Prompt Management Guide - Learn about prompt optimization workflows