Execute text generation with AI providers. Returns a promise that resolves to the final generated text.
Method Signature
execute(
  params: IAIProvidersExecuteParams &
    ({ onProgress?: (chunk: string, accumulatedText: string) => void } |
     { abortController?: AbortController })
): Promise<string>

// Legacy usage (deprecated)
execute(
  params: IAIProvidersExecuteParams &
    { onProgress?: undefined; abortController?: undefined }
): Promise<IChunkHandler>
Parameters
provider
The AI provider to use for text generation. Must be an entry in the aiProviders.providers array.
prompt
Simple text prompt for generation. Use either prompt or messages, not both.
prompt: "What is the capital of Great Britain?"
messages
Array of chat messages with roles. Use either messages or prompt, not both.
messages: [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the capital of Great Britain?" }
]
Message format with images
Messages can include images using content blocks:
messages: [
  { role: "system", content: "You are a helpful image analyst." },
  {
    role: "user",
    content: [
      { type: "text", text: "Describe what you see in this image" },
      {
        type: "image_url",
        image_url: { url: "data:image/jpeg;base64,/9j/4AAQSkZ..." }
      }
    ]
  }
]
systemPrompt
System prompt used with the simple prompt format. Ignored if messages is provided.
systemPrompt: "You are a helpful geography assistant."
Array of image URLs or base64-encoded images (legacy format).
options
Additional generation options to pass to the provider.
options.temperature
Controls randomness (0.0 to 2.0). Lower values are more deterministic.
options.max_tokens
Maximum number of tokens to generate.
options.top_p
Nucleus sampling parameter (0.0 to 1.0).
options.frequency_penalty
Penalize frequent tokens (-2.0 to 2.0).
options.presence_penalty
Penalize repeated tokens (-2.0 to 2.0).
options.stop
Stop sequences that halt generation.
onProgress
(chunk: string, accumulatedText: string) => void
Optional streaming callback that fires for each generated chunk.
chunk: The new text fragment received
accumulatedText: All text generated so far
onProgress: (chunk, accumulatedText) => {
  console.log('Chunk:', chunk);
  console.log('Total:', accumulatedText);
}
abortController
Optional AbortController used to cancel the generation. Calling abort() rejects the promise with Error('Aborted').
const controller = new AbortController();
abortController: controller
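The image content blocks shown above expect a base64 data URL, which can be built from raw image bytes. A minimal sketch, assuming a Node.js environment; the helper name toImageBlock is hypothetical and not part of the SDK:

```typescript
// Hypothetical helper (not part of the SDK): wrap raw image bytes in the
// image_url content-block shape used by the messages parameter above.
function toImageBlock(bytes: Uint8Array, mime: string) {
  // Encode the bytes as base64 and embed them in a data URL.
  const base64 = Buffer.from(bytes).toString('base64');
  return {
    type: "image_url" as const,
    image_url: { url: `data:${mime};base64,${base64}` }
  };
}
```

The returned object can be placed in a user message's content array alongside a `{ type: "text", ... }` block.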
Return Type
When onProgress or abortController is provided, returns a promise that resolves to the complete generated text.
Success: Promise resolves with the full text
Failure/abort: Promise rejects with an error
IChunkHandler (legacy)
Legacy return type, returned only when neither onProgress nor abortController is provided. Use the promise-based API instead.
Examples
Basic Text Generation
const fullText = await aiProviders.execute({
  provider: aiProviders.providers[0],
  prompt: "What is the capital of Great Britain?"
});
console.log('Generated:', fullText);
Streaming with Progress Callback
const fullText = await aiProviders.execute({
  provider: aiProviders.providers[0],
  prompt: "What is the capital of Great Britain?",
  onProgress: (chunk, accumulatedText) => {
    console.log('Current text:', accumulatedText);
  }
});
console.log('Final result:', fullText);
const response = await aiProviders.execute({
  provider: aiProviders.providers[0],
  messages: [
    { role: "system", content: "You are a helpful geography assistant." },
    { role: "user", content: "What is the capital of Great Britain?" }
  ],
  onProgress: (_chunk, text) => console.log(text)
});
Image Analysis
const analysis = await aiProviders.execute({
  provider: aiProviders.providers[0],
  messages: [
    { role: "system", content: "You are a helpful image analyst." },
    {
      role: "user",
      content: [
        { type: "text", text: "Describe what you see in this image" },
        {
          type: "image_url",
          image_url: { url: "data:image/jpeg;base64,/9j/4AAQSkZ..." }
        }
      ]
    }
  ],
  onProgress: (_c, t) => console.log(t)
});
Cancellation with AbortController
const abortController = new AbortController();
try {
  const final = await aiProviders.execute({
    provider: aiProviders.providers[0],
    prompt: "Stream something...",
    abortController,
    onProgress: (_c, t) => {
      console.log(t);
      if (t.length > 50) {
        abortController.abort();
      }
    }
  });
  console.log('Completed:', final);
} catch (e) {
  if ((e as Error).message === 'Aborted') {
    console.log('Generation aborted intentionally');
  } else {
    console.error(e);
  }
}
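The same pattern can cancel on a time budget instead of output length. A sketch of a reusable factory; withTimeout is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper (not part of the SDK): returns an AbortController
// that aborts automatically after `ms` milliseconds.
function withTimeout(ms: number): AbortController {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  // If the controller is aborted earlier (manually or by the SDK path),
  // drop the pending timer so it cannot fire needlessly later.
  controller.signal.addEventListener('abort', () => clearTimeout(timer));
  return controller;
}
```

Passing `abortController: withTimeout(30_000)` to execute() would then reject with Error('Aborted') if generation exceeds 30 seconds, which the catch block above already handles.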
With Generation Options
const response = await aiProviders.execute({
  provider: aiProviders.providers[0],
  prompt: "Write a creative story",
  options: {
    temperature: 0.8,
    max_tokens: 500,
    top_p: 0.9
  },
  onProgress: (chunk, full) => {
    console.log(`Generated ${full.length} characters so far`);
  }
});
Reasoning Chunks
Some OpenAI-compatible providers (e.g., OpenRouter) stream delta.reasoning chunks. These reasoning chunks are included in the text output wrapped in <think>...</think> tags.
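If only the final answer is wanted, those tags can be stripped from the resolved text. A minimal sketch; stripReasoning is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: remove <think>...</think> reasoning blocks that some
// OpenAI-compatible providers interleave with the answer text.
function stripReasoning(text: string): string {
  // Non-greedy match so multiple reasoning blocks are each removed.
  return text.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
```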
Error Handling
try {
  await aiProviders.execute({
    provider: aiProviders.providers[0],
    prompt: "What is the capital of Great Britain?",
    onProgress: (c, full) => { /* optional */ }
  });
} catch (error) {
  console.error('Generation failed:', error);
}
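Since cancellation also rejects the promise, a catch block often needs to separate intentional aborts from real failures. A sketch based on the Error('Aborted') contract described above; the helper name isAbortError is hypothetical:

```typescript
// Hypothetical helper: per the docs above, execute() rejects with
// Error('Aborted') on cancellation; treat anything else as a real failure.
function isAbortError(e: unknown): e is Error {
  return e instanceof Error && e.message === 'Aborted';
}
```

A catch block can then log aborts quietly and rethrow (or report) everything else.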
Migration Notes
SDK 1.5.0 (Service API v3) changed execute() to return a Promise<string>, with inline streaming via onProgress and cancellation via AbortController. The old chainable IChunkHandler object is deprecated and is only returned when neither onProgress nor abortController is passed. See the migration guide for details.