Method Signature
```typescript
async answerStream(
  request: AnswerRequest,
  onChunk: (accumulatedText: string) => void,
  targets?: Targets
): Promise<AnswerResponse | null>
```
Generate a streaming AI response with brand enrichment. The response text is streamed in real-time through a callback function, with metadata returned after streaming completes.
Parameters
request
AnswerRequest
required
The request payload containing the user's message and optional configuration.
message
string
required
The user's message or question to process. Cannot be empty or whitespace-only.
model
'gpt-4' | 'gpt-4-turbo' | 'gpt-3.5-turbo'
The OpenAI model to use for this request. Overrides the client's defaultModel if specified.
Maximum number of tokens to generate. Note: This may be overridden by brand-specific settings.
Sampling temperature between 0 and 2. Higher values make output more random. Note: This may be overridden by brand-specific settings.
Additional instructions to guide the AI's response style or content.
conversationId
Unique ID to track conversation context across multiple requests.
previousMessages
Array of previous messages in the conversation for context.
role
'user' | 'assistant'
required
The role of the message sender.
content
string
required
The content of the message.
onChunk
(accumulatedText: string) => void
required
Callback function invoked as text streams in. Important: this callback receives the full accumulated text from the beginning of the response, not just the latest chunk. This makes it easy to update the UI by simply setting the element content to the provided text.
targets
Targets
Optional DOM targets for automatic UI updates and impression tracking.
text
Target element for the response text. Can be an element ID string or an HTMLElement reference.
link
Target element for the brand link. Can be an element ID string or an HTMLElement reference.
Response
Returns a Promise that resolves to AnswerResponse | null after streaming completes.
The complete AI response text with brand enrichment applied.
Metadata about the response and brand enrichment.
Information about the brand used for enrichment.
Unique identifier for the brand
Domain of the brand's website
Affiliate or tracking link for the brand
Tracking code for impression registration
Similarity score (0-1) indicating how well the message matched the brand's triggers
List of trigger phrases configured for the matched brand
List of trigger phrases that matched in the user's message
Examples
Basic Streaming
```typescript
const metadata = await client.answerStream(
  {
    message: 'What are the best wireless headphones?'
  },
  (accumulatedText) => {
    console.log('Current text:', accumulatedText);
  }
);

if (metadata) {
  console.log('Brand used:', metadata.metadata.brandUsed?.name);
  console.log('Tracking link:', metadata.metadata.link);
}
```
Update DOM in Real-Time
```typescript
const responseElement = document.getElementById('response');

const metadata = await client.answerStream(
  {
    message: 'Best laptops for video editing?'
  },
  (accumulatedText) => {
    // Update the DOM with accumulated text as it streams
    if (responseElement) {
      responseElement.textContent = accumulatedText;
    }
  }
);
```
With Automatic DOM Targets
```typescript
// Targets will be automatically updated
const metadata = await client.answerStream(
  {
    message: 'Best running shoes for beginners?'
  },
  (accumulatedText) => {
    console.log(`Received ${accumulatedText.length} characters so far`);
  },
  {
    text: 'response-container',
    link: 'brand-link'
  }
);
```
With Loading States
```typescript
const loadingElement = document.getElementById('loading');
const responseElement = document.getElementById('response');

if (loadingElement) loadingElement.style.display = 'block';

try {
  const metadata = await client.answerStream(
    {
      message: 'Explain machine learning',
      model: 'gpt-4-turbo'
    },
    (accumulatedText) => {
      if (responseElement) responseElement.textContent = accumulatedText;
    }
  );
  console.log('Streaming complete');
} finally {
  if (loadingElement) loadingElement.style.display = 'none';
}
```
With Conversation Context
```typescript
const metadata = await client.answerStream(
  {
    message: 'What about noise cancellation?',
    conversationId: 'conv_456',
    previousMessages: [
      {
        role: 'user',
        content: 'Best wireless headphones?'
      },
      {
        role: 'assistant',
        content: 'For wireless headphones, I recommend...'
      }
    ]
  },
  (accumulatedText) => {
    console.log(accumulatedText);
  }
);
```
Character-by-Character Animation
```typescript
let previousLength = 0;

const metadata = await client.answerStream(
  {
    message: 'Tell me about electric cars'
  },
  (accumulatedText) => {
    // Extract only the new characters since the last callback
    const newChars = accumulatedText.slice(previousLength);
    previousLength = accumulatedText.length;
    // Animate the new characters (animateNewText is your own helper)
    animateNewText(newChars);
  }
);
```
The API streams data in the following format:
1. Text chunks: raw text is streamed as it's generated
2. Metadata: after all text is sent, metadata is sent as JSON, separated from the text by two newlines (`\n\n`)

```
This is the streaming response text...

{"response":"This is the streaming response text...","metadata":{...}}
```
The answerStream() method handles this format automatically, calling onChunk with accumulated text and returning the metadata when complete.
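As an illustration of the framing described above (not the SDK's actual internals — the function name and types here are hypothetical), splitting a fully received payload into text and trailing JSON metadata could look like this:

```typescript
interface ParsedStream {
  text: string;
  metadata: unknown | null;
}

// Split a raw streamed payload on the final "\n\n" separator and
// parse whatever follows it as the JSON metadata tail.
function splitStreamPayload(raw: string): ParsedStream {
  const sep = raw.lastIndexOf('\n\n');
  if (sep === -1) {
    // No separator yet: the whole payload is response text.
    return { text: raw, metadata: null };
  }
  const text = raw.slice(0, sep);
  const tail = raw.slice(sep + 2);
  try {
    return { text, metadata: JSON.parse(tail) };
  } catch {
    // The tail was not valid JSON; treat everything as text.
    return { text: raw, metadata: null };
  }
}
```

Using `lastIndexOf` rather than the first match means response text containing blank lines is not truncated prematurely.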
Error Handling
```typescript
try {
  const metadata = await client.answerStream(
    {
      message: ' ' // Empty message
    },
    (text) => console.log(text)
  );
} catch (error) {
  console.error('Error:', error.message);
  // Output: "Message is required"
}
```
Network and Timeout Errors
```typescript
import { NetworkError, TimeoutError } from '@thred/sdk';

try {
  const metadata = await client.answerStream(
    {
      message: 'What are the best cameras?'
    },
    (text) => console.log(text)
  );
} catch (error) {
  if (error instanceof TimeoutError) {
    console.error('Request timed out');
  } else if (error instanceof NetworkError) {
    console.error('Network error:', error.message);
  } else {
    console.error('Streaming error:', error);
  }
}
```
Stream Interruption
The SDK verifies internally that the response has a readable body before streaming begins:

```typescript
// This is handled internally by the SDK
if (!response.body) {
  throw new Error('Response body is null');
}
```
How It Works
1. Validates the message is not empty
2. Sends a POST request to `/v1/answer/stream`
3. Processes the response stream using the ReadableStream API
4. Calls onChunk with accumulated text as data arrives
5. Parses metadata from the end of the stream
6. If targets are provided, updates the DOM and registers an impression
7. Returns the metadata after streaming completes
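Steps 3-5 above can be sketched in standalone form. This is illustrative only, assuming the `\n\n` metadata framing documented earlier; the function name and exact accumulation logic are not the SDK's real internals:

```typescript
// Read a ReadableStream of bytes, invoke onChunk with the accumulated
// text so far, and parse the JSON metadata after the final "\n\n".
async function consumeStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (accumulatedText: string) => void
): Promise<unknown | null> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunks
    buffer += decoder.decode(value, { stream: true });
    // Report everything received so far, excluding any metadata tail
    const sep = buffer.lastIndexOf('\n\n');
    onChunk(sep === -1 ? buffer : buffer.slice(0, sep));
  }
  const sep = buffer.lastIndexOf('\n\n');
  return sep === -1 ? null : JSON.parse(buffer.slice(sep + 2));
}
```

Accumulating into a single buffer is what lets onChunk receive the full text from the beginning of the response rather than isolated chunks.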
When to Use
- Use `answerStream()` when you want callback-based streaming (good for simple UI updates)
- Use `answerStreamGenerator()` when you prefer async iteration with `for await` loops
- Use `answer()` when you need the complete response at once (no streaming)