
Method Signature

async answerStream(
  request: AnswerRequest,
  onChunk: (accumulatedText: string) => void,
  targets?: Targets
): Promise<AnswerResponse | null>
Generate a streaming AI response with brand enrichment. The response text is streamed in real time through a callback function, and the metadata is returned after streaming completes.

Parameters

request
AnswerRequest
required
The request payload containing the user’s message and optional configuration
onChunk
(accumulatedText: string) => void
required
Callback function invoked as text streams in. Receives the accumulated text so far (not individual chunks).

Important: This callback receives the full accumulated text from the beginning of the response, not just the latest chunk. This makes it easy to update the UI by simply setting the element's content to the provided text.
targets
Targets
Optional DOM targets for automatic UI updates and impression tracking

Response

Returns a Promise that resolves to AnswerResponse | null after streaming completes.
response
string
required
The complete AI response text with brand enrichment applied.
metadata
AnswerMetadata
required
Metadata about the response and brand enrichment
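
As a rough TypeScript sketch, the return value looks like the following. The field shapes here are inferred from this page's examples, not the SDK's published type definitions, so treat them as assumptions:

```typescript
// Hedged sketch of the return shape, inferred from the fields documented
// above and the examples below. Exact field types are assumptions.
interface AnswerMetadata {
  brandUsed?: { name: string };
  link?: string;
}

interface AnswerResponse {
  response: string;
  metadata: AnswerMetadata;
}

// Safely read the brand name from a (possibly null) streaming result.
function brandName(result: AnswerResponse | null): string | undefined {
  return result?.metadata.brandUsed?.name;
}
```

If the SDK exports its own AnswerResponse and AnswerMetadata types, prefer importing those instead of redeclaring them.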

Examples

Basic Streaming

const result = await client.answerStream(
  {
    message: 'What are the best wireless headphones?'
  },
  (accumulatedText) => {
    console.log('Current text:', accumulatedText);
  }
);

if (result) {
  console.log('Brand used:', result.metadata.brandUsed?.name);
  console.log('Tracking link:', result.metadata.link);
}

Update DOM in Real-Time

const responseElement = document.getElementById('response');

const result = await client.answerStream(
  {
    message: 'Best laptops for video editing?'
  },
  (accumulatedText) => {
    // Update the DOM with accumulated text as it streams
    if (responseElement) {
      responseElement.textContent = accumulatedText;
    }
  }
);

With Automatic DOM Targets

// Targets will be automatically updated
const result = await client.answerStream(
  {
    message: 'Best running shoes for beginners?'
  },
  (accumulatedText) => {
    console.log(`Received ${accumulatedText.length} characters so far`);
  },
  {
    text: 'response-container',
    link: 'brand-link'
  }
);

With Loading States

const loadingElement = document.getElementById('loading');
const responseElement = document.getElementById('response');

loadingElement.style.display = 'block';

try {
  const result = await client.answerStream(
    {
      message: 'Explain machine learning',
      model: 'gpt-4-turbo'
    },
    (accumulatedText) => {
      responseElement.textContent = accumulatedText;
    }
  );
  
  console.log('Streaming complete');
} finally {
  loadingElement.style.display = 'none';
}

With Conversation Context

const result = await client.answerStream(
  {
    message: 'What about noise cancellation?',
    conversationId: 'conv_456',
    previousMessages: [
      {
        role: 'user',
        content: 'Best wireless headphones?'
      },
      {
        role: 'assistant',
        content: 'For wireless headphones, I recommend...'
      }
    ]
  },
  (accumulatedText) => {
    console.log(accumulatedText);
  }
);

Character-by-Character Animation

let previousLength = 0;

const result = await client.answerStream(
  {
    message: 'Tell me about electric cars'
  },
  (accumulatedText) => {
    // Extract only the new characters
    const newChars = accumulatedText.slice(previousLength);
    previousLength = accumulatedText.length;
    
    // Animate the new characters (animateNewText is your own helper,
    // not part of the SDK)
    animateNewText(newChars);
  }
);

Streaming Format

The API streams data in the following format:
  1. Text chunks: Raw text is streamed as it’s generated
  2. Metadata: After all text is sent, the metadata is sent as JSON, separated from the text by two newlines (\n\n)

For example, a raw stream body looks like:

This is the streaming response text...

{"response":"This is the streaming response text...","metadata":{...}}
The answerStream() method handles this format automatically, calling onChunk with accumulated text and returning the metadata when complete.
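
For illustration only, the framing above could be parsed as follows. This is a minimal sketch, not the SDK's internal implementation; answerStream() does this for you:

```typescript
// Illustrative only: answerStream() performs this parsing internally.
// Splits a fully buffered stream body into the response text and the
// trailing JSON metadata that follows the final double newline.
function splitStreamPayload(raw: string): { text: string; metadata: unknown } {
  const sep = raw.lastIndexOf('\n\n');
  if (sep === -1) {
    // No metadata delimiter found: the whole payload is text.
    return { text: raw, metadata: null };
  }
  return {
    text: raw.slice(0, sep),
    metadata: JSON.parse(raw.slice(sep + 2)),
  };
}
```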

Error Handling

try {
  const result = await client.answerStream(
    {
      message: '  ' // Whitespace-only message fails validation
    },
    (text) => console.log(text)
  );
} catch (error) {
  console.error('Error:', error.message);
  // Output: "Message is required"
}

Network and Timeout Errors

import { NetworkError, TimeoutError } from '@thred/sdk';

try {
  const result = await client.answerStream(
    {
      message: 'What are the best cameras?'
    },
    (text) => console.log(text)
  );
} catch (error) {
  if (error instanceof TimeoutError) {
    console.error('Request timed out');
  } else if (error instanceof NetworkError) {
    console.error('Network error:', error.message);
  } else {
    console.error('Streaming error:', error);
  }
}
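
For transient network failures, you can wrap the call in a retry helper. The sketch below is illustrative and not part of the SDK; note that retrying a stream restarts it from the beginning, so your onChunk callback should reset any UI state on each attempt:

```typescript
// Generic retry helper (illustrative, not part of the SDK): retries an
// async operation with exponential backoff, rethrowing the final error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```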

Stream Interruption

If the response arrives without a readable body, the SDK throws before attempting to stream. This check is handled internally, so you do not need to add it yourself:

if (!response.body) {
  throw new Error('Response body is null');
}

How It Works

  1. Validates the message is not empty
  2. Sends a POST request to /v1/answer/stream
  3. Processes the response stream using ReadableStream API
  4. Calls onChunk with accumulated text as data arrives
  5. Parses metadata from the end of the stream
  6. If targets are provided, updates the DOM and registers an impression
  7. Returns the metadata after streaming completes
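
Steps 3 and 4 can be sketched as follows. The names here are illustrative, not the SDK's internals:

```typescript
// Hedged sketch of draining a streaming body: read each chunk, decode it,
// append to an accumulator, and hand the full accumulated text to onChunk.
async function drainStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (accumulated: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let accumulated = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true so multi-byte characters split across chunks decode correctly
    accumulated += decoder.decode(value, { stream: true });
    onChunk(accumulated); // full accumulated text, not just the latest chunk
  }
  return accumulated;
}
```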

When to Use

  • Use answerStream() when you want callback-based streaming (good for simple UI updates)
  • Use answerStreamGenerator() when you prefer async/await syntax with for await loops
  • Use answer() when you need the complete response at once (no streaming)
