The `sequential()` strategy processes document chunks one at a time, passing the accumulated results to each subsequent extraction call. This lets the model build context as it works through the document.
Usage
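As a minimal, self-contained sketch of the accumulation loop the strategy runs: each extraction call receives both the current chunk and everything gathered so far. Every name here (`sequentialExtract`, `extractFn`, `Extraction`) is hypothetical, not the library's actual API.

```typescript
type Chunk = string;
type Extraction = Record<string, unknown>;

// extractFn sees the current chunk plus all accumulated results, so later
// chunks are interpreted in the context of earlier ones.
async function sequentialExtract(
  chunks: Chunk[],
  extractFn: (chunk: Chunk, accumulated: Extraction) => Promise<Extraction>,
): Promise<Extraction> {
  let accumulated: Extraction = {};
  for (const chunk of chunks) {
    const result = await extractFn(chunk, accumulated);
    // Results fold directly into the running state, so no separate merge
    // step is needed at the end.
    accumulated = { ...accumulated, ...result };
  }
  return accumulated;
}
```

Because the loop awaits each call before starting the next, ordering is deterministic and each chunk's extraction can reference earlier results.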
Configuration
- The AI SDK language model to use for extraction.
- Maximum tokens per chunk. Documents are split into batches that fit within this limit.
- Maximum number of images per chunk. Useful for controlling vision API costs.
- Additional instructions to guide the model’s output format or behavior.
- Custom retry executor function. Defaults to `runWithRetries`.
- Enable strict mode for structured output validation. Defaults to `false`.
When to use
- You have large documents that exceed context limits
- Sequential context is important (e.g., narratives, meeting minutes)
- You want to avoid a separate merge step
- You can tolerate slower processing than parallel strategies
Trade-offs
Advantages:
- No separate merge step needed
- Model sees accumulated context from previous chunks
- Lower token usage than parallel + merge
- Deterministic ordering
Disadvantages:
- Slower than parallel strategies (no concurrency)
- Later chunks see accumulated data, increasing context size
- Cannot leverage parallelization
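The concurrency trade-off above can be made concrete: a sequential runner keeps at most one task in flight, while a parallel runner starts everything at once. This is an illustrative sketch with hypothetical helper names (`runSequential`, `runParallel`, `makeTracker`), not library code.

```typescript
// One task at a time: each call completes before the next begins.
async function runSequential<T>(tasks: (() => Promise<T>)[]): Promise<T[]> {
  const results: T[] = [];
  for (const task of tasks) results.push(await task());
  return results;
}

// All tasks in flight at once.
async function runParallel<T>(tasks: (() => Promise<T>)[]): Promise<T[]> {
  return Promise.all(tasks.map((t) => t()));
}

// Counts the peak number of simultaneously running tasks.
function makeTracker() {
  let inFlight = 0;
  let peak = 0;
  return {
    task: (v: number) => async (): Promise<number> => {
      inFlight++;
      peak = Math.max(peak, inFlight);
      await new Promise((resolve) => setTimeout(resolve, 10));
      inFlight--;
      return v;
    },
    peak: () => peak,
  };
}
```

Run three tracked tasks through each runner: the sequential runner peaks at one in-flight task, the parallel runner at three. That single-task ceiling is the cost of letting each chunk see the accumulated context.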
Performance characteristics
The strategy estimates `batches.length + 2` steps:
- Prepare
- Extract from batch 1 through N (sequential)
- Complete
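The step count follows directly from the list above: one prepare step, one extraction per batch, one complete step. A trivial helper (the name `estimateSteps` is hypothetical) makes the arithmetic explicit:

```typescript
// prepare (1) + one extraction per batch (N) + complete (1) = N + 2
function estimateSteps(batchCount: number): number {
  return batchCount + 2;
}
```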