Learn how Tafrigh manages multiple Wit.ai API keys for parallel transcription
Tafrigh uses a round-robin rotation strategy to distribute API requests evenly across multiple Wit.ai keys. This enables parallel processing and helps avoid rate limits.
```typescript
// Given keys: ['keyA', 'keyB', 'keyC']
getNextApiKey(); // Returns 'keyA', index moves to 1
getNextApiKey(); // Returns 'keyB', index moves to 2
getNextApiKey(); // Returns 'keyC', index wraps to 0
getNextApiKey(); // Returns 'keyA' again
```
The concurrency option limits how many chunks process simultaneously. Tafrigh calculates the effective concurrency from the number of available keys (see src/transcriber.ts:192-193):
Configuration: 5 keys, concurrency: 2
Result: Only 2 workers run at once, using 2 of the 5 keys. The rotation continues across all 5 keys as chunks complete.
Tafrigh validates that at least one key exists before processing:
```typescript
const validateApiKeys = (): void => {
    if (getApiKeysCount() === 0) {
        logger.error('At least one Wit.ai API key is required.');
        throw new Error('Empty wit.ai API keys');
    }
};
```
This runs automatically when you call init() or getNextApiKey().
Language consistency: All API keys must use the same language setting in your Wit.ai app. Mixing English and Arabic keys will produce inconsistent transcriptions.
Optimal key count: Match the number of keys to your typical chunk count. For example, if files average 10 chunks and you want full parallelism, use 10 keys.
Rate limit handling: If you encounter rate limit errors even with rotation, reduce concurrency or add exponential backoff by increasing the retries parameter.
```typescript
import os from 'node:os';
import { init, transcribe } from 'tafrigh';

const cpuCount = os.cpus().length;
const apiKeys = ['key1', 'key2', 'key3', 'key4', 'key5'];

init({ apiKeys });

// Use at most half the CPU cores or all available keys, whichever is lower
const maxConcurrency = Math.min(Math.floor(cpuCount / 2), apiKeys.length);

const transcript = await transcribe('audio.mp3', {
    concurrency: maxConcurrency,
});

console.log(`Processed with ${maxConcurrency} workers`);
```