## General Questions
### What is LlamaIndex.TS?

LlamaIndex.TS is a data framework for building LLM applications in TypeScript. It lets you:
- Load and process data from various sources
- Create vector embeddings and indices
- Build RAG (Retrieval Augmented Generation) applications
- Integrate with multiple LLM providers
- Create chat engines and query engines
- Build agentic workflows
### What's the difference between LlamaIndex.TS and LlamaIndex Python?

Both libraries share:
- Same core abstractions (indices, query engines, chat engines)
- Similar API design and patterns
- Support for the same LLM providers
- RAG and agent capabilities
Key differences:

- Language: TypeScript vs Python
- Package structure: Modular npm packages vs Python namespace packages
- Runtime support: Multi-runtime JS vs Python only
- Feature parity: Python has more features currently
### Do I need to know Python to use LlamaIndex.TS?

No. LlamaIndex.TS is a native TypeScript library, and no Python is required. Familiarity with the Python version can help when reading its docs, since the core concepts are the same, but it is not necessary.
### Which JavaScript runtimes are supported?
- Node.js >= 18.0.0 ✅
- Deno ✅
- Bun ✅
- Nitro ✅
- Vercel Edge Runtime ✅ (limited file system access)
- Cloudflare Workers ✅ (limited file system access)
- Browser ❌ (due to lack of AsyncLocalStorage-like APIs)
### Can I use LlamaIndex.TS in the browser?

Not directly. The core library relies on APIs such as AsyncLocalStorage that browsers do not provide. The usual pattern is to run LlamaIndex.TS on the server and expose an API route that your browser code calls.
## Installation & Setup
### How do I install LlamaIndex.TS?

Install the core package from npm (pnpm and yarn work the same way).
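A typical setup; the provider package names assume the current modular layout, so check the docs for the ones you need:

```shell
# Core package
npm install llamaindex

# Optional: provider and workflow packages (names assume the modular layout)
npm install @llamaindex/openai @llamaindex/workflow
```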
### Do I need an OpenAI API key?

No. OpenAI is the default, but you can use any supported provider:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Google (Gemini)
- Ollama (Local models)
- Groq, Mistral, Together AI, and more
### What are the minimum requirements?
- Node.js >= 18.0.0 (or another supported runtime)
- npm, pnpm, or yarn
- TypeScript (recommended, for type safety)
- An LLM provider API key (OpenAI, Anthropic, etc.) or local Ollama setup
### Can I use it with Next.js, Remix, or other frameworks?

Yes. LlamaIndex.TS works with:
- Next.js (App Router and Pages Router)
- Remix
- SvelteKit
- Nuxt
- Astro
- Express
- And more!
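For Next.js specifically, there is a `withLlamaIndex` helper. A sketch of a `next.config.mjs` using it; the `llamaindex/next` import path is taken from the docs, the rest is a minimal assumption:

```javascript
// next.config.mjs: wrap the config so LlamaIndex's server-side deps build correctly
import withLlamaIndex from "llamaindex/next";

/** @type {import('next').NextConfig} */
const nextConfig = {
  // your existing Next.js options
};

export default withLlamaIndex(nextConfig);
```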
The `withLlamaIndex` helper applies the bundler configuration that LlamaIndex's server-side dependencies need.

## Usage Questions
### How do I load my own data?

Use a data reader, such as `SimpleDirectoryReader`, to load files into `Document` objects.
### How do I query my data?

Build an index over your documents, then ask questions through a query engine.
### How do I build a chatbot?

Create a chat engine from your index. Unlike a query engine, it keeps conversation history across turns.
### How do I use a different LLM provider?

Set `Settings.llm` before you build indices or engines; everything created afterwards uses that LLM.
### How do I use a vector database?
LlamaIndex.TS integrates with many vector stores, including:
- Pinecone, Qdrant, Chroma, Weaviate
- MongoDB, PostgreSQL (pgvector)
- Supabase, Milvus, Astra
- And more!
### How do I stream responses?

Pass `stream: true` to your query or chat call and iterate over the returned chunks.
## Troubleshooting
### I'm getting 'Module not found' errors

Common causes:
- Missing installation: make sure `llamaindex` and any provider packages are actually installed (`npm install llamaindex`).
- TypeScript configuration: use a modern `moduleResolution` setting ("bundler" or "node16") so the package's subpath exports resolve.
- Build issues: delete `node_modules` and your build cache, then reinstall.
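For the TypeScript configuration point, a `tsconfig.json` sketch; the key settings are `module` and `moduleResolution`, the rest are common defaults:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```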
### Why am I getting API rate limit errors?

You are sending requests faster than your provider's quota allows. Mitigations:
- Add retry logic with exponential backoff.
- Use a smaller model, which usually has higher rate limits.
- Process documents in batches, with a short delay between batches.
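The retry point can be sketched generically. This helper is not part of LlamaIndex (the name `withRetry` is made up here); it retries any async call with exponential backoff:

```typescript
// Retry an async call with exponential backoff; maxRetries and baseDelayMs are tunable
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Example: fails twice, then succeeds on the third attempt
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("rate limited");
  return "ok";
}, 3, 10);
console.log(result); // "ok"
```

Wrap any LlamaIndex call that hits the provider, for example `withRetry(() => queryEngine.query({ query }))`.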
### My embeddings are too slow. How can I speed them up?

Several options:
- Use batch embedding: send many texts per API call instead of one at a time.
- Use a faster model: for example `text-embedding-3-small` rather than a larger embedding model.
- Cache embeddings in a vector store.
### How do I debug issues?

Hook into the callback system to log what happens during:
- API calls
- Document processing
- Embedding generation
- Query execution
## Advanced Topics
### Can I use custom embedding models?

Yes. Extend the `BaseEmbedding` class and assign your class to `Settings.embedModel`.
### How do I build multi-agent systems?
Use the `@llamaindex/workflow` package, which provides agent and multi-agent primitives.
### Can I use function calling with tools?
Yes. Define tools and hand them to an agent; the LLM decides when to call them.
### How do I handle large documents?

Strategies:
- Chunk documents: split them into smaller nodes before indexing.
- Process in batches: index a handful of documents at a time instead of all at once.
- Increase Node.js memory: run with `node --max-old-space-size=4096 app.js`.
## Cost & Performance
### How much does it cost to use LlamaIndex.TS?

The library itself is free and open source; you pay only for the model APIs you call.

Typical LLM pricing:
- OpenAI: ~$0.03 per 1K tokens (GPT-4)
- Anthropic: ~$0.015 per 1K tokens (Claude)
- Or use free local models with Ollama
Embedding pricing:

- OpenAI: ~$0.0001 per 1K tokens
- Varies by provider (some have free tiers)
To reduce costs:

- Use smaller models (gpt-3.5-turbo vs gpt-4)
- Reduce chunk size
- Cache embeddings
- Use local models
### Can I use local models to avoid API costs?

Yes. Run models locally with Ollama and point `Settings` at them; there are no per-token costs.
## Contributing & Community
### How can I contribute to LlamaIndex.TS?

Contributions are welcome. You can:
- Fix bugs or add features
- Improve documentation
- Add examples
- Help others in Discord
- Report issues
### Where can I get help?
- Discord - Real-time chat
- GitHub Discussions - Q&A
- GitHub Issues - Bug reports