# Installation
LlamaIndex.TS is designed to work across multiple JavaScript runtime environments. This guide covers installation for all supported runtimes and package managers.
## Quick Install
For most Node.js projects, start with:
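Using npm (pnpm, yarn, and bun work the same way):

```shell
npm install llamaindex
```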
The llamaindex package provides core functionality. You’ll also need to install provider packages for LLMs, embeddings, and vector stores.
## Runtime Requirements
LlamaIndex.TS supports multiple JavaScript runtimes with different requirements:
| Runtime | Minimum Version | Status |
| --- | --- | --- |
| Node.js | >= 20.0.0 | ✅ Full Support |
| Deno | Latest | ✅ Full Support |
| Bun | Latest | ✅ Full Support |
| Nitro | Latest | ✅ Full Support |
| Vercel Edge | Latest | ✅ Limited Support |
| Cloudflare Workers | Latest | ✅ Limited Support |
| Browser | N/A | ⚠️ Limited (no AsyncLocalStorage) |
## Provider Packages
LlamaIndex.TS uses a modular architecture. Install only the providers you need to keep your bundle size small.
### LLM Providers
Choose the LLM provider(s) you want to use:
#### OpenAI

```shell
npm install @llamaindex/openai
```

Supports GPT-4, GPT-3.5, and OpenAI embeddings.

```typescript
import { openai } from "@llamaindex/openai";

const llm = openai({ model: "gpt-4o" });
```

#### Anthropic

```shell
npm install @llamaindex/anthropic
```

Supports Claude 3.5 Sonnet, Opus, and other Claude models.

```typescript
import { claude } from "@llamaindex/anthropic";

const llm = claude({ model: "claude-3-5-sonnet-20241022" });
```

#### Google Gemini

```shell
npm install @llamaindex/gemini
```

Supports Gemini Pro and other Google models.

```typescript
import { Gemini } from "@llamaindex/gemini";

const llm = new Gemini({ model: "gemini-pro" });
```

#### Other Providers

Additional providers are available:

```shell
# Groq
npm install @llamaindex/groq

# Ollama (local models)
npm install @llamaindex/ollama

# MistralAI
npm install @llamaindex/mistralai

# Fireworks
npm install @llamaindex/fireworks
```
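As a sketch of how a local provider plugs in, here is a minimal example. It assumes `@llamaindex/ollama` exposes an `Ollama` class following the same pattern as the providers above, and that an Ollama server is running locally; check the package's current API before relying on this.

```typescript
import { Ollama } from "@llamaindex/ollama";

// Assumes a local Ollama server on the default port (11434)
// and that the model has been pulled with `ollama pull llama3.2`.
const llm = new Ollama({ model: "llama3.2" });
```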
### Vector Store Providers
For production use, integrate with a vector database:
#### Pinecone

```shell
npm install @llamaindex/pinecone
```

```typescript
import { PineconeVectorStore } from "@llamaindex/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const pineconeIndex = pinecone.Index("my-index");

const vectorStore = new PineconeVectorStore({ pineconeIndex });
```

#### Qdrant

```shell
npm install @llamaindex/qdrant
```

```typescript
import { QdrantVectorStore } from "@llamaindex/qdrant";

const vectorStore = new QdrantVectorStore({
  url: process.env.QDRANT_URL,
  apiKey: process.env.QDANT_API_KEY,
});
```

#### Chroma

```shell
npm install @llamaindex/chroma
```

```typescript
import { ChromaVectorStore } from "@llamaindex/chroma";

const vectorStore = new ChromaVectorStore({
  url: "http://localhost:8000",
});
```

#### Other Stores

Additional vector stores:

```shell
# Weaviate
npm install @llamaindex/weaviate

# MongoDB
npm install @llamaindex/mongodb

# PostgreSQL (with pgvector)
npm install @llamaindex/pg
```
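Whichever store you choose, the wiring is the same: pass it into a storage context and build the index from that. A sketch using Qdrant, assuming a locally running Qdrant instance and that `storageContextFromDefaults` is exported from the core `llamaindex` package (verify against the version you install):

```typescript
import { Document, VectorStoreIndex, storageContextFromDefaults } from "llamaindex";
import { QdrantVectorStore } from "@llamaindex/qdrant";

// Persist embeddings in Qdrant instead of the default in-memory store.
const vectorStore = new QdrantVectorStore({ url: "http://localhost:6333" });
const storageContext = await storageContextFromDefaults({ vectorStore });

const index = await VectorStoreIndex.fromDocuments(
  [new Document({ text: "Hello, LlamaIndex!" })],
  { storageContext },
);
```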
## Runtime-Specific Setup

### Node.js

Node.js >= 20 is required for full AsyncLocalStorage support.

```shell
node --version # Should be >= 20.0.0
```

Standard installation works out of the box:

```shell
npm install llamaindex @llamaindex/openai
```
### Deno

LlamaIndex.TS works with Deno’s npm compatibility:

```typescript
import { VectorStoreIndex, Document } from "npm:llamaindex";
import { openai } from "npm:@llamaindex/openai";

// Your code here
```

### Bun

Bun has full support with its fast package manager:

```shell
bun add llamaindex @llamaindex/openai
```

Then use normally:

```typescript
import { VectorStoreIndex } from "llamaindex";
```
### Vercel Edge Runtime

For Vercel Edge Functions, use the edge-compatible entry point:

```typescript
// This is automatically resolved when running in Vercel Edge
import { VectorStoreIndex } from "llamaindex";

export const config = {
  runtime: "edge",
};
```

Some features that require file system access may be limited in edge environments. Use remote vector stores for data persistence.
### Cloudflare Workers

Cloudflare Workers automatically use the workerd-compatible entry point:

```typescript
import { VectorStoreIndex } from "llamaindex";

export default {
  async fetch(request: Request): Promise<Response> {
    // Your LlamaIndex code
    return new Response("OK");
  },
};
```
## Environment Variables

Set up your API keys and configuration:

```shell
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...

# Vector Stores
PINECONE_API_KEY=...
QDRANT_URL=https://...
QDRANT_API_KEY=...

# Optional: LlamaCloud
LLAMA_CLOUD_API_KEY=...
```
Use a .env file for local development and configure environment variables in your deployment platform for production.
## Additional Packages

### Workflow and Agents

For building agentic applications:

```shell
npm install @llamaindex/workflow
```
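As a rough sketch of what the workflow package enables, here is a single-tool agent. The `agent` helper from `@llamaindex/workflow`, the `tool` helper from `llamaindex`, and their exact signatures are assumptions based on recent versions; treat this as illustrative and check the current API docs.

```typescript
import { agent } from "@llamaindex/workflow";
import { tool } from "llamaindex";
import { openai } from "@llamaindex/openai";
import { z } from "zod";

// A hypothetical calculator tool; parameter schemas use zod.
const addTool = tool({
  name: "add",
  description: "Add two numbers",
  parameters: z.object({ a: z.number(), b: z.number() }),
  execute: ({ a, b }: { a: number; b: number }) => a + b,
});

const mathAgent = agent({
  llm: openai({ model: "gpt-4o" }),
  tools: [addTool],
});

const result = await mathAgent.run("What is 2 + 3?");
```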
### File Readers

For reading different file formats:

```shell
# PDF support
npm install @llamaindex/pdf-reader

# DOCX support
npm install @llamaindex/docx-reader

# Notion integration
npm install @llamaindex/notion

# Discord integration
npm install @llamaindex/discord
```
### LlamaCloud

For managed RAG with LlamaCloud:

```shell
npm install @llamaindex/cloud
```
## Verifying Installation

Create a simple test file to verify everything works:

```typescript
import { Document } from "llamaindex";

const doc = new Document({ text: "Hello, LlamaIndex!" });
console.log("Installation successful!", doc.getText());
```
Run it:
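Assuming the file is saved as `test.ts`, one option is to run it directly with `tsx` (or compile with `tsc` and run the output with `node`):

```shell
npx tsx test.ts
```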
If you see “Installation successful!”, you’re ready to go!
## Troubleshooting

### Module Resolution Errors

If you encounter module resolution errors, ensure you’re using Node.js >= 20:

```shell
nvm install 20
nvm use 20
```
### TypeScript Configuration

Add these settings to your tsconfig.json:

```json
{
  "compilerOptions": {
    "module": "ESNext",
    "moduleResolution": "bundler",
    "target": "ES2022",
    "lib": ["ES2022"]
  }
}
```
### API Key Issues

If your API key isn’t being recognized:

1. Ensure `.env` is in your project root
2. Install dotenv and load it
3. Verify the key is set:

```typescript
console.log(process.env.OPENAI_API_KEY);
```
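The dotenv step above (after running `npm install dotenv`) can be as simple as importing the package’s side-effect entry point at the top of your application:

```typescript
// Loads variables from .env into process.env before anything else runs.
import "dotenv/config";

console.log(process.env.OPENAI_API_KEY ? "key loaded" : "key missing");
```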
## Next Steps

- **Quickstart**: Build your first RAG application
- **Core Concepts**: Learn about Documents, Nodes, and Indices