This guide covers common issues you might encounter when using LlamaIndex.TS and how to resolve them.

## Installation Issues

**Problem:**

```
Error: Cannot find module 'llamaindex'
```

**Solutions:**

1. Verify installation:

   ```bash
   npm list llamaindex
   ```

2. Reinstall dependencies:

   ```bash
   rm -rf node_modules package-lock.json
   npm install
   ```

3. Check your Node.js version:

   ```bash
   node -v  # Should be >= 18.0.0
   ```

4. For TypeScript projects, ensure `moduleResolution` is set correctly:

   ```json
   {
     "compilerOptions": {
       "moduleResolution": "bundler"
     }
   }
   ```
**Problem:**

```
npm WARN ERESOLVE overriding peer dependency
```

**Solutions:**

1. With npm v7+, peer dependencies are installed automatically; you can ignore these warnings if your app works.

2. Install missing peer dependencies manually:

   ```bash
   npm install openai  # If using @llamaindex/openai
   npm install @pinecone-database/pinecone  # If using @llamaindex/pinecone
   ```

3. Use `--legacy-peer-deps` if necessary:

   ```bash
   npm install --legacy-peer-deps
   ```
**Problem:**

```
ERR_PNPM_PEER_DEP_ISSUES
```

**Solutions:**

1. Use the recommended pnpm version:

   ```bash
   npm install -g [email protected]
   ```

2. Configure pnpm to install peer dependencies automatically:

   ```bash
   pnpm config set auto-install-peers true
   ```

3. Install with hoisting if needed:

   ```bash
   pnpm install --shamefully-hoist
   ```

## Runtime Errors

**Problem:**

```
OpenAI API error: 401 Unauthorized
OpenAI API error: 429 Rate limit exceeded
OpenAI API error: 500 Internal server error
```

**Solutions:**

**401 Unauthorized** — make sure the API key is set:

```typescript
import { Settings } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

Settings.llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```

**429 Rate Limit** — enable retries and a longer timeout:

```typescript
Settings.llm = new OpenAI({
  maxRetries: 3,
  timeout: 60000, // 60 seconds
});
```

**500 Server Error:**

- Retry the request
- Check the OpenAI status page
- Implement exponential backoff
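The exponential backoff suggestion above can be sketched as a small helper. Note that `withBackoff` is a hypothetical name for illustration, not part of the LlamaIndex API:

```typescript
// Hypothetical helper: retry an async call, doubling the delay after
// each failure (500ms, 1s, 2s, ...) before giving up.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

You can wrap any flaky call this way, e.g. `await withBackoff(() => queryEngine.query({ query: "..." }))`.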
**Problem:**

```
TypeError: Cannot read property 'embedModel' of undefined
```

**Solution:** Set `Settings.embedModel` before creating the index:

```typescript
import { Settings, VectorStoreIndex } from "llamaindex";
import { OpenAIEmbedding } from "@llamaindex/openai";

// Set the embedding model before creating the index
Settings.embedModel = new OpenAIEmbedding();

const index = await VectorStoreIndex.fromDocuments(documents);
```
**Problem:**

```
JavaScript heap out of memory
FATAL ERROR: Reached heap limit
```

**Solutions:**

1. Increase the Node.js memory limit:

   ```bash
   node --max-old-space-size=4096 your-script.js
   ```

2. Process documents in batches:

   ```typescript
   const batchSize = 10;
   for (let i = 0; i < documents.length; i += batchSize) {
     const batch = documents.slice(i, i + batchSize);
     await index.insert(batch);
   }
   ```

3. Use streaming for large files:

   ```typescript
   import { SimpleDirectoryReader } from "llamaindex";

   const reader = new SimpleDirectoryReader();
   // Process files one at a time
   for await (const doc of reader.loadDataAsyncIterator({ directoryPath: "./data" })) {
     await index.insert(doc);
   }
   ```
**Problem:**

```
Error: AsyncLocalStorage is not available in this environment
```

**Solution:** Use the edge-compatible entry point, or configure your bundler:

```typescript
// For Vercel Edge Runtime
import { VectorStoreIndex } from "llamaindex/edge";
```

```javascript
// next.config.js
module.exports = {
  experimental: {
    serverComponentsExternalPackages: ["llamaindex"],
  },
};
```

## Configuration Problems

**Problem:**

```
API key is undefined
```

**Solutions:**

1. Install and configure dotenv:

   ```bash
   npm install dotenv
   ```

   ```typescript
   import "dotenv/config";
   import { OpenAI } from "@llamaindex/openai";

   const llm = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
   ```

2. For Next.js, use `.env.local`:

   ```bash
   # .env.local
   OPENAI_API_KEY=sk-...
   ```

3. Verify the environment variable is set:

   ```typescript
   if (!process.env.OPENAI_API_KEY) {
     throw new Error("OPENAI_API_KEY is not set");
   }
   ```
**Problem:**

```
Failed to connect to Pinecone
Qdrant connection timeout
```

**Solutions:**

1. Verify credentials:

   ```typescript
   import { PineconeVectorStore } from "@llamaindex/pinecone";

   const vectorStore = new PineconeVectorStore({
     indexName: "my-index",
     apiKey: process.env.PINECONE_API_KEY,
     environment: process.env.PINECONE_ENVIRONMENT,
   });
   ```

2. Check network connectivity:

   ```bash
   curl https://api.pinecone.io/
   ```

3. Increase the timeout:

   ```typescript
   const vectorStore = new PineconeVectorStore({
     indexName: "my-index",
     timeout: 60000, // 60 seconds
   });
   ```
**Problem:**

```
Documents are not being split correctly
Chunks are too large/small
```

**Solution:** Configure the node parser:

```typescript
import { Settings, SentenceSplitter, VectorStoreIndex } from "llamaindex";

Settings.nodeParser = new SentenceSplitter({
  chunkSize: 1024,   // Adjust based on your needs
  chunkOverlap: 200, // ~20% overlap is common
});

const index = await VectorStoreIndex.fromDocuments(documents);
```

## Build and Bundling Issues

**Problem:**

```
Module not found: Can't resolve 'fs'
Module not found: Can't resolve 'async_hooks'
```

**Solution:** Configure Next.js to treat LlamaIndex as an external server package:

```javascript
// next.config.js
module.exports = {
  experimental: {
    serverComponentsExternalPackages: [
      "llamaindex",
      "@llamaindex/core",
      "@llamaindex/openai",
    ],
  },
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.resolve.fallback = {
        ...config.resolve.fallback,
        fs: false,
        net: false,
        tls: false,
      };
    }
    return config;
  },
};
```
**Problem:**

```
Failed to resolve import "fs"
```

**Solution:** Configure Vite:

```javascript
// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  resolve: {
    alias: {
      fs: false,
      path: false,
    },
  },
  optimizeDeps: {
    exclude: ["llamaindex"],
  },
});
```
**Problem:**

```
TS2307: Cannot find module 'llamaindex'
TS7016: Could not find a declaration file
```

**Solutions:**

1. Check your TypeScript configuration:

   ```json
   {
     "compilerOptions": {
       "moduleResolution": "bundler",
       "module": "ESNext",
       "target": "ES2020",
       "lib": ["ES2020"],
       "skipLibCheck": true
     }
   }
   ```

2. Install Node.js type definitions:

   ```bash
   npm install --save-dev @types/node
   ```

## Performance Issues

**Problem:** Embedding generation takes too long.

**Solutions:**

1. Use batch embedding:

   ```typescript
   import { OpenAIEmbedding } from "@llamaindex/openai";

   const embedModel = new OpenAIEmbedding({
     batchSize: 100, // Process multiple texts at once
   });
   ```

2. Use a faster model:

   ```typescript
   Settings.embedModel = new OpenAIEmbedding({
     model: "text-embedding-3-small", // Faster than text-embedding-3-large
   });
   ```

3. Cache embeddings:

   ```typescript
   // Store embeddings in a vector store for reuse
   const vectorStore = new QdrantVectorStore({
     url: process.env.QDRANT_URL,
   });
   ```
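Another way to avoid re-embedding the same documents on every run is to persist the index's storage context to disk. A minimal sketch, assuming `documents` is already loaded and `./storage` is a writable directory of your choosing:

```typescript
import { storageContextFromDefaults, VectorStoreIndex } from "llamaindex";

// Persist the docstore and vectors under ./storage so subsequent runs
// can reload the index instead of regenerating every embedding.
const storageContext = await storageContextFromDefaults({
  persistDir: "./storage",
});
const index = await VectorStoreIndex.fromDocuments(documents, {
  storageContext,
});
```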
**Problem:** OpenAI API costs are too high.

**Solutions:**

1. Use smaller models:

   ```typescript
   Settings.llm = new OpenAI({ model: "gpt-3.5-turbo" });
   Settings.embedModel = new OpenAIEmbedding({ model: "text-embedding-3-small" });
   ```

2. Reduce chunk size to minimize embedding tokens:

   ```typescript
   Settings.chunkSize = 512; // Smaller chunks = fewer tokens
   ```

3. Use local models:

   ```typescript
   import { Ollama } from "@llamaindex/ollama";

   Settings.llm = new Ollama({ model: "llama3" });
   ```

## Getting More Help

If your issue isn’t covered here:

- **Discord Community**: get real-time help from the community
- **GitHub Issues**: report bugs or request features
- **GitHub Discussions**: ask questions and share solutions
- **FAQ**: frequently asked questions

## Debug Mode

Enable debug logging to troubleshoot issues:

```typescript
import { Settings } from "llamaindex";

// Enable verbose logging
Settings.debug = true;

// Your code here
```

This will show detailed logs about:

- API calls
- Document processing
- Embedding generation
- Query execution
