LlamaIndex Provider
The LlamaIndex provider wraps Composio tools for use with LlamaIndex.TS.

Installation

npm install @composio/llamaindex llamaindex

Quick Start

import { Composio } from '@composio/core';
import { LlamaindexProvider } from '@composio/llamaindex';
import { OpenAI, OpenAIAgent } from 'llamaindex';

const composio = new Composio({
  apiKey: 'your-composio-key',
  provider: new LlamaindexProvider()
});

const tools = await composio.tools.get('default', {
  toolkits: ['github']
});

const llm = new OpenAI({ model: 'gpt-4', apiKey: 'your-openai-key' });

const agent = new OpenAIAgent({ llm, tools });

const response = await agent.chat({ message: 'Create a GitHub issue' });
console.log(response.response);

Complete Example

import { Composio } from '@composio/core';
import { LlamaindexProvider } from '@composio/llamaindex';
import { OpenAI, OpenAIAgent, Settings } from 'llamaindex';

const composio = new Composio({
  apiKey: process.env.COMPOSIO_API_KEY!,
  provider: new LlamaindexProvider()
});

async function runAgent(userMessage: string) {
  const tools = await composio.tools.get('default', {
    toolkits: ['github']
  });

  const llm = new OpenAI({
    model: 'gpt-4',
    apiKey: process.env.OPENAI_API_KEY!
  });

  Settings.llm = llm;

  const agent = new OpenAIAgent({
    llm,
    tools,
    verbose: true
  });

  const response = await agent.chat({ message: userMessage });
  return response.response;
}

const answer = await runAgent('Create a GitHub issue titled "Bug Report"');
console.log(answer);

With Chat History

import { ChatMemoryBuffer } from 'llamaindex';

const memory = new ChatMemoryBuffer({ tokenLimit: 4096 });

const agent = new OpenAIAgent({
  llm,
  tools,
  chatHistory: memory
});

const response1 = await agent.chat({ message: 'My name is Alice' });
const response2 = await agent.chat({ message: 'What is my name?' });

console.log(response2.response); // Uses chat history

Streaming

const agent = new OpenAIAgent({ llm, tools });

const stream = await agent.chat({
  message: 'Create an issue',
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}

With RAG

import { VectorStoreIndex, SimpleDirectoryReader, QueryEngineTool } from 'llamaindex';

const tools = await composio.tools.get('default', {
  toolkits: ['github']
});

// Load documents
const documents = await new SimpleDirectoryReader().loadData('./docs');

// Create index
const index = await VectorStoreIndex.fromDocuments(documents);

// Wrap the query engine as a tool the agent can call
const queryEngineTool = new QueryEngineTool({
  queryEngine: index.asQueryEngine(),
  metadata: {
    name: 'search_docs',
    description: 'Search documentation'
  }
});

const agent = new OpenAIAgent({
  llm,
  tools: [...tools, queryEngineTool]
});

Tool Format

import { tool as createLlamaindexTool } from 'llamaindex';

// Under the hood, the provider wraps each Composio tool with
// LlamaIndex's tool() function. Sketch (zodSchema and result are
// placeholders for the tool's real parameter schema and execution result):
const wrappedTool = createLlamaindexTool({
  name: 'GITHUB_CREATE_ISSUE',
  description: 'Create a new GitHub issue',
  parameters: zodSchema,
  execute: async (input) => {
    // Execute the tool via Composio and return the result as a string
    return JSON.stringify(result);
  }
});

Best Practices

  1. Settings: Configure the global LlamaIndex Settings (LLM, embeddings) once at startup
  2. Verbose Mode: Enable verbose: true while debugging to trace tool calls
  3. Memory: Use ChatMemoryBuffer to carry conversation context across turns
  4. RAG Integration: Combine Composio tools with retrieval tools for grounded answers
  5. Error Handling: Wrap agent calls in try/catch; tool executions can fail at runtime
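For error handling, a small retry wrapper around the agent call covers transient network and rate-limit failures. A minimal, framework-agnostic sketch (the withRetry helper and its parameters are illustrative, not part of either library):

```typescript
// Retry an async call a fixed number of times before giving up.
// Useful around agent.chat(), which can fail transiently.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Simple fixed backoff before the next attempt
      if (i < attempts - 1) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}

// Usage with an agent from the examples above:
// const response = await withRetry(() => agent.chat({ message: 'Create an issue' }));
```

Exponential backoff or error-type filtering can be layered on, but even a fixed delay prevents a single dropped connection from failing the whole run.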

TypeScript Types

import type { tool } from 'llamaindex';

// Tool type
type LlamaindexTool = ReturnType<typeof tool>;

// Tool collection
type LlamaindexToolCollection = LlamaindexTool[];
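These types make it easy to write small helpers over a tool collection. A hypothetical example (filterToolsByPrefix and the NamedTool interface are illustrative; they assume each wrapped tool exposes a metadata.name field, as LlamaIndex tools do):

```typescript
// Minimal structural type matching what LlamaIndex tools expose
interface NamedTool {
  metadata: { name: string };
}

// Keep only tools whose name starts with a given prefix,
// e.g. 'GITHUB_' to restrict an agent to GitHub actions
function filterToolsByPrefix<T extends NamedTool>(tools: T[], prefix: string): T[] {
  return tools.filter(t => t.metadata.name.startsWith(prefix));
}

// Example with plain objects standing in for wrapped tools:
const allTools = [
  { metadata: { name: 'GITHUB_CREATE_ISSUE' } },
  { metadata: { name: 'SLACK_SEND_MESSAGE' } },
];
const githubTools = filterToolsByPrefix(allTools, 'GITHUB_');
```

Narrowing the tool list this way keeps the agent's tool-selection prompt small and reduces the chance of the model calling an unintended action.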

Next Steps

  - LangChain Provider: alternative agent framework
  - Tools API: learn about tools
  - Connected Accounts: set up authentication
  - Examples: view examples
