
Overview

AgentOS provides automated migration from LangChain Python projects. The migration tool scans your Python files, detects LangChain patterns, and converts them to AgentOS configuration.
Detection: The migration scanner looks for LangChain imports and patterns like ChatOpenAI, LLMChain, AgentExecutor, Tool, and more.
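
The detection step can be pictured as an AST walk over your Python files. The sketch below is illustrative (simplified to import detection only); the real scanner is built into `agentos migrate scan`:

```python
# Minimal sketch of the kind of scan the migration tool performs.
# Simplified to import detection; the real scanner also matches call
# patterns like ChatOpenAI(...) and AgentExecutor(...).
import ast

def detect_langchain(source: str) -> bool:
    """Return True if the source imports anything from langchain."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom):
            if node.module and node.module.startswith("langchain"):
                return True
        elif isinstance(node, ast.Import):
            if any(alias.name.startswith("langchain") for alias in node.names):
                return True
    return False

print(detect_langchain("from langchain.chat_models import ChatOpenAI"))  # True
```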

What Gets Migrated

The LangChain migration tool detects and converts:
LangChain Component                  AgentOS Equivalent            Status
ChatOpenAI, ChatAnthropic            Agent with model config       ✅ Full
AgentExecutor, create_react_agent    Agent with ReAct loop         ✅ Full
Tool, StructuredTool                 Integration or custom tool    ✅ Full
LLMChain, SequentialChain            Workflow                      ⚠️ Partial
ConversationBufferMemory             Agent memory                  ✅ Full
VectorStoreRetriever                 Memory with embeddings        ⚠️ Partial
OpenAIEmbeddings                     Embedding worker              ✅ Full

Quick Migration

1. Scan Your Project

cd /path/to/langchain-project
agentos migrate scan
Output:
{
  "frameworks": [
    {
      "framework": "langchain",
      "detected": true,
      "configPath": "./agent.py",
      "version": "0.1.0",
      "migratable": true
    }
  ]
}
2. Preview Migration

agentos migrate langchain --dry-run
Shows what will be created without making changes.
3. Execute Migration

agentos migrate langchain
Creates:
  • agents/*/agent.toml - Agent configurations
  • integrations/*.toml - Custom tools
  • workflows/*.toml - Chain definitions
  • data/migrations/langchain-{timestamp}.json - Migration report
4. Review & Test

# View migration report
agentos migrate report

# Test migrated agent
agentos agent list | grep langchain
agentos chat my_agent_llm_0

Migration Examples

Example 1: Simple LLM Agent

agent.py
from langchain.chat_models import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool

# Initialize LLM
llm = ChatOpenAI(
    model_name="gpt-4",
    temperature=0.7
)

# Define tools
tools = [
    Tool(
        name="web_search",
        func=lambda x: search_web(x),  # search_web is defined elsewhere in your project
        description="Search the web for information"
    )
]

# Create agent
agent = create_react_agent(llm, tools)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

# Run
result = executor.invoke({"input": "What's the weather?"})
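
After migration, Example 1 might produce an agent configuration along these lines. The field names below are illustrative; inspect the generated `agents/*/agent.toml` for the actual schema:

```toml
# agents/my_agent_llm_0/agent.toml (illustrative sketch; actual schema may differ)
name = "my_agent_llm_0"
model = "claude-sonnet-4-6"      # mapped from gpt-4 (see Model Mapping)
temperature = 0.7                # carried over from ChatOpenAI
system_prompt = "You are a helpful assistant."  # generic; customize post-migration
tools = ["web_search"]           # from the Tool named "web_search"
```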

Example 2: Agent with Custom Tools

tools_agent.py
from langchain.chat_models import ChatAnthropic
from langchain.agents import initialize_agent
from langchain.tools import StructuredTool
import requests

def fetch_data(url: str, max_results: int = 10):
    """Fetch data from a URL."""
    return requests.get(url).json()

def process_data(data: dict):
    """Process fetched data."""
    return {"processed": True, "count": len(data)}

llm = ChatAnthropic(model="claude-3-sonnet")

tools = [
    StructuredTool.from_function(
        func=fetch_data,
        name="fetch_data",
        description="Fetch data from a URL"
    ),
    StructuredTool.from_function(
        func=process_data,
        name="process_data",
        description="Process fetched data"
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True
)
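
A StructuredTool like fetch_data might be converted into an integration file roughly like this. The parameter layout is illustrative, derived from the function signature:

```toml
# integrations/fetch_data.toml (illustrative sketch; actual schema may differ)
name = "fetch_data"
description = "Fetch data from a URL"

[parameters.url]
type = "string"
required = true

[parameters.max_results]
type = "integer"
default = 10
```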

Example 3: Chain as Workflow

chain.py
from langchain.chains import LLMChain, SequentialChain
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# Chain 1: Generate outline
outline_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Create an outline for: {topic}"
)
outline_chain = LLMChain(llm=llm, prompt=outline_prompt, output_key="outline")

# Chain 2: Write content
content_prompt = PromptTemplate(
    input_variables=["outline"],
    template="Write content based on: {outline}"
)
content_chain = LLMChain(llm=llm, prompt=content_prompt, output_key="content")

# Sequential chain
full_chain = SequentialChain(
    chains=[outline_chain, content_chain],
    input_variables=["topic"],
    output_variables=["outline", "content"]
)

result = full_chain({"topic": "AI agents"})
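
The SequentialChain above maps naturally onto a two-step workflow. A sketch of the generated `workflows/*.toml` (field names are illustrative):

```toml
# workflows/content_pipeline.toml (illustrative sketch; actual schema may differ)
name = "content_pipeline"
inputs = ["topic"]

[[steps]]
id = "outline"                                # from outline_chain
prompt = "Create an outline for: {topic}"

[[steps]]
id = "content"                                # from content_chain
prompt = "Write content based on: {outline}"  # consumes the previous step's output
```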

Example 4: Memory with Retrieval

memory_agent.py
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

llm = ChatOpenAI(model="gpt-4")
memory = ConversationBufferMemory()
embeddings = OpenAIEmbeddings()

# Vector store for retrieval-based memory (not wired into the chain below)
vectorstore = Chroma(
    collection_name="conversations",
    embedding_function=embeddings
)

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

response = conversation.predict(input="Hello, I'm learning about agents")
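
The buffer memory converts cleanly, while the vector store side is a partial migration. The resulting agent config might carry a memory section like this (illustrative; check the generated file for the real schema):

```toml
# Memory section of the migrated agent.toml (illustrative sketch)
[memory]
type = "buffer"                # from ConversationBufferMemory
# Vector recall is a partial migration; wire up the embedding worker manually,
# e.g. by pointing the agent at your embedding functions.
```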

Pattern Detection

The migration tool detects these LangChain patterns:
# LLM initialization
ChatOpenAI(...)                    → agent with OpenAI model
ChatAnthropic(...)                 → agent with Anthropic model
ChatGoogleGenerativeAI(...)        → agent with Google model
AzureChatOpenAI(...)               → agent with Azure OpenAI

# Agent creation
create_react_agent(...)            → agent config
create_openai_tools_agent(...)     → agent config
AgentExecutor(...)                 → agent config
initialize_agent(...)              → agent config

# Tools
Tool(...)                          → integration
StructuredTool(...)                → integration
@tool decorator                    → integration

# Chains
LLMChain(...)                      → workflow step
SequentialChain(...)               → workflow
RouterChain(...)                   → workflow with conditional
ConversationChain(...)             → agent with memory
RetrievalQA(...)                   → agent with memory recall

# Memory
ConversationBufferMemory(...)      → agent memory
ConversationSummaryMemory(...)     → agent memory
VectorStoreMemory(...)             → agent memory with embeddings
ChatMessageHistory(...)            → session storage

# Retrievers
VectorStoreRetriever(...)          → memory recall
SelfQueryRetriever(...)            → memory search

# Embeddings
OpenAIEmbeddings(...)              → embedding worker
HuggingFaceEmbeddings(...)         → embedding worker
CohereEmbeddings(...)              → embedding worker

Model Mapping

LangChain models are mapped to AgentOS equivalents:
LangChain Model               AgentOS Model
gpt-4, gpt-4o, gpt-4-turbo    claude-sonnet-4-6
gpt-4o-mini, gpt-3.5-turbo    claude-haiku-3.5
claude-3-opus-20240229        claude-opus-4
claude-3-sonnet-20240229      claude-sonnet-4-6
claude-3-haiku-20240307       claude-haiku-3.5
gemini-pro, gemini-1.5-pro    claude-sonnet-4-6
llama-3-70b                   llama-3.3-70b
mixtral-8x7b                  mixtral-8x7b
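
In code, the mapping is just a lookup. The dictionary below mirrors the table; the fallback behavior is an assumption, not necessarily what the tool does for unrecognized models:

```python
# Illustrative lookup mirroring the mapping table above.
MODEL_MAP = {
    "gpt-4": "claude-sonnet-4-6",
    "gpt-4o": "claude-sonnet-4-6",
    "gpt-4-turbo": "claude-sonnet-4-6",
    "gpt-4o-mini": "claude-haiku-3.5",
    "gpt-3.5-turbo": "claude-haiku-3.5",
    "claude-3-opus-20240229": "claude-opus-4",
    "claude-3-sonnet-20240229": "claude-sonnet-4-6",
    "claude-3-haiku-20240307": "claude-haiku-3.5",
    "gemini-pro": "claude-sonnet-4-6",
    "gemini-1.5-pro": "claude-sonnet-4-6",
    "llama-3-70b": "llama-3.3-70b",
    "mixtral-8x7b": "mixtral-8x7b",
}

def map_model(langchain_model: str) -> str:
    # Unknown models fall back to a default here; the real tool may differ.
    return MODEL_MAP.get(langchain_model, "claude-sonnet-4-6")
```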

Tool Mapping

Common LangChain tools are mapped:
# Web tools
SerpAPIWrapper           → tool::web_search
GoogleSearchAPIWrapper   → tool::web_search
DuckDuckGoSearchRun      → tool::web_search
BraveSearch              → tool::web_search
WikipediaQueryRun        → tool::web_fetch

# File tools
ReadFileTool             → tool::file_read
WriteFileTool            → tool::file_write
ListDirectoryTool        → tool::file_list

# Code tools
PythonREPLTool           → tool::shell_exec
BashTool                 → tool::shell_exec

# Other
CalculatorTool           → tool::calculate
RequestsGetTool          → tool::web_fetch

Post-Migration Steps

1. Start AgentOS

Ensure AgentOS is running:
# Start iii-engine
iii --config config.yaml &

# Start workers
agentos start

# Or manually
cargo run --release -p agentos-core &
npx tsx src/agent-core.ts &
npx tsx src/tools.ts &
python workers/embedding/main.py &
2. Review System Prompts

Migrated agents have generic system prompts. Customize them:
# List migrated agents
ls agents/ | grep langchain

# Edit system prompts
for agent in agents/*langchain*/agent.toml; do
  echo "Reviewing: $agent"
  vim "$agent"
done
3. Implement Custom Tools

If you have custom LangChain tools, implement them as AgentOS functions:
src/my-tools.ts
import { init } from "iii-sdk";

const { registerFunction } = init("ws://localhost:49134", {
  workerName: "my-tools"
});

registerFunction(
  { id: "fetch_data", description: "Fetch data from URL" },
  async ({ url, max_results }: any) => {
    // Your implementation
    const response = await fetch(url);
    return await response.json();
  }
);
# Run the worker
npx tsx src/my-tools.ts &
4. Test Agents

Verify each migrated agent works:
# List agents
agentos agent list | grep langchain

# Test agent
agentos message my_agent_llm_0 "Test message"

# Interactive chat
agentos chat my_agent_llm_0
5. Migrate Data

If you have conversation history in LangChain:
migrate_history.py
import json
from langchain.memory import FileChatMessageHistory

# Load LangChain history
history = FileChatMessageHistory("langchain_history.json")
messages = history.messages

# Convert to AgentOS format
agentos_history = {
    "id": "migrated-session-1",
    "agent": "my_agent_llm_0",
    "history": [
        # LangChain stores message types as "human"/"ai"; map them to chat-style roles
        {"role": {"human": "user", "ai": "assistant"}.get(m.type, m.type), "content": m.content}
        for m in messages
    ],
    "created": "2025-03-01T00:00:00Z",
    "migrated": "2025-03-09T15:00:00Z",
    "source": "langchain"
}

# Save
with open("data/sessions/migrated-session-1.json", "w") as f:
    json.dump(agentos_history, f, indent=2)

Advanced Migration

Custom Config Path

agentos migrate langchain --config-dir /path/to/project

Skip Specific Patterns

Edit migration output manually:
# Dry run first
agentos migrate langchain --dry-run > migration-plan.json

# Review and edit
vim migration-plan.json

# Apply manually
# (Create agents/tools based on edited plan)

Programmatic Migration

import { init } from "iii-sdk";

const { trigger } = init("ws://localhost:49134", { workerName: "migrator" });

const result = await trigger("migrate::langchain", {
  dryRun: false,
  configDir: "/path/to/langchain/project"
}, 300_000);  // 5 minute timeout

console.log(`Migrated ${result.summary.migrated} items`);
console.log(`Skipped ${result.summary.skipped} items`);
console.log(`Errors: ${result.summary.errors}`);

Common Issues

If migration scan doesn’t find LangChain:
# Check for LangChain installation
pip show langchain

# Look for Python files with LangChain imports
grep -r "from langchain" .
grep -r "import langchain" .

# Specify directory explicitly
agentos migrate langchain --config-dir /path/to/project
If custom tools were skipped during migration, implement them as AgentOS functions:
# Check migration report for skipped tools
agentos migrate report | grep -A 5 "tool"

# Implement as AgentOS tool
vim src/my-custom-tools.ts
npx tsx src/my-custom-tools.ts &

# Update agent config to include custom tool
vim agents/my-agent/agent.toml
# Add to tools: ["my_custom_tool"]
If a chain was only partially converted, it may need manual workflow creation:
# Check what was migrated
ls workflows/

# Create workflow manually if needed
vim workflows/my-workflow.toml

# Test workflow
agentos workflow run my-workflow --input '{"key": "value"}'
If vector-based memory fails, ensure the embedding worker is running:
# Check if embedding worker is running
ps aux | grep "python.*embedding"

# Start embedding worker
python workers/embedding/main.py &

# Verify it's registered
curl http://localhost:3111/functions | jq '.[] | select(.id | startswith("embedding::"))'

Migration Checklist

  • Run agentos migrate scan to detect LangChain
  • Review dry-run output: agentos migrate langchain --dry-run
  • Execute migration: agentos migrate langchain
  • Review migration report: agentos migrate report
  • Customize system prompts in agents/*/agent.toml
  • Implement custom tools in src/my-tools.ts
  • Configure API keys: agentos config set-key
  • Test each agent: agentos chat <agent-name>
  • Migrate conversation history (if needed)
  • Update application code to use AgentOS API
  • Remove LangChain dependencies: pip uninstall langchain

Next Steps

Creating Agents

Customize your migrated agents

Creating Tools

Build custom tools for your agents

Testing

Test your migrated setup

Migration Overview

General migration guide
