Welcome to LangChain
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development – all while future-proofing decisions as the underlying technology evolves.

Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.

Real-time data augmentation
Connect LLMs to diverse data sources and external systems with our vast library of integrations
Model interoperability
Swap models in and out as the industry evolves - LangChain’s abstractions keep you moving
Rapid prototyping
Build and iterate quickly with modular, component-based architecture
Production-ready features
Deploy with built-in monitoring, evaluation, and debugging through LangSmith
Vibrant ecosystem
Leverage integrations, templates, and community-contributed components
Flexible abstraction layers
Work at any level - from high-level chains to low-level components
Key features
Composable components
Build LLM applications by composing modular, reusable components. Chain together prompts, models, output parsers, and tools using the LCEL (LangChain Expression Language) syntax.
Multi-agent orchestration
Create sophisticated agent workflows with LangGraph. Build agents that can plan, use subagents, and handle complex tasks with long-term memory and human-in-the-loop workflows.
Extensive integrations
Connect to 100+ model providers including OpenAI, Anthropic, Google, Ollama, and more. Access vector stores, document loaders, and tools through standardized interfaces.
Retrieval augmented generation
Build RAG applications with built-in support for document loading, text splitting, embeddings, vector stores, and retrieval patterns.
Streaming and async
Stream responses token-by-token for better user experience. Full async support for concurrent operations and high-throughput applications.
Production monitoring
Integrate with LangSmith for debugging, evaluation, and monitoring. Track agent trajectories and improve performance over time.
LangChain ecosystem
While LangChain can be used standalone, it integrates seamlessly with the broader LangChain ecosystem:

LangGraph
Low-level agent orchestration framework for building controllable, stateful agent workflows
Deep Agents
Build agents that can plan, use subagents, and leverage file systems for complex tasks
LangSmith
Debug, evaluate, and monitor LLM applications. Gain visibility into production performance
LangSmith Deployment
Deploy and scale agents with a purpose-built platform for stateful workflows
Get started
Quickstart
Build your first LLM application in minutes
Installation
Install LangChain and configure dependencies
Core concepts
Understand the framework architecture and key abstractions
Integrations
Explore available model providers and tools
Community and support
GitHub
Star us on GitHub and contribute to the project
Forum
Connect with the community and ask questions
Follow us for updates and announcements
Looking for the JavaScript/TypeScript version? Check out LangChain.js.