Quick Start
Set your API key
Configure your OpenAI API key as a secret:
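The exact command depends on the platform CLI; a hypothetical sketch (the `superserve secrets set` subcommand and its syntax are assumptions, check the Secrets Management reference):

```shell
# Hypothetical command -- verify against the CLI reference
superserve secrets set OPENAI_API_KEY sk-...
```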
Secrets are encrypted at rest and injected at the network level. The agent never sees them in logs or LLM context.
Using Different LLM Providers
LangChain supports multiple LLM providers. Here are some examples:
Conversation Memory

Add conversation memory to maintain context across turns:

agent.py
Adding Tools
LangChain supports tools for extending agent capabilities:

agent.py
RAG (Retrieval-Augmented Generation)
Build a RAG system with LangChain and persist the vector store:

agent.py
Deployment Configuration
Create a superserve.yaml file for advanced deployment options:
superserve.yaml
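The schema is defined by the platform; a hypothetical sketch (every key below is an assumption, verify against the CLI Reference):

```yaml
# Hypothetical schema -- verify key names against the CLI reference
name: my-langchain-agent
entrypoint: agent.py
```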
Dependencies
Create a requirements.txt with your dependencies:
requirements.txt
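For the examples on this page, a minimal set might look like this (pin versions as needed; langchain-chroma is only required for the RAG example):

```
langchain
langchain-openai
langchain-community
langchain-chroma
```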
Alternatively, use a pyproject.toml:
pyproject.toml
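A minimal PEP 621-style sketch; the project name is a placeholder:

```toml
[project]
name = "my-agent"          # placeholder name
version = "0.1.0"
dependencies = [
    "langchain",
    "langchain-openai",
]
```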
LangChain Expression Language (LCEL)
Use LCEL for composable chains:

agent.py
Troubleshooting
Import error: No module named 'langchain'
Make sure you have a requirements.txt or pyproject.toml with langchain and related packages listed. Redeploy your agent:
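A hypothetical redeploy command (the subcommand name is an assumption, check the CLI Reference):

```shell
# Hypothetical command -- verify against the CLI reference
superserve deploy
```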
API key not found
Set your API key as a secret (see "Set your API key" above).
Vector store not persisting
Make sure you’re saving to /workspace and that the path is not excluded by your .gitignore or superserve.yaml.

Next Steps
Core Concepts
Learn about isolation, persistence, and credentials
CLI Reference
Explore deployment options and CLI commands
Secrets Management
Manage API keys and environment variables
Session Management
Work with persistent sessions