LangChain is a popular framework for building applications with large language models. Superserve provides a production-ready deployment platform with isolation, persistence, and governance. This guide walks through deploying a LangChain agent on Superserve, from a minimal chatbot to tools and RAG.

Quick Start

1. Install the CLI

curl -fsSL https://superserve.ai/install | sh
2. Create your agent

Create a file called agent.py with your LangChain application:
agent.py
"""
Minimal chatbot built with LangChain deployed on Superserve.
"""

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
system = SystemMessage(content="You are a helpful assistant.")

while True:
    try:
        user_input = input()
    except EOFError:
        break
    response = llm.invoke([system, HumanMessage(content=user_input)])
    print(response.content)
3. Deploy your agent

Log in and deploy your agent:
superserve login
superserve deploy agent.py --name chatbot
4. Set your API key

Configure your OpenAI API key as a secret:
superserve secrets set chatbot OPENAI_API_KEY=sk-...
Secrets are encrypted at rest and injected at the network level. The agent never sees them in logs or LLM context.
5. Run your agent

Start an interactive session:
superserve run chatbot
You > What is the capital of France?

Agent > The capital of France is Paris.

Completed in 1.2s

Using Different LLM Providers

LangChain supports many LLM providers; switching between them usually means changing a single import and constructor. For example, with OpenAI:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
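One way to keep the swap in one place is a small factory keyed on a provider name. This is a sketch, not a Superserve or LangChain API: `make_llm` is a hypothetical helper, the model identifiers are illustrative, and each branch assumes the matching `langchain-*` integration package is installed (imports are done lazily so only the provider you use needs to be present).

```python
def make_llm(provider: str):
    """Hypothetical helper: build a chat model for the named provider.

    Model names below are illustrative; substitute current ones.
    """
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-4o")
    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model="claude-3-5-sonnet-latest")
    raise ValueError(f"Unknown provider: {provider!r}")

# llm = make_llm("anthropic")  # then call llm.invoke(...) exactly as before
```

Because the rest of the agent only calls `llm.invoke(...)`, nothing else changes when you switch providers.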
Make sure to set the appropriate API key:
# For OpenAI
superserve secrets set chatbot OPENAI_API_KEY=sk-...

# For Anthropic
superserve secrets set chatbot ANTHROPIC_API_KEY=sk-ant-...

# For Google
superserve secrets set chatbot GOOGLE_API_KEY=...

# For Cohere
superserve secrets set chatbot COHERE_API_KEY=...

Conversation Memory

Add conversation memory to maintain context across turns. Note that ConversationBufferMemory and ConversationChain are deprecated in newer LangChain releases; they still work on older versions but emit deprecation warnings:
agent.py
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

llm = ChatOpenAI(model="gpt-4o")
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=False
)

while True:
    try:
        user_input = input()
    except EOFError:
        break
    response = conversation.predict(input=user_input)
    print(response)
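If you prefer to avoid the deprecated memory classes, the same buffer behavior is easy to keep by hand: LangChain chat models accept a list of `(role, content)` tuples, so the loop can append each turn and trim old ones before every call. A minimal sketch; `trim_history` is a hypothetical helper, not a LangChain API:

```python
def trim_history(history, max_turns=10):
    """Keep the system message plus the last `max_turns` user/assistant pairs."""
    system = [m for m in history if m[0] == "system"]
    rest = [m for m in history if m[0] != "system"]
    return system + rest[-2 * max_turns:]

history = [("system", "You are a helpful assistant.")]
# Inside the input loop, replace the llm.invoke call with:
#   history.append(("human", user_input))
#   response = llm.invoke(trim_history(history))
#   history.append(("ai", response.content))
```

Trimming keeps token usage bounded on long sessions, which an unbounded ConversationBufferMemory does not.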

Adding Tools

LangChain supports tools for extending agent capabilities:
agent.py
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location.
    
    Args:
        location: The city name
    """
    # Your weather API logic here
    return f"The weather in {location} is sunny."

llm = ChatOpenAI(model="gpt-4o")
tools = [get_weather]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful weather assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)

while True:
    try:
        user_input = input()
    except EOFError:
        break
    result = agent_executor.invoke({"input": user_input})
    print(result["output"])
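The `@tool` decorator derives the tool's name and description from the function name and docstring; the model reads these to decide when to call the tool, so write them the way you'd write an API reference. A rough plain-Python sketch of that derivation (the real decorator additionally builds an argument schema from the type hints; `tool_metadata` here is a hypothetical illustration, not a LangChain function):

```python
import inspect

def tool_metadata(fn):
    """Approximate what @tool extracts: name from the function, description
    from the first line of its docstring."""
    doc = inspect.getdoc(fn) or ""
    return {"name": fn.__name__, "description": doc.split("\n")[0]}

def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny."

meta = tool_metadata(get_weather)
# meta == {"name": "get_weather",
#          "description": "Get the current weather for a location."}
```

Since the tool body is ordinary Python, it can also be unit-tested directly before you deploy.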

RAG (Retrieval-Augmented Generation)

Build a RAG system with LangChain and persist the vector store:
agent.py
from pathlib import Path
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain_core.documents import Document

# Persistent storage
WORKSPACE = Path("/workspace")
VECTOR_STORE_PATH = WORKSPACE / "vector_store"

llm = ChatOpenAI(model="gpt-4o")
embeddings = OpenAIEmbeddings()

# Load or create vector store
if VECTOR_STORE_PATH.exists():
    vectorstore = FAISS.load_local(
        str(VECTOR_STORE_PATH),
        embeddings,
        allow_dangerous_deserialization=True
    )
    print("Loaded existing vector store")
else:
    # Initialize with sample documents
    documents = [
        Document(page_content="Paris is the capital of France."),
        Document(page_content="London is the capital of the UK."),
    ]
    vectorstore = FAISS.from_documents(documents, embeddings)
    vectorstore.save_local(str(VECTOR_STORE_PATH))
    print("Created new vector store")

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

while True:
    try:
        user_input = input()
    except EOFError:
        break
    result = qa_chain.invoke({"query": user_input})
    print(result["result"])
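The sample above indexes two short sentences; real documents should be split into overlapping chunks before embedding so retrieval returns focused passages. LangChain ships text splitters for this (e.g. RecursiveCharacterTextSplitter), but the core sliding-window idea is simple enough to sketch directly; the sizes below are illustrative, not recommendations:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50):
    """Split text into overlapping windows of at most `size` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# Each chunk would then be wrapped in a Document before indexing, e.g.:
# vectorstore.add_documents([Document(page_content=c) for c in chunks])
```

The overlap ensures a sentence straddling a chunk boundary still appears whole in at least one chunk.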

Deployment Configuration

Create a superserve.yaml file for advanced deployment options:
superserve.yaml
name: chatbot
command: python agent.py
secrets:
  - OPENAI_API_KEY
ignore:
  - "*.pyc"
  - __pycache__
  - .git
  - vector_store/
Then deploy with:
superserve deploy

Dependencies

Create a requirements.txt with your dependencies:
requirements.txt
langchain
langchain-openai
langchain-anthropic
langchain-community
faiss-cpu
python-dotenv
Or use pyproject.toml:
pyproject.toml
[project]
name = "my-chatbot"
version = "0.1.0"
dependencies = [
    "langchain",
    "langchain-openai",
    "langchain-anthropic",
    "langchain-community",
    "faiss-cpu",
    "python-dotenv",
]
Superserve automatically installs dependencies during deployment.

LangChain Expression Language (LCEL)

Use LCEL for composable chains:
agent.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])

chain = prompt | llm | StrOutputParser()

while True:
    try:
        user_input = input()
    except EOFError:
        break
    response = chain.invoke({"input": user_input})
    print(response)
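The `|` syntax works because LangChain runnables overload Python's `__or__` to compose steps left to right: the output of each stage becomes the input of the next. A toy illustration of the mechanism, not LangChain's actual implementation:

```python
class Step:
    """Minimal stand-in for a runnable: wraps a function, composes with |."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        # self | other -> a Step that runs self first, then other
        return Step(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

chain = Step(str.strip) | Step(str.upper)
print(chain.invoke("  bonjour  "))  # prints "BONJOUR"
```

In the real library, prompt templates, models, and output parsers all implement this runnable interface, which is why `prompt | llm | StrOutputParser()` type-checks at each boundary.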

Troubleshooting

Import errors (e.g. ModuleNotFoundError for langchain)
Make sure you have a requirements.txt or pyproject.toml with langchain and related packages listed, then redeploy your agent:
superserve deploy agent.py --name chatbot

Authentication errors from your LLM provider
Set your API key as a secret:
superserve secrets set chatbot OPENAI_API_KEY=sk-...

Vector store not persisting between sessions
Make sure you're saving to /workspace and not excluding it in your .gitignore or superserve.yaml:
VECTOR_STORE_PATH = Path("/workspace") / "vector_store"
vectorstore.save_local(str(VECTOR_STORE_PATH))

Next Steps

Core Concepts

Learn about isolation, persistence, and credentials

CLI Reference

Explore deployment options and CLI commands

Secrets Management

Manage API keys and environment variables

Session Management

Work with persistent sessions
