Overview
This example demonstrates how to create an LLM-powered chat agent with streaming capabilities. You'll learn how to:

- Create an LLM agent using `OpenAIProvider`
- Perform simple Q&A without context retention
- Build multi-turn conversations with context
- Stream responses in real-time
- Get both streaming and full responses
What You’ll Learn
- Setting up the OpenAI provider from environment variables
- Using `LLMAgentBuilder` for agent configuration
- The difference between the `ask()` and `chat()` methods
- Implementing streaming with `ask_stream()` and `chat_stream()`
- Handling streaming responses with tokio-stream
Prerequisites
- Rust 1.75 or higher
- OpenAI API key (set as the `OPENAI_API_KEY` environment variable)
- Optional: custom API endpoint (e.g., Ollama) via `OPENAI_BASE_URL`
Complete Source Code
View the complete example in the MoFA repository.

Running the Example
```bash
export OPENAI_API_KEY="your-api-key-here"

# Optional: Use a custom endpoint (e.g., Ollama)
export OPENAI_BASE_URL="http://localhost:11434/v1"
```
Expected Output
Key Concepts
ask() vs chat()
ask() - Stateless Q&A
The `ask()` method performs stateless queries. Each call is independent, with no context retention.
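A minimal sketch of a stateless query (the crate path, `from_env`, and the builder calls are assumptions for illustration; check the example source above for MoFA's actual API):

```rust
use mofa::prelude::*; // hypothetical import path

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Provider credentials come from OPENAI_API_KEY / OPENAI_BASE_URL.
    let provider = OpenAIProvider::from_env()?;
    let agent = LLMAgentBuilder::new().provider(provider).build()?;

    // Each ask() call is independent: the second question cannot
    // resolve "its" because no context is retained.
    let first = agent.ask("What is the capital of France?").await?;
    let second = agent.ask("What is its population?").await?;
    println!("{first}\n{second}");
    Ok(())
}
```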
chat() - Contextual Conversation
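Successive `chat()` calls share history, so a follow-up can use pronouns that refer to earlier turns. A sketch (builder and method names are assumptions, not MoFA's verified API):

```rust
// Sketch only - inside an async context, with error handling elided.
let agent = LLMAgentBuilder::new()
    .provider(OpenAIProvider::from_env()?)
    .build()?;

let first = agent.chat("Recommend a sci-fi novel.").await?;
// "it" resolves because chat() retains the conversation history.
let second = agent.chat("What is it about?").await?;
```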
The `chat()` method maintains conversation context across multiple turns.

Streaming Responses
MoFA provides multiple streaming methods:

- `ask_stream()` - stream responses for stateless queries
- `chat_stream()` - stream responses with conversation context
- `chat_stream_with_full()` - stream tokens and also receive the full response
Stream responses for stateless queries:
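A sketch of consuming the token stream with tokio-stream (the chunk type and error handling are assumptions; see the example source for the real signatures):

```rust
use tokio_stream::StreamExt;

// Inside an async context, with `agent` built as elsewhere in this example:
let mut stream = agent.ask_stream("Explain ownership in Rust.").await?;
while let Some(chunk) = stream.next().await {
    // Print each token as it arrives rather than waiting for the full reply.
    print!("{}", chunk?);
}
println!();
```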
Configuration Options
Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | - | Your OpenAI API key |
| `OPENAI_BASE_URL` | No | `https://api.openai.com/v1` | Custom API endpoint |
| `OPENAI_MODEL` | No | `gpt-4` | Model to use |
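The fallback behavior in the table can be mirrored in plain Rust. A self-contained sketch (MoFA's own precedence logic may differ):

```rust
use std::env;

/// Endpoint with the default from the table above.
fn base_url() -> String {
    env::var("OPENAI_BASE_URL")
        .unwrap_or_else(|_| "https://api.openai.com/v1".to_string())
}

/// Model with the default from the table above.
fn model() -> String {
    env::var("OPENAI_MODEL").unwrap_or_else(|_| "gpt-4".to_string())
}

fn main() {
    // OPENAI_API_KEY has no default: fail fast with a clear message.
    match env::var("OPENAI_API_KEY") {
        Ok(_) => println!("endpoint: {}, model: {}", base_url(), model()),
        Err(_) => eprintln!("OPENAI_API_KEY not found"),
    }
}
```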
Builder Options
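The example configures the agent through `LLMAgentBuilder`. The knobs below are illustrative guesses, not confirmed options; consult the example source for what the builder actually exposes:

```rust
// Hypothetical builder options - names are assumptions.
let agent = LLMAgentBuilder::new()
    .provider(OpenAIProvider::from_env()?) // credentials from the environment
    .model("gpt-4")                        // override the default model
    .system_prompt("You are a helpful assistant.")
    .build()?;
```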
Common Use Cases
- Customer Support - Build chatbots with context-aware responses
- Content Generation - Stream generated content in real-time
- Code Assistant - Help users with programming questions
- Data Analysis - Query and analyze data conversationally
Troubleshooting
API Key Error
Error: `OPENAI_API_KEY not found`

Solution: Set your API key:
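As shown under Running the Example, exporting the key in the shell that launches the program is enough:

```bash
export OPENAI_API_KEY="your-api-key-here"
```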
Connection Timeout
Error: `Connection timeout`

Solution: Check your network or use a proxy:
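For example (whether MoFA's HTTP client honors the conventional proxy variables is an assumption; pointing `OPENAI_BASE_URL` at a reachable endpoint is the documented alternative):

```bash
# Conventional proxy variable; support depends on the HTTP client.
export HTTPS_PROXY="http://127.0.0.1:7890"
# Or target a locally reachable endpoint such as Ollama:
export OPENAI_BASE_URL="http://localhost:11434/v1"
```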
Rate Limiting
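Client-side retry with exponential backoff is the usual mitigation for rate limits. A self-contained sketch in plain Rust (the closure simulates an API call; wire in a real request yourself):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay after
/// each failure (1x, 2x, 4x, ... the base delay).
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(e);
                }
                sleep(base_delay * 2u32.pow(attempt - 1));
            }
        }
    }
}

fn main() {
    // Simulated API: fails twice with a rate-limit error, then succeeds.
    let mut calls = 0;
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("Rate limit exceeded") } else { Ok("response") }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok("response"));
    println!("succeeded after {calls} calls");
}
```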
Error: `Rate limit exceeded`

Solution: Implement retry logic or upgrade your API plan.

Next Steps
- ReAct Agent - Add reasoning and tool use
- Multi-Agent - Coordinate multiple agents
- LLM Integration - Deep dive into LLM features
- Streaming Guide - Advanced streaming patterns