
Overview

This guide will help you create your first MoFA agent. By the end, you’ll have a working LLM-powered agent that can answer questions and perform tasks.
Prerequisites: You need Rust 1.85 or newer. If you don’t have Rust installed, see the Installation guide.

Set up your environment

1. Clone the repository

Get the MoFA source code:
git clone https://github.com/mofa-org/mofa.git
cd mofa
2. Verify installation

Build the project to ensure everything works:
cargo build
cargo test -p mofa-sdk
This will download dependencies and compile the framework. It may take a few minutes on first run.
3. Configure your LLM provider

Create a .env file in your project root:
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o
MoFA supports any OpenAI-compatible endpoint. Use Ollama for local models or OpenRouter for access to multiple providers.
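For example, a .env pointing at a local Ollama server might look like the sketch below. Note that OPENAI_BASE_URL is an assumed variable name for the endpoint override; check your MoFA version's configuration reference for the exact name it reads.

```ini
# Hypothetical .env for a local Ollama endpoint (variable names are assumptions).
OPENAI_API_KEY=ollama                       # Ollama ignores the key, but one must be set
OPENAI_BASE_URL=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
OPENAI_MODEL=llama3.1                       # any model you have pulled locally
```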

Run your first example

Let’s run a simple chat example to verify everything works:
cd examples
cargo run -p chat_stream
You should see output like:
========================================
  MoFA LLM Agent
========================================

Agent loaded: LLM Agent
Agent ID: llm-agent-...

--- Chat Demo ---

Q: Hello! What can you help me with?
A: I'm an AI assistant that can help you with...
If you see this output, congratulations! Your MoFA installation is working correctly.

Create your first agent

Now let’s build a custom agent from scratch. Create a new Rust project:
cargo new my_agent
cd my_agent

Add dependencies

Edit Cargo.toml:
[dependencies]
mofa-sdk = { git = "https://github.com/mofa-org/mofa", branch = "main" }
tokio = { version = "1", features = ["full"] }
async-trait = "0.1"
dotenvy = "0.15"
tracing = "0.1"
tracing-subscriber = "0.3"

Write your agent

Create src/main.rs with this code:
use std::sync::Arc;
use dotenvy::dotenv;
use mofa_sdk::kernel::agent::prelude::*;
use mofa_sdk::llm::{LLMClient, openai_from_env};

struct LLMAgent {
    id: String,
    name: String,
    capabilities: AgentCapabilities,
    state: AgentState,
    client: LLMClient,
}

impl LLMAgent {
    fn new(client: LLMClient) -> Self {
        Self {
            id: "llm-agent-1".to_string(),
            name: "LLM Agent".to_string(),
            capabilities: AgentCapabilities::builder()
                .tag("llm").tag("qa")
                .input_type(InputType::Text)
                .output_type(OutputType::Text)
                .build(),
            state: AgentState::Created,
            client,
        }
    }
}

#[async_trait::async_trait]
impl MoFAAgent for LLMAgent {
    fn id(&self)           -> &str               { &self.id }
    fn name(&self)         -> &str               { &self.name }
    fn capabilities(&self) -> &AgentCapabilities { &self.capabilities }
    fn state(&self)        -> AgentState         { self.state.clone() }

    async fn initialize(&mut self, _ctx: &AgentContext) -> AgentResult<()> {
        self.state = AgentState::Ready;
        Ok(())
    }

    async fn execute(&mut self, input: AgentInput, _ctx: &AgentContext) -> AgentResult<AgentOutput> {
        self.state = AgentState::Executing;
        let answer = self.client
            .ask_with_system("You are a helpful Rust expert.", &input.to_text())
            .await
            .map_err(|e| AgentError::ExecutionFailed(e.to_string()))?;
        self.state = AgentState::Ready;
        Ok(AgentOutput::text(answer))
    }

    async fn shutdown(&mut self) -> AgentResult<()> {
        self.state = AgentState::Shutdown;
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();
    tracing_subscriber::fmt::init();

    let provider = openai_from_env()?;
    let client   = LLMClient::new(Arc::new(provider));

    let mut agent = LLMAgent::new(client);
    let ctx       = AgentContext::new("exec-001");

    agent.initialize(&ctx).await?;

    let output = agent.execute(
        AgentInput::text("What is the borrow checker in Rust?"),
        &ctx,
    ).await?;

    println!("{}", output.as_text().unwrap_or("(no answer)"));
    agent.shutdown().await?;
    Ok(())
}

Run your agent

Make sure your .env file has your API key, then run:
cargo run
You should see a detailed explanation of Rust’s borrow checker!

Understanding the code

Let’s break down what this agent does:
1. Define the agent struct

struct LLMAgent {
    id: String,
    name: String,
    capabilities: AgentCapabilities,
    state: AgentState,
    client: LLMClient,
}
Every MoFA agent needs an ID, name, capabilities, and state. The LLMClient handles communication with the LLM provider.
2. Implement the MoFAAgent trait

#[async_trait::async_trait]
impl MoFAAgent for LLMAgent {
    fn id(&self) -> &str { &self.id }
    // ... other methods
}
The MoFAAgent trait defines the interface all agents must implement. This includes lifecycle methods (initialize, execute, shutdown) and metadata accessors.
3. The execute method

async fn execute(&mut self, input: AgentInput, _ctx: &AgentContext) -> AgentResult<AgentOutput> {
    self.state = AgentState::Executing;
    let answer = self.client
        .ask_with_system("You are a helpful Rust expert.", &input.to_text())
        .await
        .map_err(|e| AgentError::ExecutionFailed(e.to_string()))?;
    self.state = AgentState::Ready;
    Ok(AgentOutput::text(answer))
}
The execute method is where the agent does its work: it marks the agent as Executing, sends the input to the LLM, restores Ready, and returns the answer as output. Note that if the LLM call fails, the `?` operator returns early and the state remains Executing; production code may want to restore the state before propagating the error.
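In the execute method above, a failed LLM call returns early via `?` and leaves the state as Executing. One way to restore the state on both success and failure is to capture the result before propagating it. Here is a minimal, self-contained sketch of that pattern; the types and call_llm below are illustrative stand-ins, not the real mofa-sdk API:

```rust
// Stand-in types that model the agent state machine, NOT the mofa-sdk types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AgentState {
    Ready,
    Executing,
}

struct Agent {
    state: AgentState,
}

impl Agent {
    /// Simulates an LLM call that can fail (empty input stands in for an API error).
    fn call_llm(&self, input: &str) -> Result<String, String> {
        if input.is_empty() {
            Err("empty input".to_string())
        } else {
            Ok(format!("answer to: {input}"))
        }
    }

    /// Like `execute`, but restores Ready even when the call fails.
    fn execute(&mut self, input: &str) -> Result<String, String> {
        self.state = AgentState::Executing;
        let result = self.call_llm(input);
        // Restore the state before propagating either outcome.
        self.state = AgentState::Ready;
        result
    }
}

fn main() {
    let mut agent = Agent { state: AgentState::Ready };

    // Failing call: the error propagates, but the state is back to Ready.
    assert!(agent.execute("").is_err());
    assert_eq!(agent.state, AgentState::Ready);

    // Successful call also ends in Ready.
    assert!(agent.execute("What is Rust?").is_ok());
    assert_eq!(agent.state, AgentState::Ready);

    println!("state handling ok");
}
```

Capturing the result into a local binding instead of using `?` directly is the simplest way to run cleanup code on both paths without pulling in a guard type.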

Explore more examples

MoFA includes 27+ examples demonstrating different features:

ReAct agent

Reasoning + Acting pattern with tool use
cargo run -p react_agent

Secretary agent

Human-in-the-loop workflow management
cargo run -p secretary_agent

Multi-agent coordination

Multiple agents collaborating
cargo run -p multi_agent_coordination

Rhai hot reload

Runtime script hot-reloading
cargo run -p rhai_hot_reload

Example: ReAct agent with tools

The ReAct pattern combines reasoning and acting. Here’s a simplified version:
use mofa_sdk::react::{ReActAgent, ReActTool};
use mofa_sdk::llm::LLMAgentBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create LLM agent
    let llm_agent = Arc::new(
        LLMAgentBuilder::from_env()?
            .with_system_prompt("You are a helpful assistant.")
            .build()
    );

    // Create ReAct agent with tools
    let react_agent = ReActAgent::builder()
        .with_llm(llm_agent)
        // WebSearchTool and CalculatorTool are placeholders here; in the full
        // react_agent example they are types implementing the ReActTool trait.
        .with_tool(Arc::new(WebSearchTool))
        .with_tool(Arc::new(CalculatorTool))
        .with_max_iterations(5)
        .build_async()
        .await?;

    // Run a task
    let result = react_agent
        .run("What is Rust and when was it first released?")
        .await?;

    println!("Answer: {}", result.answer);
    Ok(())
}
Run the full example:
cd examples
cargo run -p react_agent

Using the high-level API

For simpler use cases, use the high-level LLMAgent:
use mofa_sdk::llm::LLMAgentBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build agent from environment
    let agent = LLMAgentBuilder::from_env()?
        .with_name("My Assistant")
        .with_system_prompt("You are a helpful coding assistant.")
        .with_temperature(0.7)
        .build();

    // Simple question-answer
    let answer = agent.ask("Explain async/await in Rust").await?;
    println!("{}", answer);

    // Multi-turn conversation
    agent.chat("My name is Alice").await?;
    let response = agent.chat("What's my name?").await?;
    println!("{}", response); // Should remember "Alice"

    Ok(())
}

Next steps

Installation guide

Learn about different installation methods and OS-specific setup

Architecture overview

Understand the microkernel and plugin system

API reference

Explore the complete Rust API documentation

Examples

Browse all 27+ examples
Tip: Join our Discord community to get help, share your projects, and connect with other MoFA developers.
