Overview

This example demonstrates a complete multi-agent system built on MoFA’s microkernel architecture. It showcases:
  • Master-worker coordination pattern
  • LLM-powered intelligent task assignment
  • Worker agents with different specialties (Analyst, Coder, Writer)
  • Runtime agent registration and lifecycle management
  • Message bus communication between agents
  • Real-time workload balancing

What You’ll Learn

  • Implementing MoFAAgent trait for custom agents
  • Using SimpleRuntime for agent lifecycle management
  • LLM-based decision making for task routing
  • Agent capability matching and workload optimization
  • Message bus pattern for inter-agent communication

Prerequisites

  • Rust 1.75 or higher
  • OpenAI API key
  • Understanding of async patterns and message passing

Architecture

A single master agent receives task requests, inspects each worker's specialty and current load, and routes work over the message bus to the best-suited worker (Analyst, Coder, or Writer). Workers report results and metrics back to the master.
Source Code

use mofa_sdk::kernel::{AgentConfig, MoFAAgent, AgentState};
use mofa_sdk::llm::LLMClient;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

pub struct MasterAgent {
    id: String,
    name: String,
    state: AgentState,
    llm_client: Arc<LLMClient>,
    worker_states: Arc<RwLock<HashMap<String, WorkerState>>>,
}

impl MasterAgent {
    pub fn new(config: AgentConfig, llm_client: Arc<LLMClient>) -> Self {
        // Implementation...
    }

    /// Register a worker with specialty
    pub async fn register_worker(
        &self, 
        worker_id: String, 
        specialty: WorkerSpecialty
    ) {
        let state = WorkerState {
            specialty,
            load: 0,
            tasks_completed: 0,
            tasks_failed: 0,
        };
        self.worker_states.write().await.insert(worker_id, state);
    }

    /// Use LLM to intelligently assign tasks
    async fn assign_task_with_llm(
        &self, 
        task: &TaskRequest
    ) -> Result<String, Box<dyn std::error::Error>> {
        let workers_map = self.worker_states.read().await;
        
        // Build worker description
        let worker_info: Vec<String> = workers_map
            .iter()
            .map(|(id, state)| {
                format!(
                    "- {}: {} (load: {}, completed: {})",
                    id, state.specialty, state.load, state.tasks_completed
                )
            })
            .collect();

        // Ask LLM to select best worker
        let prompt = format!(
            "Available workers:\n{}\n\n\
             Task: {}\nPriority: {:?}\n\n\
             Select the best worker based on specialty and load.",
            worker_info.join("\n"), task.content, task.priority
        );

        let response = self.llm_client
            .chat()
            .system("You are a task dispatcher")
            .user(&prompt)
            .temperature(0.3)
            .send()
            .await?;

        let worker_id = self.parse_worker_id(
            response.content().unwrap_or(""),
            &workers_map
        ).await?;

        Ok(worker_id)
    }
}
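The `parse_worker_id` helper called above is not shown in this excerpt. A minimal sketch of what it could do, assuming the simplest strategy of matching the LLM's free-text reply against the registered worker IDs (the actual example may parse more strictly, and its version is async):

```rust
use std::collections::HashMap;

/// Hypothetical sketch: return the first registered worker ID that
/// appears in the LLM's reply, or an error if none is found.
fn parse_worker_id<V>(
    reply: &str,
    workers: &HashMap<String, V>,
) -> Result<String, String> {
    workers
        .keys()
        .find(|id| reply.contains(id.as_str()))
        .cloned()
        .ok_or_else(|| format!("no known worker ID in reply: {reply}"))
}
```

Falling back to an error (rather than a default worker) keeps a malformed LLM reply from silently misrouting a task.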

Running the Example

1. Set API Key

export OPENAI_API_KEY="your-api-key"

2. Run All Scenarios

cd examples/multi_agent_coordination
cargo run

3. Run a Specific Scenario

# Code review
cargo run -- --scenario=code-review

# Documentation generation
cargo run -- --scenario=doc-generation

# Problem diagnosis
cargo run -- --scenario=diagnosis

Demo Scenarios

Scenario: Analyze code for security and performance issues
let code_snippet = r#"
fn process_data(input: &str) -> String {
    let mut result = String::new();
    for ch in input.chars() {
        if ch.is_ascii() {
            result.push(ch.to_ascii_uppercase());
        }
    }
    result
}
"#;

let task = TaskRequest {
    task_id: "review_001".to_string(),
    content: format!("Analyze this code: {}", code_snippet),
    priority: TaskPriority::High,
};
Master: Assigns to Analyst worker based on capabilities
Analyst: Reviews code and provides detailed feedback

Expected Output

======================================================================
 MoFA Multi-Agent Coordination - Microkernel Architecture
======================================================================

LLM Provider initialized
SimpleRuntime initialized
Master Agent registered to runtime
Worker Pool initialized with 3 workers

======================================================================
Scenario 1: Code Review Collaboration
======================================================================

Master: Assigned task 'review_001' to worker 'worker_001'
Worker Agent (worker_001): Processing task 'review_001'
Worker Agent (worker_001): Completed task 'review_001' in 2341ms

Final Statistics:
worker_001: 1 tasks completed, 0 failed, load 0
worker_002: 0 tasks completed, 0 failed, load 0
worker_003: 0 tasks completed, 0 failed, load 0

======================================================================
 Demo completed successfully!
======================================================================

Key Concepts

Master-Worker Pattern

The Master Agent:
  • Receives task requests
  • Analyzes worker capabilities and load
  • Uses LLM to make intelligent routing decisions
  • Tracks task completion and worker statistics
Worker Agents:
  • Specialize in specific domains
  • Process assigned tasks using LLM
  • Report status back to master
  • Maintain performance metrics
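The master formats each worker's specialty into the LLM prompt with `{}`, which requires a `Display` implementation on the specialty type. A sketch of how that might look (the exact variant names beyond the three specialties listed above, and the display strings, are assumptions):

```rust
use std::fmt;

/// Specialties for the three workers in this example.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum WorkerSpecialty {
    Analyst,
    Coder,
    Writer,
}

/// Display impl so the master can interpolate specialties into
/// the routing prompt with `{}`.
impl fmt::Display for WorkerSpecialty {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let s = match self {
            WorkerSpecialty::Analyst => "Analyst",
            WorkerSpecialty::Coder => "Coder",
            WorkerSpecialty::Writer => "Writer",
        };
        write!(f, "{s}")
    }
}
```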

LLM-Based Task Routing

// Master uses LLM to decide worker assignment
let prompt = format!(
    "Available workers:\n{}\n\nTask: {}\n\n\
     Select the best worker based on:\n\
     1. Specialty match\n\
     2. Current workload\n\
     3. Past performance",
    worker_list, task.content
);

let response = llm_client.chat()
    .system("You are an intelligent task dispatcher")
    .user(&prompt)
    .send()
    .await?;

Runtime Management

// Register agents to runtime
let runtime = SimpleRuntime::new();

runtime.register_agent(
    metadata,
    config,
    agent_type
).await?;

// Broadcast events
runtime.broadcast(AgentEvent::Custom(
    "task_request",
    task_data
)).await?;

// Cleanup
runtime.stop_all().await?;

Advanced Features

Workers can be added/removed at runtime:
// Add new worker
master.register_worker(
    "worker_004".to_string(),
    WorkerSpecialty::Analyst
).await;

// Remove worker
master.unregister_worker("worker_001").await;
Master tracks and balances workload:
let worker_id = workers
    .iter()
    .min_by_key(|(_, state)| state.load)
    .map(|(id, _)| id.clone())
    .unwrap();
Tasks are prioritized:
pub enum TaskPriority {
    Low,
    Medium,
    High,
    Critical,
}
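Because the variants are declared from `Low` to `Critical`, deriving `Ord` gives them that ordering for free, and a `BinaryHeap` can then pop the most urgent task first. A sketch (the derives here are an assumption; the example's own enum may differ):

```rust
use std::collections::BinaryHeap;

/// Deriving Ord makes Low < Medium < High < Critical in
/// declaration order, so a max-heap yields Critical first.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum TaskPriority {
    Low,
    Medium,
    High,
    Critical,
}

/// Pop the highest-priority entry from a pending queue.
pub fn next_priority(pending: &mut BinaryHeap<TaskPriority>) -> Option<TaskPriority> {
    pending.pop()
}
```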
Each worker maintains metrics:
pub struct WorkerState {
    pub specialty: WorkerSpecialty,
    pub load: usize,
    pub tasks_completed: usize,
    pub tasks_failed: usize,
}
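These metrics can feed a simple success-rate score that the master could fold into its routing prompt. An illustrative helper over a trimmed copy of `WorkerState` (this function is not part of the example's shown code):

```rust
/// Trimmed WorkerState carrying only the counters this helper needs.
pub struct WorkerState {
    pub load: usize,
    pub tasks_completed: usize,
    pub tasks_failed: usize,
}

/// Success rate in [0.0, 1.0]; workers with no history default to 1.0
/// so brand-new workers are not penalized. (Illustrative helper.)
pub fn success_rate(state: &WorkerState) -> f64 {
    let total = state.tasks_completed + state.tasks_failed;
    if total == 0 {
        1.0
    } else {
        state.tasks_completed as f64 / total as f64
    }
}
```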

Common Use Cases

Code Analysis

Distribute code review across specialized agents

Content Pipeline

Research → Writing → Editing workflow

Customer Support

Route tickets to specialized support agents

Data Processing

Parallel data transformation and analysis

Troubleshooting

Problem: All tasks go to one worker
Solution: Improve the LLM prompt or add load balancing:
if state.load > MAX_LOAD {
    continue; // Skip overloaded workers
}
Problem: Tasks take too long
Solution: Add timeout and retry logic:
tokio::time::timeout(
    Duration::from_secs(30),
    worker.process_task(task)
).await??;
Problem: Messages not reaching agents
Solution: Check runtime registration:
let agents = runtime.list_agents().await;
println!("Registered: {:?}", agents);

Next Steps

Secretary Agent

Add human-in-the-loop workflows

Workflow Orchestration

Build complex multi-stage workflows

Coordination Guide

Learn all coordination patterns

Runtime API

Runtime system reference
