
Overview

MoFA provides a comprehensive set of coordination protocols for multi-agent collaboration. All protocols support optional LLM integration for intelligent decision-making and message processing.

Architecture

┌─────────────────────────────────────────────────────────────┐
│            LLM-Driven Collaboration Architecture            │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐│
│  │ Task Analysis│────▶│Mode Selection│────▶│Protocol Exec ││
│  │    (LLM)     │     │    (LLM)     │     │(LLM-Assisted)││
│  └──────────────┘     └──────────────┘     └──────────────┘│
└─────────────────────────────────────────────────────────────┘

Core Types

CollaborationMode

Defines the communication pattern between agents.
pub enum CollaborationMode {
    RequestResponse,     // One-to-one deterministic tasks
    PublishSubscribe,    // One-to-many broadcast
    Consensus,           // Multi-agent agreement
    Debate,              // Iterative refinement
    Parallel,            // Concurrent execution
    Sequential,          // Pipeline processing
    Custom(String),      // LLM-interpreted custom mode
}

RequestResponse

Synchronous one-to-one communication with explicit return. Best for: data queries, deterministic tasks, simple Q&A.

PublishSubscribe

Asynchronous one-to-many broadcast. Best for: event propagation, creative generation, notifications.

Consensus

Multi-round negotiation and voting. Best for: decision-making, proposal selection, quality review.

Debate

Turn-based discussion with refinement. Best for: code review, solution optimization, dispute resolution.

Parallel

Simultaneous execution with aggregation. Best for: data analysis, batch processing, distributed search.

Sequential

Serial execution of dependent tasks. Best for: pipeline processing, phased workflows.
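Code that routes messages often needs to branch on the mode. The sketch below is a minimal, self-contained illustration: the enum is mirrored locally from the definition above, and is_one_to_many is a hypothetical helper, not part of the MoFA API.

```rust
// Local mirror of CollaborationMode (the real type lives in mofa_foundation).
pub enum CollaborationMode {
    RequestResponse,
    PublishSubscribe,
    Consensus,
    Debate,
    Parallel,
    Sequential,
    Custom(String),
}

/// Hypothetical helper: true for modes that fan out to more than one agent.
pub fn is_one_to_many(mode: &CollaborationMode) -> bool {
    matches!(
        mode,
        CollaborationMode::PublishSubscribe
            | CollaborationMode::Consensus
            | CollaborationMode::Debate
            | CollaborationMode::Parallel
    )
}

fn main() {
    assert!(is_one_to_many(&CollaborationMode::Parallel));
    assert!(!is_one_to_many(&CollaborationMode::RequestResponse));
    // Custom modes are interpreted by the LLM, not by this helper.
    assert!(!is_one_to_many(&CollaborationMode::Custom("swarm".to_string())));
}
```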

CollaborationMessage

Message format for agent communication.
pub struct CollaborationMessage {
    pub id: String,
    pub sender: String,
    pub receiver: Option<String>,
    pub topic: Option<String>,
    pub content: CollaborationContent,
    pub mode: CollaborationMode,
    pub timestamp: u64,
    pub metadata: HashMap<String, String>,
}
| Field | Type | Description |
|---|---|---|
| id | String | Unique message identifier (UUID v7) |
| sender | String, required | Sender agent ID |
| receiver | Option&lt;String&gt; | Target agent ID (None for broadcast) |
| topic | Option&lt;String&gt; | Topic for publish-subscribe mode |
| content | CollaborationContent, required | Message content (LLM-understandable) |
| mode | CollaborationMode, required | Collaboration mode |
| timestamp | u64 | Message timestamp |
| metadata | HashMap&lt;String, String&gt; | Additional metadata |
Builder Methods:
let msg = CollaborationMessage::new(
    "agent_001",
    "Analyze this dataset",
    CollaborationMode::RequestResponse,
)
.with_receiver("agent_002")
.with_topic("data_analysis")
.with_metadata("priority".to_string(), "high".to_string());

CollaborationContent

Message content supporting multiple formats.
pub enum CollaborationContent {
    Text(String),
    Data(serde_json::Value),
    Mixed { text: String, data: serde_json::Value },
    LLMResponse {
        reasoning: String,
        conclusion: String,
        data: serde_json::Value,
    },
}
Text: plain natural language text.
CollaborationContent::Text("Process this data".to_string())

Data: structured JSON data.
CollaborationContent::Data(serde_json::json!({
    "dataset": "sales_2024.csv",
    "operation": "analyze"
}))

Mixed: combined text and data.
CollaborationContent::Mixed {
    text: "Analyze sales data".to_string(),
    data: serde_json::json!({"year": 2024}),
}

LLMResponse: LLM-generated response with reasoning.
CollaborationContent::LLMResponse {
    reasoning: "Analysis shows...".to_string(),
    conclusion: "Recommendation: ...".to_string(),
    data: serde_json::json!({"confidence": 0.95}),
}
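The complete example later on this page calls a to_text() accessor on the content. How such a flattening method might treat each variant is sketched below; the enum is mirrored locally, with serde_json::Value simplified to a raw JSON String so the sketch is self-contained.

```rust
// Local mirror of CollaborationContent; serde_json::Value is replaced
// by a plain JSON string purely for self-containment.
pub enum CollaborationContent {
    Text(String),
    Data(String),
    Mixed { text: String, data: String },
    LLMResponse { reasoning: String, conclusion: String, data: String },
}

impl CollaborationContent {
    /// Collapse any variant into a single human-readable string.
    pub fn to_text(&self) -> String {
        match self {
            Self::Text(t) => t.clone(),
            Self::Data(d) => d.clone(),
            Self::Mixed { text, data } => format!("{text}\n{data}"),
            Self::LLMResponse { reasoning, conclusion, .. } => {
                format!("{reasoning}\n=> {conclusion}")
            }
        }
    }
}

fn main() {
    let c = CollaborationContent::Mixed {
        text: "Analyze sales data".to_string(),
        data: r#"{"year": 2024}"#.to_string(),
    };
    println!("{}", c.to_text());
}
```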

CollaborationResult

Execution result with LLM decision context.
pub struct CollaborationResult {
    pub success: bool,
    pub data: Option<CollaborationContent>,
    pub error: Option<String>,
    pub duration_ms: u64,
    pub participants: Vec<String>,
    pub mode: CollaborationMode,
    pub decision_context: Option<DecisionContext>,
}
| Field | Type | Description |
|---|---|---|
| success | bool | Whether execution succeeded |
| data | Option&lt;CollaborationContent&gt; | Result data |
| error | Option&lt;String&gt; | Error message if failed |
| duration_ms | u64 | Execution time in milliseconds |
| participants | Vec&lt;String&gt; | IDs of participating agents |
| mode | CollaborationMode | Mode used for execution |
| decision_context | Option&lt;DecisionContext&gt; | LLM's decision information |

DecisionContext

Records LLM’s reasoning for protocol selection.
pub struct DecisionContext {
    pub reasoning: String,
    pub task_analysis: String,
    pub alternatives: Vec<CollaborationMode>,
    pub confidence: f32,
}
| Field | Type | Description |
|---|---|---|
| reasoning | String | Why LLM chose this mode |
| task_analysis | String | LLM's analysis of the task |
| alternatives | Vec&lt;CollaborationMode&gt; | Other modes LLM considered |
| confidence | f32 | Confidence level (0.0 - 1.0) |
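One common use of the confidence field is gating: only trust the LLM's choice above a threshold, otherwise fall back to an alternative. The sketch below mirrors the struct locally (alternatives simplified to mode names); pick_mode and the 0.7 threshold are illustrative assumptions, not MoFA API.

```rust
// Local mirror of DecisionContext, with alternatives simplified
// from CollaborationMode to plain mode-name strings.
pub struct DecisionContext {
    pub reasoning: String,
    pub task_analysis: String,
    pub alternatives: Vec<String>,
    pub confidence: f32,
}

/// Hypothetical helper: accept the LLM's chosen mode only above a
/// confidence threshold; otherwise fall back to the first alternative.
pub fn pick_mode<'a>(ctx: &'a DecisionContext, chosen: &'a str, threshold: f32) -> &'a str {
    if ctx.confidence >= threshold {
        chosen
    } else {
        ctx.alternatives.first().map(String::as_str).unwrap_or(chosen)
    }
}

fn main() {
    let ctx = DecisionContext {
        reasoning: "Task has independent sub-parts".to_string(),
        task_analysis: "Batch analysis".to_string(),
        alternatives: vec!["sequential".to_string()],
        confidence: 0.55,
    };
    // Below the threshold, fall back to the listed alternative.
    assert_eq!(pick_mode(&ctx, "parallel", 0.7), "sequential");
}
```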

Protocol Trait

CollaborationProtocol

Core trait all protocols must implement.
#[async_trait]
pub trait CollaborationProtocol: Send + Sync {
    fn name(&self) -> &str;
    fn mode(&self) -> CollaborationMode;
    fn description(&self) -> &str;
    fn applicable_scenarios(&self) -> Vec<String>;
    
    async fn send_message(&self, msg: CollaborationMessage) -> GlobalResult<()>;
    async fn receive_message(&self) -> GlobalResult<Option<CollaborationMessage>>;
    async fn process_message(
        &self,
        msg: CollaborationMessage,
    ) -> GlobalResult<CollaborationResult>;
    
    fn is_available(&self) -> bool { true }
    fn stats(&self) -> HashMap<String, serde_json::Value> { HashMap::new() }
}
| Method | Signature | Description |
|---|---|---|
| name | fn() -&gt; &amp;str, required | Protocol identifier |
| mode | fn() -&gt; CollaborationMode, required | Collaboration mode |
| description | fn() -&gt; &amp;str | Human/LLM-readable description |
| applicable_scenarios | fn() -&gt; Vec&lt;String&gt; | Use cases for LLM to consider |
| send_message | async fn, required | Send collaboration message |
| receive_message | async fn, required | Receive collaboration message |
| process_message | async fn, required | Process message and return result |
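To show the shape a custom protocol takes, here is a deliberately simplified, synchronous sketch: the real trait is async and returns GlobalResult, while this stand-in uses a plain Result and an in-memory queue. Message, Protocol, and EchoProtocol below are all local illustrative types, not MoFA API.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Simplified stand-in for CollaborationMessage.
pub struct Message {
    pub sender: String,
    pub content: String,
}

// Synchronous stand-in for the CollaborationProtocol trait shape.
pub trait Protocol {
    fn name(&self) -> &str;
    fn send_message(&self, msg: Message) -> Result<(), String>;
    fn receive_message(&self) -> Result<Option<Message>, String>;
    // Default method, like is_available() on the real trait.
    fn is_available(&self) -> bool {
        true
    }
}

// Toy implementation: queued messages are handed back in FIFO order.
pub struct EchoProtocol {
    inbox: Mutex<VecDeque<Message>>,
}

impl Protocol for EchoProtocol {
    fn name(&self) -> &str {
        "echo"
    }
    fn send_message(&self, msg: Message) -> Result<(), String> {
        self.inbox.lock().map_err(|e| e.to_string())?.push_back(msg);
        Ok(())
    }
    fn receive_message(&self) -> Result<Option<Message>, String> {
        Ok(self.inbox.lock().map_err(|e| e.to_string())?.pop_front())
    }
}

fn main() -> Result<(), String> {
    let p = EchoProtocol { inbox: Mutex::new(VecDeque::new()) };
    p.send_message(Message {
        sender: "agent_001".to_string(),
        content: "ping".to_string(),
    })?;
    let got = p.receive_message()?.expect("one queued message");
    println!("{} delivered via {}", got.content, p.name());
    Ok(())
}
```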

Protocol Implementations

RequestResponseProtocol

One-to-one synchronous communication.
// Without LLM
let protocol = RequestResponseProtocol::new("agent_001");

// With LLM
let protocol = RequestResponseProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Query user data",
    CollaborationMode::RequestResponse,
).with_receiver("agent_002");

let result = protocol.process_message(msg).await?;
Use Cases:
  • Data queries and retrieval
  • Deterministic task execution
  • Status requests
  • Simple question-answering

PublishSubscribeProtocol

One-to-many asynchronous broadcast.
let protocol = PublishSubscribeProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

// Subscribe to topics
protocol.subscribe("events".to_string()).await?;
protocol.subscribe("alerts".to_string()).await?;

// Publish message
let msg = CollaborationMessage::new(
    "agent_001",
    "System update available",
    CollaborationMode::PublishSubscribe,
).with_topic("events");

protocol.send_message(msg).await?;
Use Cases:
  • Event propagation
  • Creative brainstorming
  • Notification broadcasting
  • Multi-party collaboration

ConsensusProtocol

Multi-agent agreement through negotiation.
let protocol = ConsensusProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Approve this design proposal",
    CollaborationMode::Consensus,
);

let result = protocol.process_message(msg).await?;
Use Cases:
  • Decision-making
  • Voting and evaluation
  • Proposal selection
  • Quality review

DebateProtocol

Iterative refinement through discussion.
let protocol = DebateProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Review this code implementation",
    CollaborationMode::Debate,
);

let result = protocol.process_message(msg).await?;
Use Cases:
  • Code review
  • Solution optimization
  • Dispute resolution
  • Quality improvement

ParallelProtocol

Concurrent task execution.
let protocol = ParallelProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Analyze multiple datasets",
    CollaborationMode::Parallel,
);

let result = protocol.process_message(msg).await?;
Use Cases:
  • Data analysis
  • Batch processing
  • Distributed search
  • Parallel computation

SequentialProtocol

Serial execution of dependent tasks.
let protocol = SequentialProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Process pipeline stages",
    CollaborationMode::Sequential,
);

let result = protocol.process_message(msg).await?;
Use Cases:
  • Pipeline processing
  • Dependent task chains
  • Step-by-step execution
  • Phased workflows

Collaboration Manager

LLMDrivenCollaborationManager

Manages protocol selection and execution.
let manager = LLMDrivenCollaborationManager::new("agent_001");

// Register protocols
manager.register_protocol(Arc::new(
    RequestResponseProtocol::with_llm("agent_001", llm_client.clone())
)).await?;

manager.register_protocol(Arc::new(
    ParallelProtocol::with_llm("agent_001", llm_client.clone())
)).await?;

// Execute with specific protocol
let result = manager.execute_task_with_protocol(
    "request_response",
    "Process data query",
).await?;
new: Create a new manager.
pub fn new(agent_id: impl Into<String>) -> Self

register_protocol: Register a protocol.
pub async fn register_protocol(
    &self,
    protocol: Arc<dyn CollaborationProtocol>,
) -> GlobalResult<()>

execute_task_with_protocol: Execute a task using a specific protocol.
pub async fn execute_task_with_protocol(
    &self,
    protocol_name: &str,
    content: impl Into<CollaborationContent>,
) -> GlobalResult<CollaborationResult>

send_message: Send a collaboration message.
pub async fn send_message(
    &self,
    msg: CollaborationMessage,
) -> GlobalResult<()>

receive_message: Receive a collaboration message.
pub async fn receive_message(
    &self
) -> GlobalResult<Option<CollaborationMessage>>

stats: Get collaboration statistics.
pub async fn stats(&self) -> CollaborationStats

Protocol Registry

ProtocolRegistry

Registry for managing available protocols.
let registry = ProtocolRegistry::new();

// Register protocol
registry.register(Arc::new(protocol)).await?;

// Query protocols
let protocol = registry.get("request_response").await;
let all_protocols = registry.list_all().await;
let names = registry.list_names().await;
let descriptions = registry.get_descriptions().await;
register: Register a protocol.
pub async fn register(
    &self,
    protocol: Arc<dyn CollaborationProtocol>,
) -> GlobalResult<()>

get: Get a protocol by name.
pub async fn get(
    &self,
    name: &str,
) -> Option<Arc<dyn CollaborationProtocol>>

get_descriptions: Get all protocol descriptions for the LLM.
pub async fn get_descriptions(
    &self
) -> HashMap<String, ProtocolDescription>

Statistics

CollaborationStats

Aggregated collaboration metrics.
pub struct CollaborationStats {
    pub total_tasks: u64,
    pub successful_tasks: u64,
    pub failed_tasks: u64,
    pub mode_usage: HashMap<String, u64>,
    pub avg_duration_ms: f64,
    pub llm_decisions: LLMDecisionStats,
}
| Field | Type | Description |
|---|---|---|
| total_tasks | u64 | Total tasks executed |
| successful_tasks | u64 | Successfully completed tasks |
| failed_tasks | u64 | Failed tasks |
| mode_usage | HashMap&lt;String, u64&gt; | Usage count per collaboration mode |
| avg_duration_ms | f64 | Average execution time in milliseconds |
| llm_decisions | LLMDecisionStats | LLM decision statistics |
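A typical consumer derives a success rate from these counters, as the complete example below does. The sketch here mirrors the struct locally (llm_decisions omitted for self-containment); success_rate is a hypothetical helper that guards the empty case before dividing.

```rust
use std::collections::HashMap;

// Local mirror of CollaborationStats, with llm_decisions omitted.
pub struct CollaborationStats {
    pub total_tasks: u64,
    pub successful_tasks: u64,
    pub failed_tasks: u64,
    pub mode_usage: HashMap<String, u64>,
    pub avg_duration_ms: f64,
}

/// Success rate in percent; None until at least one task has run,
/// avoiding a divide-by-zero on a fresh manager.
pub fn success_rate(stats: &CollaborationStats) -> Option<f64> {
    if stats.total_tasks == 0 {
        return None;
    }
    Some(stats.successful_tasks as f64 / stats.total_tasks as f64 * 100.0)
}

fn main() {
    let stats = CollaborationStats {
        total_tasks: 8,
        successful_tasks: 6,
        failed_tasks: 2,
        mode_usage: HashMap::from([("parallel".to_string(), 8)]),
        avg_duration_ms: 120.0,
    };
    assert_eq!(success_rate(&stats), Some(75.0));
}
```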

LLMDecisionStats

LLM-specific decision metrics.
pub struct LLMDecisionStats {
    pub total_decisions: u64,
    pub mode_selections: HashMap<String, u64>,
    pub avg_confidence: f32,
}
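Keeping avg_confidence current as decisions arrive can be done with an incremental mean, avoiding a stored sum. The sketch mirrors the struct locally; record_decision is a hypothetical helper, not part of the MoFA API.

```rust
use std::collections::HashMap;

// Local mirror of LLMDecisionStats.
pub struct LLMDecisionStats {
    pub total_decisions: u64,
    pub mode_selections: HashMap<String, u64>,
    pub avg_confidence: f32,
}

/// Hypothetical helper: fold one new decision into the stats.
pub fn record_decision(stats: &mut LLMDecisionStats, mode: &str, confidence: f32) {
    let n = stats.total_decisions as f32;
    // Incremental mean: new_avg = old_avg + (x - old_avg) / (n + 1).
    stats.avg_confidence += (confidence - stats.avg_confidence) / (n + 1.0);
    stats.total_decisions += 1;
    *stats.mode_selections.entry(mode.to_string()).or_insert(0) += 1;
}

fn main() {
    let mut s = LLMDecisionStats {
        total_decisions: 0,
        mode_selections: HashMap::new(),
        avg_confidence: 0.0,
    };
    record_decision(&mut s, "parallel", 0.8);
    record_decision(&mut s, "consensus", 0.6);
    assert!((s.avg_confidence - 0.7).abs() < 1e-6);
    assert_eq!(s.total_decisions, 2);
}
```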

Complete Example

use mofa_foundation::collaboration::*;
use mofa_foundation::llm::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> GlobalResult<()> {
    // Create LLM client
    let provider = Arc::new(create_openai_provider());
    let llm_client = Arc::new(LLMClient::new(provider));
    
    // Create manager
    let manager = LLMDrivenCollaborationManager::new("agent_001");
    
    // Register protocols with LLM
    manager.register_protocol(Arc::new(
        RequestResponseProtocol::with_llm(
            "agent_001",
            llm_client.clone(),
        )
    )).await?;
    
    manager.register_protocol(Arc::new(
        ParallelProtocol::with_llm(
            "agent_001",
            llm_client.clone(),
        )
    )).await?;
    
    manager.register_protocol(Arc::new(
        ConsensusProtocol::with_llm(
            "agent_001",
            llm_client.clone(),
        )
    )).await?;
    
    // Execute task with specific protocol
    let result = manager.execute_task_with_protocol(
        "request_response",
        CollaborationContent::Mixed {
            text: "Analyze sales data".to_string(),
            data: serde_json::json!({
                "year": 2024,
                "quarter": "Q1"
            }),
        },
    ).await?;
    
    if result.success {
        println!("Success! Duration: {}ms", result.duration_ms);
        if let Some(data) = result.data {
            println!("Result: {}", data.to_text());
        }
        
        // Check LLM decision context
        if let Some(ctx) = result.decision_context {
            println!("LLM reasoning: {}", ctx.reasoning);
            println!("Confidence: {}", ctx.confidence);
        }
    }
    
    // Get statistics
    let stats = manager.stats().await;
    println!("Total tasks: {}", stats.total_tasks);
    println!("Success rate: {:.1}%", 
        (stats.successful_tasks as f64 / stats.total_tasks as f64) * 100.0
    );
    
    Ok(())
}

LLM Integration Helper

LLMProtocolHelper

Helper for integrating LLM with protocols.
let helper = LLMProtocolHelper::new("agent_001")
    .with_llm(llm_client.clone())
    .with_use_llm(true);

let content = helper.process_with_llm(
    &msg,
    "You are a collaboration agent. Process this message.",
).await?;

Mode Selection Guide

Use this guide to select appropriate modes:
| Task Type | Recommended Mode | Reason |
|---|---|---|
| Data Query | RequestResponse | Deterministic, needs explicit answer |
| Brainstorming | PublishSubscribe | Multiple perspectives needed |
| Approval | Consensus | Agreement required |
| Code Review | Debate | Iterative refinement |
| Analysis | Parallel | Independent sub-tasks |
| Pipeline | Sequential | Dependencies between steps |
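The guide can be approximated in code as a keyword heuristic, useful as a deterministic fallback when no LLM is attached. The recommend_mode helper below is an illustrative sketch, not MoFA API; in practice the LLM makes this decision, as described earlier.

```rust
/// Hypothetical fallback: map a task description to a mode name
/// using the keyword heuristics from the guide above.
pub fn recommend_mode(task: &str) -> &'static str {
    let t = task.to_lowercase();
    if t.contains("query") {
        "RequestResponse"
    } else if t.contains("brainstorm") {
        "PublishSubscribe"
    } else if t.contains("approv") {
        "Consensus"
    } else if t.contains("review") {
        "Debate"
    } else if t.contains("analy") {
        "Parallel"
    } else if t.contains("pipeline") {
        "Sequential"
    } else {
        // Unknown task types fall through to an LLM-interpreted mode.
        "Custom"
    }
}

fn main() {
    assert_eq!(recommend_mode("Approve the design proposal"), "Consensus");
    assert_eq!(recommend_mode("Run the build pipeline"), "Sequential");
    assert_eq!(recommend_mode("Compose a haiku"), "Custom");
}
```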

Source Reference

  • Protocol implementations: ~/workspace/source/crates/mofa-foundation/src/collaboration/mod.rs
  • Type definitions: ~/workspace/source/crates/mofa-foundation/src/collaboration/types.rs
