Overview
MoFA provides a comprehensive set of coordination protocols for multi-agent collaboration. All protocols support optional LLM integration for intelligent decision-making and message processing.
Architecture
┌─────────────────────────────────────────────────────────────┐
│            LLM-Driven Collaboration Architecture            │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐ │
│  │ Task Analysis│────▶│Mode Selection│────▶│Protocol Exec │ │
│  │    (LLM)     │     │    (LLM)     │     │(LLM-Assisted)│ │
│  └──────────────┘     └──────────────┘     └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
Core Types
CollaborationMode
Defines the communication pattern between agents.
pub enum CollaborationMode {
    RequestResponse,   // One-to-one deterministic tasks
    PublishSubscribe,  // One-to-many broadcast
    Consensus,         // Multi-agent agreement
    Debate,            // Iterative refinement
    Parallel,          // Concurrent execution
    Sequential,        // Pipeline processing
    Custom(String),    // LLM-interpreted custom mode
}
RequestResponse: Synchronous one-to-one communication with an explicit return. Best for: data queries, deterministic tasks, simple Q&A.
PublishSubscribe: Asynchronous one-to-many broadcast. Best for: event propagation, creative generation, notifications.
Consensus: Multi-round negotiation and voting. Best for: decision-making, proposal selection, quality review.
Debate: Turn-based discussion with refinement. Best for: code review, solution optimization, dispute resolution.
Parallel: Simultaneous execution with aggregation. Best for: data analysis, batch processing, distributed search.
Sequential: Serial execution of dependent tasks. Best for: pipeline processing, phased workflows.
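The main split among these modes is single-receiver versus group fan-out. The sketch below uses a local mirror of CollaborationMode (not the crate's real enum, which also carries a Custom(String) variant) purely to illustrate that classification:

```rust
// Local mirror of CollaborationMode, for illustration only; the real enum
// lives in mofa_foundation::collaboration and also has a Custom(String) variant.
#[derive(Debug, PartialEq)]
enum Mode {
    RequestResponse,
    PublishSubscribe,
    Consensus,
    Debate,
    Parallel,
    Sequential,
}

// Modes that fan a message out to a group of agents rather than a single
// receiver. This is a rough classification used here only to show the split.
fn is_group_mode(mode: &Mode) -> bool {
    matches!(
        mode,
        Mode::PublishSubscribe | Mode::Consensus | Mode::Debate | Mode::Parallel
    )
}

fn main() {
    assert!(!is_group_mode(&Mode::RequestResponse));
    assert!(!is_group_mode(&Mode::Sequential));
    assert!(is_group_mode(&Mode::Consensus));
    println!("ok");
}
```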
CollaborationMessage
Message format for agent communication.
pub struct CollaborationMessage {
    pub id: String,
    pub sender: String,
    pub receiver: Option<String>,
    pub topic: Option<String>,
    pub content: CollaborationContent,
    pub mode: CollaborationMode,
    pub timestamp: u64,
    pub metadata: HashMap<String, String>,
}
Fields:
id (String): Unique message identifier (UUID v7)
receiver (Option<String>): Target agent ID (None for broadcast)
topic (Option<String>): Topic for publish-subscribe mode
content (CollaborationContent, required): Message content (LLM-understandable)
mode (CollaborationMode, required): Collaboration mode
Builder Methods:
let msg = CollaborationMessage::new(
    "agent_001",
    "Analyze this dataset",
    CollaborationMode::RequestResponse,
)
.with_receiver("agent_002")
.with_topic("data_analysis")
.with_metadata("priority".to_string(), "high".to_string());
CollaborationContent
Message content supporting multiple formats.
pub enum CollaborationContent {
    Text(String),
    Data(serde_json::Value),
    Mixed { text: String, data: serde_json::Value },
    LLMResponse {
        reasoning: String,
        conclusion: String,
        data: serde_json::Value,
    },
}

Text: Plain natural-language text.
CollaborationContent::Text("Process this data".to_string())

Data: Structured JSON data.
CollaborationContent::Data(serde_json::json!({
    "dataset": "sales_2024.csv",
    "operation": "analyze"
}))

Mixed: Combined text and data.
CollaborationContent::Mixed {
    text: "Analyze sales data".to_string(),
    data: serde_json::json!({ "year": 2024 }),
}

LLMResponse: LLM-generated response with reasoning.
CollaborationContent::LLMResponse {
    reasoning: "Analysis shows...".to_string(),
    conclusion: "Recommendation: ...".to_string(),
    data: serde_json::json!({ "confidence": 0.95 }),
}
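The later Complete Example flattens a result's content to a string for printing. One plausible way such a flattening could work is sketched below against a simplified local mirror of CollaborationContent (plain strings instead of serde_json::Value); the crate's own conversion may differ:

```rust
// Simplified local mirror of CollaborationContent, for illustration only.
enum Content {
    Text(String),
    Data(String),
    Mixed { text: String, data: String },
    LLMResponse { reasoning: String, conclusion: String, data: String },
}

// Flatten any variant to a single human-readable string.
fn to_text(content: &Content) -> String {
    match content {
        Content::Text(t) => t.clone(),
        Content::Data(d) => d.clone(),
        Content::Mixed { text, data } => format!("{} ({})", text, data),
        Content::LLMResponse { reasoning, conclusion, data } => {
            format!("{}\nConclusion: {} ({})", reasoning, conclusion, data)
        }
    }
}

fn main() {
    let c = Content::Mixed {
        text: "Analyze sales data".to_string(),
        data: "{\"year\": 2024}".to_string(),
    };
    assert_eq!(to_text(&c), "Analyze sales data ({\"year\": 2024})");
    println!("ok");
}
```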
CollaborationResult
Execution result with LLM decision context.
pub struct CollaborationResult {
    pub success: bool,
    pub data: Option<CollaborationContent>,
    pub error: Option<String>,
    pub duration_ms: u64,
    pub participants: Vec<String>,
    pub mode: CollaborationMode,
    pub decision_context: Option<DecisionContext>,
}
Fields:
success (bool): Whether execution succeeded
data (Option<CollaborationContent>): Result data
duration_ms (u64): Execution time in milliseconds
participants (Vec<String>): IDs of participating agents
decision_context (Option<DecisionContext>): LLM's decision information
DecisionContext
Records LLM’s reasoning for protocol selection.
pub struct DecisionContext {
    pub reasoning: String,
    pub task_analysis: String,
    pub alternatives: Vec<CollaborationMode>,
    pub confidence: f32,
}

Fields:
task_analysis (String): LLM's analysis of the task
alternatives (Vec<CollaborationMode>): Other modes the LLM considered
confidence (f32): Confidence level (0.0 - 1.0)
Protocol Trait
CollaborationProtocol
Core trait all protocols must implement.
#[async_trait]
pub trait CollaborationProtocol: Send + Sync {
    fn name(&self) -> &str;
    fn mode(&self) -> CollaborationMode;
    fn description(&self) -> &str;
    fn applicable_scenarios(&self) -> Vec<String>;
    async fn send_message(&self, msg: CollaborationMessage) -> GlobalResult<()>;
    async fn receive_message(&self) -> GlobalResult<Option<CollaborationMessage>>;
    async fn process_message(
        &self,
        msg: CollaborationMessage,
    ) -> GlobalResult<CollaborationResult>;
    fn is_available(&self) -> bool { true }
    fn stats(&self) -> HashMap<String, serde_json::Value> { HashMap::new() }
}
Required methods:
mode(): Collaboration mode this protocol implements
description(): Human/LLM-readable description
applicable_scenarios(): Use cases for the LLM to consider
send_message(): Send a collaboration message
receive_message(): Receive a collaboration message
process_message(): Process a message and return the result
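The shape of a protocol implementation can be sketched with a synchronous, standard-library-only toy (no async_trait, no GlobalResult, a mutable receiver instead of &self); the names mirror the trait above, but this is not the crate's real definition:

```rust
use std::collections::VecDeque;

// Toy stand-ins for the crate's message and result types.
struct Message {
    sender: String,
    body: String,
}

// Simplified, synchronous sketch of the CollaborationProtocol shape.
trait Protocol {
    fn name(&self) -> &str;
    fn send_message(&mut self, msg: Message);
    fn receive_message(&mut self) -> Option<Message>;
    fn is_available(&self) -> bool { true } // default, as in the real trait
}

// Minimal in-memory protocol backed by a FIFO queue.
struct InMemoryProtocol {
    queue: VecDeque<Message>,
}

impl Protocol for InMemoryProtocol {
    fn name(&self) -> &str { "request_response" }
    fn send_message(&mut self, msg: Message) { self.queue.push_back(msg); }
    fn receive_message(&mut self) -> Option<Message> { self.queue.pop_front() }
}

fn main() {
    let mut p = InMemoryProtocol { queue: VecDeque::new() };
    p.send_message(Message { sender: "agent_001".into(), body: "ping".into() });
    let received = p.receive_message().unwrap();
    assert_eq!(received.sender, "agent_001");
    assert_eq!(received.body, "ping");
    assert!(p.is_available());
    println!("ok");
}
```

The real trait takes &self and returns futures; the FIFO queue here only illustrates the send/receive contract.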
Protocol Implementations
RequestResponseProtocol
One-to-one synchronous communication.
// Without LLM
let protocol = RequestResponseProtocol::new("agent_001");

// With LLM
let protocol = RequestResponseProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Query user data",
    CollaborationMode::RequestResponse,
).with_receiver("agent_002");

let result = protocol.process_message(msg).await?;
Use Cases:
Data queries and retrieval
Deterministic task execution
Status requests
Simple question-answering
PublishSubscribeProtocol
One-to-many asynchronous broadcast.
let protocol = PublishSubscribeProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

// Subscribe to topics
protocol.subscribe("events".to_string()).await?;
protocol.subscribe("alerts".to_string()).await?;

// Publish a message
let msg = CollaborationMessage::new(
    "agent_001",
    "System update available",
    CollaborationMode::PublishSubscribe,
).with_topic("events");

protocol.send_message(msg).await?;
Use Cases:
Event propagation
Creative brainstorming
Notification broadcasting
Multi-party collaboration
ConsensusProtocol
Multi-agent agreement through negotiation.
let protocol = ConsensusProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Approve this design proposal",
    CollaborationMode::Consensus,
);

let result = protocol.process_message(msg).await?;
Use Cases:
Decision-making
Voting and evaluation
Proposal selection
Quality review
DebateProtocol
Iterative refinement through discussion.
let protocol = DebateProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Review this code implementation",
    CollaborationMode::Debate,
);

let result = protocol.process_message(msg).await?;
Use Cases:
Code review
Solution optimization
Dispute resolution
Quality improvement
ParallelProtocol
Concurrent task execution.
let protocol = ParallelProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Analyze multiple datasets",
    CollaborationMode::Parallel,
);

let result = protocol.process_message(msg).await?;
Use Cases:
Data analysis
Batch processing
Distributed search
Parallel computation
SequentialProtocol
Serial execution of dependent tasks.
let protocol = SequentialProtocol::with_llm(
    "agent_001",
    llm_client.clone(),
);

let msg = CollaborationMessage::new(
    "agent_001",
    "Process pipeline stages",
    CollaborationMode::Sequential,
);

let result = protocol.process_message(msg).await?;
Use Cases:
Pipeline processing
Dependent task chains
Step-by-step execution
Phased workflows
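The data flow of sequential mode, where each stage consumes the previous stage's output, amounts to a left fold over the stage list. This is only an illustration of the pipeline semantics with plain functions, not the crate's implementation:

```rust
// A pipeline is a left fold: each stage transforms the accumulator
// produced by the stage before it.
fn run_pipeline(input: i64, stages: &[fn(i64) -> i64]) -> i64 {
    stages.iter().fold(input, |acc, stage| stage(acc))
}

fn double(x: i64) -> i64 { x * 2 }
fn add_one(x: i64) -> i64 { x + 1 }

fn main() {
    // Order matters: double then add_one gives (3 * 2) + 1 = 7,
    // while the reverse order gives (3 + 1) * 2 = 8.
    assert_eq!(run_pipeline(3, &[double, add_one]), 7);
    assert_eq!(run_pipeline(3, &[add_one, double]), 8);
    println!("ok");
}
```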
Collaboration Manager
LLMDrivenCollaborationManager
Manages protocol selection and execution.
let manager = LLMDrivenCollaborationManager::new("agent_001");

// Register protocols
manager.register_protocol(Arc::new(
    RequestResponseProtocol::with_llm("agent_001", llm_client.clone())
)).await?;
manager.register_protocol(Arc::new(
    ParallelProtocol::with_llm("agent_001", llm_client.clone())
)).await?;

// Execute with a specific protocol
let result = manager.execute_task_with_protocol(
    "request_response",
    "Process data query",
).await?;
new: Create a new manager.
pub fn new(agent_id: impl Into<String>) -> Self

register_protocol: Register a protocol.
pub async fn register_protocol(
    &self,
    protocol: Arc<dyn CollaborationProtocol>,
) -> GlobalResult<()>

execute_task_with_protocol: Execute a task using a specific protocol.
pub async fn execute_task_with_protocol(
    &self,
    protocol_name: &str,
    content: impl Into<CollaborationContent>,
) -> GlobalResult<CollaborationResult>

send_message: Send a collaboration message.
pub async fn send_message(&self, msg: CollaborationMessage) -> GlobalResult<()>

receive_message: Receive a collaboration message.
pub async fn receive_message(&self) -> GlobalResult<Option<CollaborationMessage>>

stats: Get collaboration statistics.
pub async fn stats(&self) -> CollaborationStats
Protocol Registry
ProtocolRegistry
Registry for managing available protocols.
let registry = ProtocolRegistry::new();

// Register a protocol
registry.register(Arc::new(protocol)).await?;

// Query protocols
let protocol = registry.get("request_response").await;
let all_protocols = registry.list_all().await;
let names = registry.list_names().await;
let descriptions = registry.get_descriptions().await;
register: Register a protocol.
pub async fn register(
    &self,
    protocol: Arc<dyn CollaborationProtocol>,
) -> GlobalResult<()>

get: Get a protocol by name.
pub async fn get(&self, name: &str) -> Option<Arc<dyn CollaborationProtocol>>

get_descriptions: Get all protocol descriptions for the LLM.
pub async fn get_descriptions(&self) -> HashMap<String, ProtocolDescription>
Statistics
CollaborationStats
Aggregated collaboration metrics.
pub struct CollaborationStats {
    pub total_tasks: u64,
    pub successful_tasks: u64,
    pub failed_tasks: u64,
    pub mode_usage: HashMap<String, u64>,
    pub avg_duration_ms: f64,
    pub llm_decisions: LLMDecisionStats,
}

Fields:
successful_tasks (u64): Successfully completed tasks
mode_usage (HashMap<String, u64>): Usage count per collaboration mode
LLMDecisionStats
LLM-specific decision metrics.
pub struct LLMDecisionStats {
    pub total_decisions: u64,
    pub mode_selections: HashMap<String, u64>,
    pub avg_confidence: f32,
}
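The derived numbers in these stats reduce to simple running arithmetic. A sketch with local helper functions (these are illustrative, not crate API): a division-guarded success rate, and the standard incremental update an implementation might use for avg_confidence:

```rust
// Success rate as a percentage; returns 0.0 when no tasks have run yet,
// avoiding division by zero.
fn success_rate(successful: u64, total: u64) -> f64 {
    if total == 0 {
        0.0
    } else {
        successful as f64 / total as f64 * 100.0
    }
}

// Incremental average after the n-th sample:
// new_avg = old_avg + (sample - old_avg) / n
fn update_avg_confidence(old_avg: f32, n: u64, sample: f32) -> f32 {
    old_avg + (sample - old_avg) / n as f32
}

fn main() {
    assert_eq!(success_rate(3, 4), 75.0);
    assert_eq!(success_rate(0, 0), 0.0);
    // Average of [0.8] after adding 0.6 as the 2nd sample is 0.7.
    assert!((update_avg_confidence(0.8, 2, 0.6) - 0.7).abs() < 1e-6);
    println!("ok");
}
```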
Complete Example
use mofa_foundation::collaboration::*;
use mofa_foundation::llm::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> GlobalResult<()> {
    // Create LLM client
    let provider = Arc::new(create_openai_provider());
    let llm_client = Arc::new(LLMClient::new(provider));

    // Create manager
    let manager = LLMDrivenCollaborationManager::new("agent_001");

    // Register protocols with LLM
    manager.register_protocol(Arc::new(
        RequestResponseProtocol::with_llm("agent_001", llm_client.clone())
    )).await?;
    manager.register_protocol(Arc::new(
        ParallelProtocol::with_llm("agent_001", llm_client.clone())
    )).await?;
    manager.register_protocol(Arc::new(
        ConsensusProtocol::with_llm("agent_001", llm_client.clone())
    )).await?;

    // Execute a task with a specific protocol
    let result = manager.execute_task_with_protocol(
        "request_response",
        CollaborationContent::Mixed {
            text: "Analyze sales data".to_string(),
            data: serde_json::json!({
                "year": 2024,
                "quarter": "Q1"
            }),
        },
    ).await?;

    if result.success {
        println!("Success! Duration: {}ms", result.duration_ms);
        if let Some(data) = result.data {
            println!("Result: {}", data.to_text());
        }
        // Check LLM decision context
        if let Some(ctx) = result.decision_context {
            println!("LLM reasoning: {}", ctx.reasoning);
            println!("Confidence: {}", ctx.confidence);
        }
    }

    // Get statistics
    let stats = manager.stats().await;
    println!("Total tasks: {}", stats.total_tasks);
    println!("Success rate: {:.1}%",
        (stats.successful_tasks as f64 / stats.total_tasks as f64) * 100.0
    );

    Ok(())
}
LLM Integration Helper
LLMProtocolHelper
Helper for integrating LLM with protocols.
let helper = LLMProtocolHelper::new("agent_001")
    .with_llm(llm_client.clone())
    .with_use_llm(true);

let content = helper.process_with_llm(
    &msg,
    "You are a collaboration agent. Process this message.",
).await?;
Mode Selection Guide
Use this guide to select an appropriate mode:

Task Type      | Recommended Mode | Reason
---------------|------------------|--------------------------------------
Data Query     | RequestResponse  | Deterministic, needs explicit answer
Brainstorming  | PublishSubscribe | Multiple perspectives needed
Approval       | Consensus        | Agreement required
Code Review    | Debate           | Iterative refinement
Analysis       | Parallel         | Independent sub-tasks
Pipeline       | Sequential       | Dependencies between steps
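The guidance above can be encoded as a simple lookup; the task-type keys below are illustrative labels, not identifiers defined by the crate:

```rust
// The mode-selection table as a lookup function. Unknown task types
// return None, where a caller might defer to LLM-driven selection.
fn recommend_mode(task_type: &str) -> Option<&'static str> {
    match task_type {
        "data_query" => Some("RequestResponse"),     // deterministic, explicit answer
        "brainstorming" => Some("PublishSubscribe"), // multiple perspectives
        "approval" => Some("Consensus"),             // agreement required
        "code_review" => Some("Debate"),             // iterative refinement
        "analysis" => Some("Parallel"),              // independent sub-tasks
        "pipeline" => Some("Sequential"),            // dependent steps
        _ => None,
    }
}

fn main() {
    assert_eq!(recommend_mode("code_review"), Some("Debate"));
    assert_eq!(recommend_mode("data_query"), Some("RequestResponse"));
    assert_eq!(recommend_mode("something_else"), None);
    println!("ok");
}
```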
Source Reference
Protocol implementations: ~/workspace/source/crates/mofa-foundation/src/collaboration/mod.rs
Type definitions: ~/workspace/source/crates/mofa-foundation/src/collaboration/types.rs