MoFA Agent Framework
The first production-grade framework to achieve “write once, run everywhere” across languages, built for extreme performance, boundless extensibility, and runtime programmability.
What is MoFA?
MoFA (Modular Framework for Agents) is an AI agent framework built in Rust that combines high performance with dynamic flexibility. Through its microkernel architecture and dual-layer plugin system, MoFA balances raw performance with runtime adaptability.
Quick start
Get from zero to a running agent in under 10 minutes
Installation
Install MoFA and set up your development environment
Architecture
Learn about the microkernel and dual-layer plugin system
Examples
Browse 27+ ready-to-run examples
What sets MoFA apart
Extreme performance
Built in Rust with zero-cost abstractions and memory safety without garbage collection. Substantially faster than Python-based frameworks on compute-bound paths.
Write once, run everywhere
Auto-generated bindings for Python, Java, Go, Kotlin, and Swift via UniFFI. Call Rust core logic natively from any supported language with near-zero overhead.
Dual-layer plugins
Combine compile-time Rust/WASM plugins for extreme performance with runtime Rhai scripts for hot-reloadable business logic. No recompilation needed for updates.
Runtime programmability
Embedded Rhai scripting engine enables hot-reload of business logic, runtime configuration, rule adjustments, and user-defined extensions on the fly.
Microkernel architecture
Clean separation of concerns with a minimal, stable kernel and modular components. Easy to extend and maintain.
Distributed by nature
Built on Dora-rs for distributed dataflow. Seamless cross-process, cross-machine agent communication. Edge computing ready.
Core features
Multi-agent coordination
MoFA supports 7 LLM-driven collaboration patterns:
- Request-Response: One-to-one deterministic tasks with synchronous replies
- Publish-Subscribe: One-to-many broadcast tasks with multiple receivers
- Consensus: Multi-round negotiation and voting for decision-making
- Debate: Multi-agent alternating discussion for quality improvement
- Parallel: Simultaneous execution with result aggregation
- Sequential: Pipeline execution where output flows to the next agent
- Custom: User-defined modes interpreted by the LLM
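The Sequential pattern above can be sketched as a minimal pipeline where each agent's output feeds the next agent. The `Agent` struct and `run_sequential` function below are illustrative assumptions for demonstration, not MoFA's actual API:

```rust
// Hypothetical sketch of the Sequential collaboration pattern: each agent's
// output becomes the next agent's input. These names are illustrative only.

/// A minimal "agent": a name plus a pure transformation over a text payload.
struct Agent {
    name: &'static str,
    step: fn(String) -> String,
}

/// Pipeline execution: feed the task through each agent in order.
fn run_sequential(agents: &[Agent], task: &str) -> String {
    let mut payload = task.to_string();
    for agent in agents {
        payload = (agent.step)(payload);
        println!("{} -> {payload}", agent.name);
    }
    payload
}

fn main() {
    let pipeline = [
        Agent { name: "researcher", step: |t| format!("{t} + facts") },
        Agent { name: "writer", step: |t| format!("draft({t})") },
    ];
    let result = run_sequential(&pipeline, "topic");
    println!("{result}"); // draft(topic + facts)
}
```

The Parallel pattern would instead fan the same task out to every agent and aggregate the results; the framework's real coordination runs over Dora-rs dataflow rather than an in-process loop.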
ReAct pattern
Implement agents that combine reasoning and acting through a “Think-Act-Observe” cycle. Built-in tools cover web search, calculations, string manipulation, JSON processing, and datetime operations.
Secretary mode
Human-in-the-loop workflow management with 5 phases:
- Receive ideas → Record todos
- Clarify requirements → Project documents
- Schedule dispatch → Call execution agents
- Monitor feedback → Push key decisions to humans
- Acceptance report → Update todos
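The five phases above form a cycle, which can be sketched as an explicit state machine. The `Phase` enum and `next` function are assumptions for illustration; they are not MoFA's actual types:

```rust
// Hedged sketch: the five secretary-mode phases as a state machine.
// Phase names mirror the docs; the types themselves are illustrative.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Phase {
    ReceiveIdeas,
    ClarifyRequirements,
    ScheduleDispatch,
    MonitorFeedback,
    AcceptanceReport,
}

/// Advance to the next phase; the final phase loops back to receiving ideas.
fn next(phase: Phase) -> Phase {
    use Phase::*;
    match phase {
        ReceiveIdeas => ClarifyRequirements,
        ClarifyRequirements => ScheduleDispatch,
        ScheduleDispatch => MonitorFeedback,
        MonitorFeedback => AcceptanceReport,
        AcceptanceReport => ReceiveIdeas,
    }
}

fn main() {
    let mut phase = Phase::ReceiveIdeas;
    for _ in 0..5 {
        println!("{phase:?}");
        phase = next(phase);
    }
    assert_eq!(phase, Phase::ReceiveIdeas); // one full cycle completed
}
```

Modeling the workflow as an enum makes the human-in-the-loop checkpoints explicit: the MonitorFeedback phase is where key decisions get pushed to humans before the cycle advances.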
LLM integration
Standardized LLM abstraction layer with built-in support for:
- OpenAI (GPT-4, GPT-4o)
- Anthropic (Claude)
- Google Gemini
- Any OpenAI-compatible endpoint (Ollama, vLLM, OpenRouter)
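A provider-agnostic abstraction layer typically looks like a single trait that every backend implements. The `LlmProvider` trait and `MockProvider` below are assumptions for demonstration, not MoFA's real types; an actual backend would issue an HTTP request to an OpenAI-compatible chat-completions endpoint instead of echoing:

```rust
// Illustrative sketch of a provider-agnostic LLM abstraction. The trait and
// MockProvider are hypothetical; a real impl would call a remote endpoint.

trait LlmProvider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Stand-in provider that echoes a canned reply, useful for offline tests.
struct MockProvider;

impl LlmProvider for MockProvider {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

/// Caller code depends only on the trait, so swapping OpenAI for Claude,
/// Gemini, or a local Ollama endpoint is a construction-time choice.
fn summarize(llm: &dyn LlmProvider, text: &str) -> Result<String, String> {
    llm.complete(&format!("Summarize: {text}"))
}

fn main() {
    let reply = summarize(&MockProvider, "MoFA docs").unwrap();
    println!("{reply}"); // echo: Summarize: MoFA docs
}
```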
Persistence layer
Multiple database backends:
- PostgreSQL
- MySQL
- SQLite
- In-memory storage
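Supporting multiple backends behind one interface usually means a storage trait with one implementation per database. The `Store` trait and `MemoryStore` below are an illustrative sketch, not MoFA's persistence API; the PostgreSQL, MySQL, and SQLite backends would implement the same trait:

```rust
use std::collections::HashMap;

// Sketch of a pluggable persistence layer: one key-value trait, multiple
// backends. Only the in-memory backend is shown; names are illustrative.

trait Store {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

/// In-memory backend, handy for tests and ephemeral agents.
struct MemoryStore {
    map: HashMap<String, String>,
}

impl MemoryStore {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }
}

impl Store for MemoryStore {
    fn put(&mut self, key: &str, value: &str) {
        self.map.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.map.get(key).cloned()
    }
}

fn main() {
    let mut store = MemoryStore::new();
    store.put("session:1", "hello");
    assert_eq!(store.get("session:1").as_deref(), Some("hello"));
    assert_eq!(store.get("missing"), None);
}
```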
Architecture overview
MoFA uses a layered microkernel architecture.
Dual-layer plugin advantages
Compile-time plugins (Rust/WASM):
- Extreme performance, zero runtime overhead
- Type safety, compile-time error checking
- Support complex system calls and native integration
- WASM sandbox provides secure isolation
Runtime plugins (Rhai scripts):
- No recompilation needed, instant effect
- Business logic hot updates
- User-defined extensions
- Secure sandbox execution with configurable resource limits
Recommended usage:
- Use Rust plugins for performance-critical paths (LLM inference, data processing)
- Use Rhai scripts for business logic (rule engines, workflow orchestration)
- Seamless interoperability between both layers
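The way the two plugin layers can share one interface can be sketched with a single trait: compiled plugins bake their behavior in at build time, while the runtime layer holds behavior as replaceable data. Here the script layer is faked with a swappable closure rather than a real Rhai engine so the example stays dependency-free; all names are illustrative, not MoFA's API:

```rust
// Hedged sketch of the dual-layer plugin idea. CompiledDouble stands in for
// a Rust/WASM plugin; ScriptedPlugin stands in for a hot-reloaded Rhai
// script, with a closure in place of a real scripting engine.

trait Plugin {
    fn call(&self, input: i64) -> i64;
}

/// Compile-time layer: behavior fixed at build time, maximum performance.
struct CompiledDouble;

impl Plugin for CompiledDouble {
    fn call(&self, input: i64) -> i64 {
        input * 2
    }
}

/// Runtime layer: behavior held as data, replaceable without recompiling.
struct ScriptedPlugin {
    body: Box<dyn Fn(i64) -> i64>,
}

impl Plugin for ScriptedPlugin {
    fn call(&self, input: i64) -> i64 {
        (self.body)(input)
    }
}

fn main() {
    let compiled = CompiledDouble;
    let mut scripted = ScriptedPlugin { body: Box::new(|x| x + 1) };
    println!("{}", compiled.call(10)); // 20
    println!("{}", scripted.call(10)); // 11
    // "Hot reload": swap the scripted behavior at runtime, no recompile.
    scripted.body = Box::new(|x| x - 1);
    println!("{}", scripted.call(10)); // 9
}
```

Because both layers implement the same trait, the kernel can dispatch to either without knowing which layer it is talking to, which is what makes the interoperability between them seamless.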
Why Rust?
Memory safety
No garbage collection, no null pointer exceptions, no data races
Zero-cost abstractions
High-level features with C-like performance
Fearless concurrency
Actor-model concurrency via Ractor for high-load workloads
Use cases
- Research assistants
- Business automation
- Multi-agent systems
- Edge computing
- Production applications
Build agents that search the web, query Wikipedia, perform calculations, and synthesize information from multiple sources.
Community and support
GitHub
Star the repo, report issues, and contribute
Discord
Join our community for help and discussions
GSoC 2026
Participate in Google Summer of Code with MoFA
API docs
Browse the complete Rust API documentation
Next steps
Install MoFA
Set up Rust and install the MoFA framework
Installation guide →
Build your first agent
Follow the quick start guide to create a working agent
Quick start →
Explore examples
Browse 27+ examples covering all framework features
View examples →
Join the community
Get help and share your projects
Join Discord →
License: MoFA is open source under the Apache License 2.0