# Architecture

Motia consists of two primary components that work together to provide a distributed backend orchestration system:

- Engine: The central orchestration server that coordinates workers, manages state, and routes invocations
- Workers: SDK-based processes that register functions and respond to invocations
## Core Components

### Engine

The engine is a Rust-based binary that provides:

- WebSocket Server (port 49134): Accepts worker connections and manages the function registry
- REST API (port 3111): Exposes registered functions as HTTP endpoints
- Stream API (port 3112): Real-time state synchronization over WebSocket
- Metrics (port 9464): Prometheus-compatible metrics endpoint
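Since the metrics endpoint is Prometheus-compatible, a standard scrape job is enough to collect it. A minimal sketch (the job name and target host are placeholders; only the port comes from the list above):

```yaml
# prometheus.yml fragment — scrape the engine's metrics endpoint
scrape_configs:
  - job_name: motia-engine            # placeholder job name
    static_configs:
      - targets: ["localhost:9464"]   # engine metrics port
```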
### Workers

Workers connect to the engine via WebSocket and:

- Register functions with unique IDs
- Register triggers (HTTP routes, cron schedules, queue subscriptions)
- Execute function invocations and return results

Workers can be written in any language that has an SDK (Node.js, Python, Rust).
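Regardless of language, the worker side boils down to a registry of functions keyed by unique ID plus a dispatcher that executes invocations and returns results. A minimal in-memory sketch of that pattern (the real SDKs also handle the WebSocket transport; every name below is illustrative, not the actual Motia SDK API):

```typescript
// Worker-side pattern sketch: a registry of functions keyed by unique ID,
// plus a dispatcher that runs an invocation and returns its result.
// Names here are illustrative, not the actual Motia SDK API.

type Handler = (input: unknown) => unknown;

const registry = new Map<string, Handler>();

// Register a function under a unique ID, as a worker does on connect.
function registerFunction(id: string, fn: Handler): void {
  if (registry.has(id)) throw new Error(`duplicate function id: ${id}`);
  registry.set(id, fn);
}

// Execute an invocation routed to this worker and return the result.
function invoke(id: string, input: unknown): unknown {
  const fn = registry.get(id);
  if (fn === undefined) throw new Error(`unknown function: ${id}`);
  return fn(input);
}

registerFunction("greet", (input) => `Hello, ${(input as { name: string }).name}!`);

console.log(invoke("greet", { name: "Motia" })); // Hello, Motia!
```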
## Deployment Models

### Development

For local development, run the engine directly.
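A sketch of a local session, assuming the engine ships as a single binary (the binary name is an assumption; the ports are the defaults listed above, and `/metrics` is the standard Prometheus path):

```
# Start the engine (binary name assumed — check the CLI reference)
./motia-engine

# In another terminal: the REST API listens on port 3111,
# and metrics are exposed on port 9464
curl http://localhost:3111/
curl http://localhost:9464/metrics
```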
### Docker

Use Docker for containerized deployments.
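A Compose sketch exposing the four engine ports (the image name is an assumption; the port mappings come from the component list above):

```yaml
# docker-compose.yml sketch — image name is assumed, not the published one
services:
  engine:
    image: motiadev/motia-engine:latest
    ports:
      - "49134:49134"  # worker WebSocket
      - "3111:3111"    # REST API
      - "3112:3112"    # Stream API
      - "9464:9464"    # Prometheus metrics
```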
### Production

Production deployments should include:

- Reverse proxy (Caddy, Nginx) for TLS termination
- External Redis for state persistence
- External RabbitMQ for queue module (optional)
- Hardened container security settings
- Environment-based configuration
- Monitoring and observability
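For the TLS termination item, a minimal Caddyfile sketch in front of the REST API (the domain is a placeholder; Caddy provisions certificates automatically):

```
# Caddyfile sketch — terminate TLS and proxy to the engine's REST API
api.example.com {
    reverse_proxy localhost:3111
}
```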
## Stateful Dependencies

Certain modules require external services:

| Module | Dependency | Purpose | Required |
|---|---|---|---|
| Queue | Redis | Message queue backend | Optional |
| Stream | Redis | State synchronization | Optional |
| Cron | KV Store | Distributed lock coordination | No (uses file-based KV) |
| State | KV Store | Persistent state storage | No (uses file-based KV) |
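Wiring an external Redis into the containerized engine might look like the following Compose sketch (the image name and the environment variable name are assumptions; check the configuration reference for the real variable):

```yaml
# Compose sketch — external Redis for the Queue/Stream modules
services:
  redis:
    image: redis:7-alpine
  engine:
    image: motiadev/motia-engine:latest   # assumed image name
    environment:
      - REDIS_URL=redis://redis:6379      # assumed variable name
    depends_on:
      - redis
```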
## Scaling Considerations

### Horizontal Scaling
- Engine: Can run multiple instances behind a load balancer for HTTP traffic
- Workers: Scale independently by adding more worker processes
- WebSocket: Sticky sessions required for worker connections
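Sticky sessions for worker WebSocket connections can be approximated with Nginx's `ip_hash` balancing plus the standard WebSocket upgrade headers (host names and the TLS setup are placeholders):

```
# Nginx sketch — pin each worker to one engine instance
upstream motia_engine {
    ip_hash;                  # same client IP -> same engine instance
    server engine-1:49134;
    server engine-2:49134;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://motia_engine;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade
        proxy_set_header Connection "upgrade";
    }
}
```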
### Vertical Scaling
- Engine is lightweight (written in Rust, runs in a distroless container)
- Memory usage scales with number of active workers and in-flight invocations
- CPU usage depends on invocation throughput and module configuration
## Next Steps

- Docker Deployment: Deploy using Docker Compose with the full stack
- Production Setup: Security hardening and production best practices
- Configuration: Module configuration and environment variables