Daemon Architecture
The Mullvad daemon is the core security-critical component responsible for upholding the VPN client’s security guarantees. It is designed as an asynchronous actor system that can handle multiple concurrent operations without blocking.
Actor System Design
The daemon uses an actor-based architecture built on Tokio’s async runtime. This design allows the daemon to:
- Serve multiple frontend clients simultaneously
- Handle long-running operations without blocking
- Coordinate complex interactions between components
- Maintain responsiveness under all conditions
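The real daemon builds its actors on Tokio tasks and async channels; the same pattern can be sketched with std threads and `mpsc` channels. The `Command` variants and `spawn_daemon_actor` function below are illustrative, not the daemon’s actual API: each actor owns its state and processes messages sequentially, so the state itself needs no locks.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical commands; the real daemon defines many more.
enum Command {
    SetTargetState(bool), // true = connect, false = disconnect
    GetState(mpsc::Sender<bool>),
    Shutdown,
}

// Spawn an actor that owns its state and serves messages one at a time.
fn spawn_daemon_actor() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut connected = false;
        for cmd in rx {
            match cmd {
                Command::SetTargetState(c) => connected = c,
                // Replies travel over a one-shot channel supplied by the caller.
                Command::GetState(reply) => {
                    let _ = reply.send(connected);
                }
                Command::Shutdown => break,
            }
        }
    });
    tx
}

fn main() {
    let daemon = spawn_daemon_actor();
    daemon.send(Command::SetTargetState(true)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    daemon.send(Command::GetState(reply_tx)).unwrap();
    assert!(reply_rx.recv().unwrap()); // the actor applied the update
    daemon.send(Command::Shutdown).unwrap();
}
```

Because callers only enqueue messages, many frontends can issue commands concurrently without ever blocking each other.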
Core Actors
Component Responsibilities
Management Interface Server
The management interface is implemented using gRPC and defined in mullvad-management-interface/proto/management_interface.proto.
Key responsibilities:
- Accept and authenticate client connections
- Deserialize command messages from frontends
- Route commands to appropriate daemon actors
- Maintain event subscriptions for multiple clients
- Stream state changes, settings updates, and events to subscribers
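The last two responsibilities can be sketched as a subscriber registry: each connected frontend registers a channel, and events are fanned out to all of them, pruning subscribers whose connection has gone away. The `EventBroadcaster` type and `Event` variant here are hypothetical stand-ins for the daemon’s gRPC streaming machinery.

```rust
use std::sync::mpsc::{channel, Sender};

#[derive(Clone, Debug, PartialEq)]
enum Event {
    TunnelStateChanged(&'static str),
}

// Hypothetical registry: events are fanned out to every connected
// frontend; dead subscribers are dropped when a send fails.
struct EventBroadcaster {
    subscribers: Vec<Sender<Event>>,
}

impl EventBroadcaster {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }
    fn subscribe(&mut self, tx: Sender<Event>) {
        self.subscribers.push(tx);
    }
    fn broadcast(&mut self, event: Event) {
        // retain() keeps only subscribers that accepted the event.
        self.subscribers.retain(|tx| tx.send(event.clone()).is_ok());
    }
}

fn main() {
    let mut bc = EventBroadcaster::new();
    let (tx1, rx1) = channel();
    let (tx2, rx2) = channel();
    bc.subscribe(tx1);
    bc.subscribe(tx2);
    drop(rx2); // second frontend disconnects
    bc.broadcast(Event::TunnelStateChanged("connected"));
    assert_eq!(rx1.recv().unwrap(), Event::TunnelStateChanged("connected"));
    assert_eq!(bc.subscribers.len(), 1); // dead subscriber pruned
}
```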
Daemon Core
The main daemon actor coordinates all subsystems and maintains the application state. It manages:
- Settings persistence to disk
- Target state (connect/disconnect user intent)
- Device and account state
- Custom relay lists
- Access methods for API connectivity
It also:
- Dispatches commands to appropriate subsystems
- Aggregates state from multiple actors
- Broadcasts state change events to subscribed clients
- Coordinates interactions between components
- Prevents deadlocks through careful async orchestration
Account Manager
Handles all account-related operations:
- Account creation and login
- Device registration and management
- Voucher redemption
- Account data retrieval
- Token generation for web authentication
Device Manager
Manages the WireGuard device lifecycle:
- Device creation on login
- WireGuard key generation and rotation
- Key rotation scheduling (configurable interval)
- Device listing and removal
- Public key management
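Key rotation scheduling reduces to a small decision: given the last rotation time and the configured interval, is a rotation due? The sketch below is illustrative; the function name, signature, and 14-day interval are assumptions, not the daemon’s actual API.

```rust
use std::time::{Duration, SystemTime};

// Hypothetical scheduler check: rotate once the configured interval has
// elapsed since the last rotation.
fn rotation_due(last_rotation: SystemTime, interval: Duration, now: SystemTime) -> bool {
    match now.duration_since(last_rotation) {
        Ok(elapsed) => elapsed >= interval,
        // The clock went backwards; be conservative and do not rotate yet.
        Err(_) => false,
    }
}

fn main() {
    let interval = Duration::from_secs(14 * 24 * 60 * 60); // e.g. 14 days
    let last = SystemTime::UNIX_EPOCH;
    assert!(rotation_due(last, interval, last + interval + Duration::from_secs(1)));
    assert!(!rotation_due(last, interval, last + Duration::from_secs(60)));
}
```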
API Runtime
A dedicated actor that manages all REST API communication. Features:
- Connection pooling and reuse
- Shadowsocks proxy support for censorship resistance
- Request queuing and concurrent execution
- Connection resetting when tunnel state changes
- Offline state awareness (blocks requests when offline)
- Non-blocking operation (all requests can be dropped mid-flight)
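Offline-state awareness can be sketched as a queue in front of the network: while offline, requests are held instead of hitting the network, and the queue is drained when connectivity returns. The `ApiRuntime` struct, its fields, and the string "requests" are all hypothetical; the real runtime issues asynchronous HTTP requests.

```rust
use std::collections::VecDeque;

// Hypothetical offline-aware request queue.
struct ApiRuntime {
    offline: bool,
    queue: VecDeque<String>,
    sent: Vec<String>, // stand-in for requests actually put on the wire
}

impl ApiRuntime {
    fn new() -> Self {
        Self { offline: false, queue: VecDeque::new(), sent: Vec::new() }
    }

    fn request(&mut self, req: &str) {
        if self.offline {
            // Hold the request rather than failing against a dead network.
            self.queue.push_back(req.to_string());
        } else {
            self.sent.push(req.to_string()); // stand-in for a real HTTP call
        }
    }

    fn set_offline(&mut self, offline: bool) {
        self.offline = offline;
        if !offline {
            // Connectivity returned: drain everything that was queued.
            while let Some(req) = self.queue.pop_front() {
                self.sent.push(req);
            }
        }
    }
}

fn main() {
    let mut api = ApiRuntime::new();
    api.set_offline(true);
    api.request("GET /relays");
    assert!(api.sent.is_empty()); // blocked while offline
    api.set_offline(false);
    assert_eq!(api.sent, vec!["GET /relays".to_string()]);
}
```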
Relay List Updater
Manages the relay server list:
- Periodic updates from API
- Caching to local filesystem
- Parsing and validation
- Distribution to relay selector
- Version tracking
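Filesystem caching means a fetched relay list is written to disk so the daemon can start with a usable list even when the API is unreachable. A minimal sketch, with hypothetical function names and an example path (the real cache location and format are platform-specific):

```rust
use std::fs;
use std::path::Path;

// Hypothetical cache writer: persist the fetched relay list to disk.
fn store_cache(path: &Path, relay_list: &str) -> std::io::Result<()> {
    fs::write(path, relay_list)
}

// Hypothetical cache reader: None means no usable cache exists yet.
fn load_cache(path: &Path) -> Option<String> {
    fs::read_to_string(path).ok()
}

fn main() {
    let path = std::env::temp_dir().join("relays.json.example");
    store_cache(&path, r#"{"countries":[]}"#).unwrap();
    // On a later start, the cached list is available without the API.
    assert_eq!(load_cache(&path).unwrap(), r#"{"countries":[]}"#);
    let _ = fs::remove_file(&path);
}
```

In the daemon, loading would also run the parsing and validation step before the list is handed to the relay selector.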
GeoIP Handler
Determines the user’s exit location:
- Queries the location API when connected
- Caches location data
- Broadcasts location changes to frontends
Asynchronous Message Flow
The daemon processes commands asynchronously to maintain responsiveness. Here’s an example flow when updating relay constraints:
- The management interface returns immediately after queueing the command
- Settings are persisted asynchronously
- The relay selector and tunnel state machine (TSM) process updates independently
- State change events are broadcast to all connected clients
- No single operation blocks any other
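The first property above, "returns immediately after queueing", follows from using channels whose send operation does not wait for processing. A std-channel sketch (the real daemon uses Tokio channels; the `queue_commands` helper and the 20 ms delay are illustrative):

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;
use std::time::{Duration, Instant};

// Queue `n` commands and return how long the *caller* spent doing so.
// With an unbounded channel this is independent of processing speed.
fn queue_commands(tx: &Sender<&'static str>, n: usize) -> Duration {
    let start = Instant::now();
    for _ in 0..n {
        tx.send("update relay constraints").unwrap();
    }
    start.elapsed()
}

fn main() {
    let (tx, rx) = channel();
    // A deliberately slow actor: each command takes 20 ms to process.
    let worker = thread::spawn(move || {
        let mut processed = 0usize;
        for _cmd in rx {
            thread::sleep(Duration::from_millis(20));
            processed += 1;
        }
        processed
    });

    // Queueing 10 commands returns long before the ~200 ms of processing.
    let elapsed = queue_commands(&tx, 10);
    assert!(elapsed < Duration::from_millis(20));

    drop(tx); // close the channel so the worker's loop ends
    assert_eq!(worker.join().unwrap(), 10);
}
```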
Critical Paths and Dependencies
Several execution flows have complex dependencies that require careful coordination.
API Access During Blocking States
The API must be reachable even when the tunnel is down (for login, relay list updates, etc.):
- Firewall allows API endpoint traffic in all states
- API Runtime receives current endpoint from Tunnel State Machine
- TSM updates allowed endpoint when connecting/connected
- API Runtime never blocks TSM state transitions
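The "API endpoint allowed in all states" rule can be sketched as a firewall policy check: in a blocking state, only traffic to the currently allowed API endpoint passes. The `FirewallPolicy` type and the documentation-range address are illustrative; the real policy is applied through platform firewall backends.

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Hypothetical policy: even in blocking states, the one allowed API
// endpoint (kept current by the TSM) is reachable.
struct FirewallPolicy {
    blocking: bool,
    allowed_endpoint: SocketAddr,
}

impl FirewallPolicy {
    fn permits(&self, dest: SocketAddr) -> bool {
        if !self.blocking {
            return true; // not blocking: traffic flows through the tunnel rules
        }
        dest == self.allowed_endpoint
    }
}

fn main() {
    // 192.0.2.0/24 is a reserved documentation range, used here as a stand-in.
    let api = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(192, 0, 2, 1)), 443);
    let policy = FirewallPolicy { blocking: true, allowed_endpoint: api };
    assert!(policy.permits(api));
    assert!(!policy.permits(SocketAddr::new(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 9)), 443)));
}
```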
Settings Changes Affecting Active Tunnel
When settings change that affect the current tunnel connection:
- Daemon Core receives settings update command
- Settings persisted to disk (async, non-blocking)
- Relay Selector updated with new constraints
- TSM receives reconnect command if currently connected
- TSM tears down existing tunnel and establishes new one
- Frontend receives series of state transition events
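The steps above can be sketched as message forwarding: the core pushes the new constraints to the relay selector and, only if a tunnel is up, asks the TSM to reconnect. The `DaemonCore` struct and `apply_settings` method are hypothetical names for illustration; persistence to disk is omitted here but happens asynchronously alongside these sends.

```rust
use std::sync::mpsc::{channel, Sender};

// Hypothetical handles to the relay selector and the TSM; the core only
// forwards messages, so applying settings never blocks on tunnel work.
struct DaemonCore {
    relay_tx: Sender<String>,
    tunnel_tx: Sender<&'static str>,
}

impl DaemonCore {
    fn apply_settings(&self, constraints: String, connected: bool) {
        let _ = self.relay_tx.send(constraints);
        if connected {
            // Only an active tunnel needs to be torn down and re-established.
            let _ = self.tunnel_tx.send("reconnect");
        }
    }
}

fn main() {
    let (relay_tx, relay_rx) = channel();
    let (tunnel_tx, tunnel_rx) = channel();
    let core = DaemonCore { relay_tx, tunnel_tx };

    core.apply_settings("country:se".to_string(), true);
    assert_eq!(relay_rx.recv().unwrap(), "country:se");
    assert_eq!(tunnel_rx.recv().unwrap(), "reconnect");

    core.apply_settings("country:no".to_string(), false);
    assert_eq!(relay_rx.recv().unwrap(), "country:no");
    assert!(tunnel_rx.try_recv().is_err()); // no reconnect when disconnected
}
```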
Offline Detection Integration
The offline monitor affects multiple subsystems:
- Tunnel State Machine: Pauses reconnection attempts when offline
- API Runtime: Queues requests when offline
- Relay List Updater: Defers updates when offline
Platform-Specific Initialization
Desktop (Linux/Windows/macOS)
Android
On Android, the daemon is initialized via JNI from the VpnService.
iOS
The iOS app uses a different architecture, with WireGuardKit handling the tunnel while the Mullvad layer provides account management and relay selection.
Threading Model
- Main Runtime: Tokio multi-threaded runtime for async operations
- Management Interface: Runs on Tokio runtime, handles gRPC connections
- Daemon Actors: All run on the same Tokio runtime, communicate via channels
- Blocking Operations: Rare; when necessary, offloaded via tokio::task::spawn_blocking
Error Handling
The daemon follows a fail-secure approach:
- Errors in non-critical paths are logged but don’t crash the daemon
- Errors affecting security (firewall, tunnel) transition to error state
- Error state blocks all traffic to prevent leaks
- Recovery attempts are made automatically
- Unrecoverable errors require user intervention
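The fail-secure split can be sketched as an error classification: security-affecting failures move the daemon into a blocking error state, while non-critical failures are logged and the current state is kept. The enums and `handle_error` function below are illustrative names, not the daemon’s actual types.

```rust
// Hypothetical classification of daemon errors for illustration.
#[derive(Debug, PartialEq)]
enum TunnelState {
    Connected,
    Error { blocking: bool },
}

enum DaemonError {
    FirewallFailure,   // security-affecting
    GeoIpLookupFailed, // non-critical
}

fn handle_error(err: DaemonError, state: TunnelState) -> TunnelState {
    match err {
        // Fail secure: if the firewall cannot be configured, block everything.
        DaemonError::FirewallFailure => TunnelState::Error { blocking: true },
        // Non-critical: log and keep running in the current state.
        DaemonError::GeoIpLookupFailed => {
            eprintln!("geoip lookup failed; continuing");
            state
        }
    }
}

fn main() {
    assert_eq!(
        handle_error(DaemonError::FirewallFailure, TunnelState::Connected),
        TunnelState::Error { blocking: true }
    );
    assert_eq!(
        handle_error(DaemonError::GeoIpLookupFailed, TunnelState::Connected),
        TunnelState::Connected
    );
}
```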
State Persistence
The daemon persists state across restarts:
- Settings: JSON file in app data directory
- Account/Device: Encrypted storage
- Relay List: Cached JSON for offline access
- Target State: Whether user wanted to be connected on shutdown
Shutdown Procedure
On shutdown, the daemon waits for the tunnel state machine to finish cleanup, bounded by a timeout (TUNNEL_STATE_MACHINE_SHUTDOWN_TIMEOUT) to ensure timely termination even if cleanup hangs.
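The bounded wait can be sketched with a completion channel and a receive timeout. The `wait_for_shutdown` helper and the timeout values are illustrative; the actual timeout is the daemon’s TUNNEL_STATE_MACHINE_SHUTDOWN_TIMEOUT constant.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;
use std::time::Duration;

// Wait for a completion signal, but never longer than `timeout`.
// Returns true on a clean shutdown, false if the deadline expired.
fn wait_for_shutdown(done_rx: &Receiver<()>, timeout: Duration) -> bool {
    done_rx.recv_timeout(timeout).is_ok()
}

fn main() {
    // Simulated cleanup that hangs forever.
    let (done_tx, done_rx) = channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_secs(3600));
        let _ = done_tx.send(());
    });
    // The daemon exits anyway once the (illustrative) timeout expires.
    assert!(!wait_for_shutdown(&done_rx, Duration::from_millis(100)));

    // A well-behaved component signals completion in time.
    let (done_tx, done_rx) = channel();
    thread::spawn(move || {
        let _ = done_tx.send(());
    });
    assert!(wait_for_shutdown(&done_rx, Duration::from_millis(500)));
}
```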
Related Documentation
- Tunnel State Machine - State machine details
- Frontend Communication - How frontends interact with daemon
- Architecture Overview - High-level system architecture