The flora runtime is a Rust service that hosts Discord connectivity, V8 isolates, and an HTTP API. It bridges Discord events into JavaScript execution contexts with strict isolation and resource limits.

Boot Flow

At startup, the runtime initializes in this order:
  1. Load config - Reads config.toml and environment variables
  2. Connect to Postgres - Runs migrations via SQLx
  3. Connect to Redis - Initializes cache client with exponential backoff reconnect policy
  4. Initialize V8 - Once per process via v8_init::init()
  5. Load SDK bundle - Bundles the flora SDK into the default runtime for all workers
  6. Restore deployments - Loads cached guild scripts from the database
  7. Start services - Launches Discord client and HTTP server concurrently
The runtime uses a current-thread Tokio runtime per worker (apps/runtime/src/runtime/worker.rs:94-97), not a multi-threaded pool. Since a V8 isolate must not be used from multiple threads concurrently, pinning each worker to a single thread sidesteps V8 thread-safety issues.
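The ordering above can be sketched as a strictly sequential async init, where each step must complete before the next begins. This is an illustrative sketch: the step names and the `boot` helper are stand-ins, not the actual Rust initializers.

```javascript
// Illustrative sketch of the boot ordering. Each init function is a
// stand-in for the real Rust initializer; a failure at any step aborts
// the whole boot, and later steps never run.
async function boot(steps) {
  const completed = [];
  for (const [name, init] of steps) {
    await init(); // must finish before the next step starts
    completed.push(name);
  }
  return completed;
}

const bootSteps = [
  ['config',      async () => {}],
  ['postgres',    async () => {}],
  ['redis',       async () => {}],
  ['v8',          async () => {}],
  ['sdk-bundle',  async () => {}],
  ['deployments', async () => {}],
  ['services',    async () => {}],
];
```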

Worker Pool

The BotRuntime manages a pool of worker threads:
pub struct BotRuntime {
    workers: Vec<Worker>,
    num_workers: usize,
    secrets: Arc<SecretService>,
    guild_routes: Arc<parking_lot::Mutex<HashMap<String, usize>>>,
    migration_queues: Arc<Mutex<HashMap<String, Vec<QueuedGuildEvent>>>>,
}

Worker Initialization

Workers are initialized sequentially to avoid V8 race conditions (apps/runtime/src/runtime/mod.rs:68-75):
for (i, worker) in self.workers.iter().enumerate() {
    worker.initialize().await?;
    info!(target: "flora:runtime", worker_id = i, "worker initialized");
}
Each worker:
  • Runs in its own OS thread
  • Maintains a map of guild ID → isolate state
  • Hosts a default runtime with the SDK bundle
  • Processes commands via an unbounded channel

Guild Routing

Guilds are assigned to workers by hashing the guild ID and taking the result modulo the worker count (apps/runtime/src/runtime/mod.rs:90-110). Note that this is plain modulo hashing rather than consistent hashing, so changing the worker count remaps most guilds:
fn default_worker_for_guild(&self, guild_id: &str) -> usize {
    let mut hasher = DefaultHasher::new();
    guild_id.hash(&mut hasher);
    (hasher.finish() as usize) % self.num_workers
}
Once assigned, the mapping is stored in guild_routes for fast lookups.
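The same routing scheme can be sketched in a language-neutral way. The Rust code uses DefaultHasher (SipHash); FNV-1a below is an illustrative stand-in — the point is only that the mapping is deterministic and always lands in range:

```javascript
// Deterministic guild -> worker routing: hash the guild ID, mod worker count.
// FNV-1a is used here for illustration; the runtime uses Rust's DefaultHasher.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function defaultWorkerForGuild(guildId, numWorkers) {
  return fnv1a(guildId) % numWorkers;
}
```

Because the hash is a pure function of the guild ID, the same guild always routes to the same worker for a fixed pool size, which is why the result can be cached in guild_routes.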

Event Dispatch

Discord Event Flow

  1. Gateway event arrives at DiscordHandler::dispatch() (apps/runtime/src/discord_handler.rs:26)
  2. Event serialized to JSON payload (apps/runtime/src/discord_handler.rs:66-72)
  3. Guild ID extracted; non-guild events are dropped (apps/runtime/src/discord_handler.rs:74-77)
  4. Event routed to appropriate worker (apps/runtime/src/runtime/mod.rs:179-207)
  5. Worker dispatches to guild isolate’s event handlers
  6. Handlers registered via on() execute within timeout
  7. Ops map back to Discord REST calls

Dispatch Timeout

Each event handler has a configurable timeout (default 3s, configured via RUNTIME_DISPATCH_TIMEOUT_SECS):
on('messageCreate', async (msg) => {
  // This handler must complete within 3 seconds
  await msg.reply('Hello!')
})
If the handler exceeds the timeout:
  • The isolate’s event loop is terminated
  • An error is logged
  • The event is dropped
Long-running operations in event handlers will cause timeouts. Use cron jobs for background tasks.
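The timeout semantics can be sketched with a promise race. This is illustrative only: the runtime enforces the deadline by terminating the V8 event loop, not by racing promises, but the observable behavior — slow handlers lose and the event is dropped — is the same.

```javascript
// Sketch: run a handler against a deadline. If the deadline wins,
// the event is dropped (and the runtime would log an error).
async function dispatchWithTimeout(handler, payload, timeoutMs) {
  let timer;
  const deadline = new Promise((resolve) => {
    timer = setTimeout(() => resolve({ timedOut: true }), timeoutMs);
  });
  const run = handler(payload).then((value) => ({ timedOut: false, value }));
  const result = await Promise.race([run, deadline]);
  clearTimeout(timer);
  return result;
}
```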

Event Queueing During Migration

When a guild runtime is being migrated to another worker, incoming events are queued (apps/runtime/src/runtime/mod.rs:186-199):
if self.enqueue_migrating_event(gid, QueuedGuildEvent {
    event: event.to_string(),
    payload: payload.clone(),
}).await {
    return Ok(()); // Event queued
}
After migration completes, queued events are replayed in order (apps/runtime/src/runtime/mod.rs:164-174).
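The queue-and-replay behavior amounts to the following minimal sketch; the class and method names here are illustrative, not the runtime's actual API:

```javascript
// Sketch of per-guild event queueing during migration: while a guild is
// marked as migrating, events are buffered; when the migration completes,
// they are replayed in arrival order.
class MigrationQueues {
  constructor() { this.queues = new Map(); }

  beginMigration(guildId) { this.queues.set(guildId, []); }

  // Returns true if the event was queued (caller stops dispatching).
  enqueueIfMigrating(guildId, event) {
    const q = this.queues.get(guildId);
    if (!q) return false;
    q.push(event);
    return true;
  }

  // Drain the queue in order once the migration completes.
  finishMigration(guildId, dispatch) {
    const q = this.queues.get(guildId) ?? [];
    this.queues.delete(guildId);
    for (const event of q) dispatch(event);
  }
}
```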

Isolate Lifecycle

Creation

When a guild script is deployed (apps/runtime/src/runtime/worker.rs:152-217):
  1. Worker receives DeployGuild command
  2. Creates new JsRuntime with ops extension
  3. Injects guild-scoped state (HTTP client, KV service, secrets)
  4. Executes runtime prelude (SDK setup)
  5. Loads and evaluates guild script bundle
  6. Extracts dispatch function from script
  7. Runs event loop until quiescence
If boot exceeds timeout, the isolate is terminated and an error is returned.

State Management

Each isolate maintains:
pub struct JsRuntimeState {
    pub runtime: JsRuntime,
    pub dispatch_fn: Option<Global<v8::Function>>,
    pub last_dispatch_end: Instant,
}
  • runtime - The Deno Core JS runtime
  • dispatch_fn - Cached reference to the script’s dispatch function
  • last_dispatch_end - Timestamp for metrics and debugging

Termination

Isolates are terminated when:
  • A new deployment replaces the script
  • The bot leaves the guild
  • A fatal error occurs during execution
  • Migration moves the guild to another worker
Termination uses JsRuntime::v8_isolate().terminate_execution() to forcefully stop the event loop.

Cron Scheduler

Each worker runs a per-second cron tick (apps/runtime/src/runtime/worker.rs:102-116):
loop {
    tokio::select! {
        _ = cron_interval.tick() => {
            run_cron_tick(
                &cron_registry,
                &mut guild_runtimes,
                &mut default_runtime,
                worker_id,
                &limits,
            ).await;
        }
        // ...
    }
}

Cron Job Registry

The CronRegistry tracks jobs per guild (apps/runtime/src/ops/cron.rs):
  • Stored in Arc<parking_lot::Mutex<CronRegistry>>
  • Keyed by guild ID
  • Uses the croner crate for POSIX/Vixie-cron compatible parsing
  • Enforces per-guild job limit (default 32)

Cron Execution

When a cron job is due:
  1. Registry checks next_run timestamp
  2. If the job is due and not already running (or was registered with skipIfRunning: false), a synthetic event is dispatched
  3. Event type is __cron:<name>
  4. Handler executes with cron-specific timeout (default 5s)
  5. is_running flag cleared on completion
Cron jobs are not persisted. They are re-registered when the script loads (apps/www/limitations.md:12-30).
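The due-check in steps 1-2 can be sketched as a pure predicate. This is illustrative: the real registry lives in Rust and uses the croner crate for schedule parsing, and the field names below are assumptions.

```javascript
// Sketch: decide whether a cron job should fire at time `now`.
// A job fires when it is due and either not currently running,
// or registered with skipIfRunning: false.
function shouldFire(job, now) {
  if (now < job.nextRun) return false;
  if (job.isRunning && job.skipIfRunning !== false) return false;
  return true;
}
```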

KV Store

Each KV store is scoped to a single guild and store name.

Storage Backend

  • Sled: Embedded key-value database on disk (data/kv)
  • Postgres: Metadata index (store names, key counts)

Constraints

Constraint            Limit
Value size            1 MB
Key length            512 characters
Store name            64 characters
List default limit    100
List max limit        1000

Features

  • Optional TTL per key
  • Optional metadata (any JSON value)
  • Prefix filtering
  • Cursor-based pagination
Example usage:
// Get a value
const value = await kv.get('user:123')

// Set with TTL
await kv.set('session:abc', { userId: '123' }, {
  ttl: 3600,
  metadata: { createdAt: Date.now() }
})

// List with prefix
const keys = await kv.list({ prefix: 'user:' })
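Cursor-based pagination is not shown above. Here is a minimal in-memory sketch of the list semantics (prefix filter, limit, opaque cursor); the real store is backed by Sled, and the actual cursor format is an implementation detail — this sketch simply reuses the last returned key:

```javascript
// Sketch of kv.list({ prefix, limit, cursor }) semantics over sorted keys.
// The cursor here is the last key of the previous page; the real cursor
// is opaque and may differ.
function listKeys(allKeys, { prefix = '', limit = 100, cursor = null } = {}) {
  const sorted = [...allKeys].filter((k) => k.startsWith(prefix)).sort();
  const start = cursor ? sorted.findIndex((k) => k > cursor) : 0;
  const page = start < 0 ? [] : sorted.slice(start, start + limit);
  const next = page.length === limit ? page[page.length - 1] : null;
  return { keys: page, cursor: next };
}
```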

Logs and Metrics

The runtime exposes logs via the HTTP API:
  • Query logs: GET /logs with filters (guild, level, timestamp)
  • Stream logs: GET /logs/stream using Server-Sent Events
  • CLI streaming: flora logs -f follows logs in real time
Logs are captured via a custom LogSink (apps/runtime/src/log_sink.rs) that intercepts script console output and runtime tracing.
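GET /logs/stream uses standard Server-Sent Events framing, so any SSE client works. As a sketch, parsing the `data:` frames out of a raw stream chunk looks like this (this is generic SSE parsing, independent of this runtime; the log record shape is an assumption):

```javascript
// Sketch: extract JSON log records from raw SSE text. Events are
// separated by a blank line; payload lines are prefixed with "data:".
function parseSseChunk(text) {
  return text
    .split('\n\n')
    .filter((block) => block.trim().length > 0)
    .map((block) =>
      block
        .split('\n')
        .filter((line) => line.startsWith('data:'))
        .map((line) => line.slice(5).trim())
        .join('\n')
    )
    .filter((data) => data.length > 0)
    .map((data) => JSON.parse(data));
}
```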

Configuration

Runtime behavior is controlled via flora_config::RuntimeConfig (crates/flora_config/src/lib.rs:79-114):
[runtime]
# Worker pool size (1-64)
max_workers = 4

# Isolate initialization timeout (seconds)
boot_timeout_secs = 5

# Module/script load timeout (seconds)
load_timeout_secs = 30

# Per-event dispatch timeout (seconds)
dispatch_timeout_secs = 3

# Max script size (bytes, default 8MB)
max_script_bytes = 8388608

# Max deployment files
max_bundle_files = 200

# Max bundle source bytes (default 1MB)
max_bundle_total_bytes = 1048576

# Max cron jobs per guild
max_cron_jobs = 32

# Cron handler timeout (seconds)
cron_timeout_secs = 5

# Migration quiesce timeout (milliseconds)
migration_timeout_ms = 500
All values can be overridden via environment variables (e.g., RUNTIME_MAX_WORKERS=8).
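The override rule amounts to: an environment variable named RUNTIME_<KEY> (key upper-cased) takes precedence over the config.toml value. A minimal sketch, assuming numeric coercion for numeric settings (the actual precedence logic lives in flora_config):

```javascript
// Sketch: resolve a runtime config value with env-var override.
// e.g. key "max_workers" is overridden by RUNTIME_MAX_WORKERS.
function resolveConfig(key, fileValue, env) {
  const envKey = 'RUNTIME_' + key.toUpperCase();
  if (envKey in env) {
    const n = Number(env[envKey]);
    return Number.isNaN(n) ? env[envKey] : n;
  }
  return fileValue;
}
```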
