Aurora OS employs a dual-target architecture that runs both natively on bare metal hardware and in browsers via WebAssembly. This page explains how code is shared, where implementations diverge, and the design principles that enable this approach.

Core principle

The hybrid design follows a simple rule:
Pure computation compiles to WASM and runs on both targets. Hardware-dependent code stays native-only, with browser equivalents in JavaScript.

Code partitioning

Pure computation modules compile to both native and WASM:
  • Scheduler logic: CFS vruntime calculation, priority weights
  • Policy engines: Security policy evaluation, capability validation
  • Agent runtime: Sandboxed agent execution (no I/O)
  • Crypto primitives: Hashing, encryption (uses WebCrypto in browser)
  • Data structures: Red-black trees, hash tables, queues
  • Algorithms: Path resolution, compression, parsing
Implementation:
// wasm-runtime/bridge/kernel-wasm.c:20
#include <emscripten.h>
#include <limits.h>

EMSCRIPTEN_KEEPALIVE
int wasm_schedule_next(int current_pid, int *vruntimes, int count) {
    // Pure computation: no syscalls, no I/O, no hardware access
    (void)current_pid;  // previous task; unused by this minimal picker
    int min_vruntime = INT_MAX;
    int next_pid = -1;
    
    for (int i = 0; i < count; i++) {
        if (vruntimes[i] < min_vruntime) {
            min_vruntime = vruntimes[i];
            next_pid = i;
        }
    }
    
    return next_pid;
}
Compiles to:
  • Native: x86_64-elf-gcc -c -o schedule.o schedule.c
  • WASM: emcc -O2 -o schedule.wasm schedule.c

Abstraction layers

Aurora OS uses abstraction layers to hide target differences:

HAL — Hardware Abstraction Layer

// kernel/include/hal.h:20
struct hal_ops {
    void (*init)(void);
    void (*enable_interrupts)(void);
    void (*disable_interrupts)(void);
    void (*set_page_table)(uintptr_t root);
    void (*flush_tlb)(uintptr_t virt);
    uint64_t (*read_timestamp)(void);
    void (*context_switch)(struct task *prev, struct task *next);
};

extern struct hal_ops hal;
Implementation per target:
// Native x86_64: kernel/arch/x86_64/hal.c:15
struct hal_ops hal = {
    .init = x86_64_init,
    .enable_interrupts = sti,
    .disable_interrupts = cli,
    .set_page_table = load_cr3,
    .flush_tlb = invlpg,
    .read_timestamp = rdtsc,
    .context_switch = x86_64_context_switch,
};

// Browser WASM: wasm-runtime/bridge/hal-wasm.c:10
struct hal_ops hal = {
    .init = wasm_init,
    .enable_interrupts = wasm_noop,        // No interrupts in WASM
    .disable_interrupts = wasm_noop,
    .set_page_table = wasm_set_page_table, // Virtual page table
    .flush_tlb = wasm_noop,                // No TLB in WASM
    .read_timestamp = wasm_performance_now,
    .context_switch = wasm_context_switch, // Pure register copy
};
Usage (same code, different target):
// kernel/src/proc/sched.c:80
void schedule(void) {
    struct task *next = pick_next_task();
    
    hal.disable_interrupts();    // x86_64: CLI instruction
                                  // WASM: no-op
    
    hal.context_switch(current, next);  // x86_64: assembly
                                         // WASM: register copy
    
    hal.enable_interrupts();     // x86_64: STI instruction
                                  // WASM: no-op
}
Implementation: kernel/include/hal.h, kernel/arch/*/hal.c

Syscall interface

Both targets implement the same syscall API:
// Syscall signature (same for native and WASM)
ssize_t sys_write(int fd, const void *buf, size_t count);
int sys_fork(void);
int sys_open(const char *path, int flags);
// ... 38 syscalls
Native implementation:
// kernel/src/syscall/sys_write.c:12
ssize_t sys_write(int fd, const void *buf, size_t count) {
    struct file *file = current->fds[fd];
    if (!file) return -EBADF;
    
    return file->ops->write(file, buf, count);
}
WASM/Browser implementation:
// wasm-runtime/pwa/bpe/bpe-syscall.js:50
sys_write(fd, buf, count) {
    const file = this.bpe.procMgr.current.fds[fd];
    if (!file) return -EBADF;
    
    if (fd === 1) {  // stdout
        console.log(new TextDecoder().decode(buf.slice(0, count)));
        return count;
    }
    
    // Write to OPFS
    return this.bpe.vfs.write(file.inode, buf, count);
}
User code (same for both targets):
// userland/shell/main.c:25
int main(void) {
    const char *msg = "Hello from Aurora OS\n";
    syscall(SYS_WRITE, 1, msg, strlen(msg));  // Works on native + browser
    return 0;
}

Build system

The Makefile targets different outputs:
# Native kernel ELF
build: rust-caps $(KERNEL_BIN)
	$(LD) -T kernel/kernel.ld -o $(KERNEL_BIN) $(KERNEL_OBJS) $(RUST_LIB)

# WASM module
wasm:
	emcc wasm-runtime/bridge/kernel-wasm.c \
	    -o wasm-runtime/pwa/kernel.js \
	    -s EXPORTED_RUNTIME_METHODS='["ccall","cwrap"]' \
	    -s MODULARIZE=1 -s EXPORT_NAME='AuroraKernel' \
	    -O2

# ISO for QEMU/hardware
iso: build
	xorriso -as mkisofs -b boot/limine/limine-bios-cd.bin \
	    -o $(BUILD_DIR)/aurora-os.iso $(ISO_DIR)
Targets:
  • make build → dist/aurora-kernel.elf (native x86_64)
  • make wasm → wasm-runtime/pwa/kernel.wasm (browser)
  • make iso → dist/aurora-os.iso (bootable ISO)
Implementation: Makefile

Data flow comparison

Native kernel

User program
    ↓ (SYSCALL instruction)
Syscall entry (assembly)
    ↓ (function call)
Syscall handler (C)
    ↓ (VFS layer)
File system driver (tmpfs, ext2)
    ↓ (device driver)
Hardware (SATA, NVMe)
Latency: ~100 ns (SYSCALL) + ~1 μs (VFS) + ~50 μs (disk I/O) = ~51 μs

Browser runtime

User program (JavaScript)
    ↓ (function call)
BPE syscall handler (JavaScript)
    ↓ (async call)
BPE VFS (JavaScript)
    ↓ (browser API)
OPFS (browser storage)
Latency: ~50 μs (JS call) + ~100 μs (VFS) + ~2 ms (OPFS) = ~2.15 ms
Browser is ~40× slower for I/O due to async API overhead, but this is acceptable for a development/demo environment.

Shared components

Scheduler logic

CFS vruntime calculation compiles to WASM.
  • Native: kernel/src/proc/sched.c:schedule()
  • WASM: wasm-runtime/bridge/kernel-wasm.c:wasm_schedule()

Capability system

Rust no_std crate compiles to both targets.
  • Native: links into the kernel ELF
  • WASM: compiles to wasm32-unknown-unknown

Crypto primitives

Hash, encrypt, and sign functions.
  • Native: uses Rust crypto crates
  • WASM: delegates to the WebCrypto API
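One way to express "same primitive, different backend" is compile-time selection on `__EMSCRIPTEN__`. The sketch below uses a toy FNV-1a hash as a stand-in for the real primitive; `aurora_hash` is a hypothetical name, and the WebCrypto delegation is only indicated in a comment (crypto.subtle.digest is async, so a real bridge needs an async JS shim rather than a direct call).

```c
#include <stddef.h>
#include <stdint.h>

/* Toy 64-bit FNV-1a stands in for the real hash primitive. */
static uint64_t fnv1a64(const uint8_t *data, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;    /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;             /* FNV prime */
    }
    return h;
}

uint64_t aurora_hash(const uint8_t *data, size_t len) {
#if defined(__EMSCRIPTEN__)
    /* Browser build: would delegate to WebCrypto (crypto.subtle.digest)
     * through an async JS bridge; toy stand-in keeps the sketch runnable. */
    return fnv1a64(data, len);
#else
    /* Native build: would call into the Rust crypto crates. */
    return fnv1a64(data, len);
#endif
}
```

Callers see one function; the preprocessor picks the backend, mirroring how the HAL picks its ops table.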

Agent runtime

Sandboxed execution engine.
  • Native: uses a seccomp-bpf syscall filter
  • WASM: uses WASM sandbox isolation

Design principles

  1. Minimize duplication: Share code wherever possible via WASM compilation
  2. Clean abstractions: HAL hides hardware differences from kernel code
  3. Identical APIs: Syscall interface identical on both targets
  4. Accept tradeoffs: Browser is slower but enables rapid development
  5. Target-specific optimizations: Use native instructions where available (SIMD, AES-NI)

Testing strategy

  1. Browser testing: develop and test features rapidly in the browser with instant reload.
     Command: open wasm-runtime/pwa/index.html in a browser
  2. WASM unit tests: test WASM-compiled modules in isolation.
     Command: node wasm-runtime/test/test-wasm.js
  3. Native unit tests: test native kernel modules in QEMU.
     Command: make test
  4. Integration tests: test the full system in QEMU with Playwright.
     Command: npm test (runs Playwright against QEMU)
  5. Real hardware: final validation on bare metal.
     Command: make iso && dd if=dist/aurora-os.iso of=/dev/sdX
See Testing for details.

Performance comparison

Benchmark                    | Native | Browser | Ratio
Context switch               | 2 μs   | 50 μs   | 25×
Syscall overhead             | 100 ns | 50 μs   | 500×
Memory allocation (kmalloc)  | 200 ns | 5 μs    | 25×
File read (4 KiB)            | 50 μs  | 2 ms    | 40×
Process fork                 | 50 μs  | 500 μs  | 10×
Boot time                    | 3 s    | 2 s     | 0.67× (browser faster!)
Browser is slower for most operations but faster to boot (no hardware init). Use browser for development, native for production.

Limitations

The hybrid design has inherent limitations.

Cannot share:
  • Inline assembly (x86_64 instructions)
  • Direct hardware access (MMIO, I/O ports)
  • Interrupt handlers (hardware interrupts don’t exist in WASM)
  • Memory barriers (WASM is single-threaded)
  • Atomic operations (WASM threads are experimental)
Workarounds:
  • Use HAL for hardware abstractions
  • Emulate interrupts as timer callbacks in browser
  • Use async I/O instead of blocking in browser
  • Implement atomic ops via browser APIs (Atomics.add, etc.)
See docs/browser-limitations.md for full list.

Future improvements

Use WASI (WebAssembly System Interface) for portable syscalls:
#include <wasi/api.h>

// Simplified sketch: the actual WASI C API is __wasi_path_open() /
// __wasi_fd_read(), which take preopened-directory capabilities and
// iovec buffers rather than these convenience signatures.
wasi_fd_t fd = wasi_path_open("file.txt", O_RDONLY);
wasi_fd_read(fd, buf, sizeof(buf));
Benefits:
  • Standard syscall interface for WASM
  • Portable across WASM runtimes (wasmtime, wasmer, browser)
  • Security: capability-based file access
Compile Rust drivers to both native and WASM:
#![no_std]

#[cfg(target_arch = "x86_64")]
use native_hal as hal;

#[cfg(target_arch = "wasm32")]
use wasm_hal as hal;

pub fn driver_init() {
    hal::register_interrupt(33, keyboard_handler);
}
Benefits:
  • Type safety
  • Better WASM support than C
  • Fearless concurrency (when WASM threads stabilize)
Run identical tests on both targets:
// tests/shared/test_vfs.c
void test_vfs_read() {
    int fd = open("/test.txt", O_RDONLY);
    char buf[100];
    ssize_t n = read(fd, buf, sizeof(buf));
    assert(n > 0);
    close(fd);
}
Compile to:
  • Native: Links into QEMU test runner
  • WASM: Runs in Node.js with BPE/U

Next steps

  • Kernel architecture: deep dive into the native kernel
  • BPE/U engine: explore the browser runtime
  • Building: build both targets from source
  • Testing: test suite for both targets
