ZeroClaw Framework

Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.

ZeroClaw is the runtime framework for agentic workflows — infrastructure that abstracts models, tools, memory, and execution so agents can be built once and run anywhere.

Ultra-Lightweight

Runs on any hardware with < 5MB RAM. That's roughly 99% less memory than the TypeScript alternative and up to 98% lower deployment hardware cost (see the comparison below).

Blazing Fast

< 10ms cold start on edge hardware (0.8GHz). The single-binary Rust runtime keeps startup near-instant.

Trait-Driven Architecture

Swap providers, channels, tools, memory backends, and runtime adapters without touching core code.

Multi-Platform

One binary workflow across ARM, x86, RISC-V. Deploy on cloud VMs, Raspberry Pi, microcontrollers, or bare metal.

Why Teams Choose ZeroClaw

  • Small Rust binary with fast startup and minimal memory footprint. Perfect for resource-constrained environments where every megabyte counts.
  • Built-in pairing, strict sandboxing, explicit allowlists, and workspace scoping. Security isn't bolted on; it's architectural.
  • Core systems are traits: providers, channels, tools, memory, tunnels. Extend without forking. No vendor lock-in.
  • Drop-in support for OpenAI-compatible endpoints plus pluggable custom providers. Use any model from any provider.

Hardware Support

ZeroClaw runs on an exceptionally wide range of hardware platforms:

Microcontrollers

  • STM32 Nucleo boards
  • Arduino (via zeroclaw-arduino)
  • ESP32 (with UI support)

Single-Board Computers

  • Raspberry Pi (with GPIO support)
  • Any ARM/ARM64 Linux board
  • RISC-V development boards

Cloud & Desktop

  • x86_64 Linux/macOS/Windows
  • ARM64 cloud instances
  • Docker/Podman containers

Performance Comparison

Local machine benchmark (macOS arm64, Feb 2026) normalized for 0.8GHz edge hardware:
| Metric            | OpenClaw        | NanoBot    | PicoClaw    | ZeroClaw |
| ----------------- | --------------- | ---------- | ----------- | -------- |
| Language          | TypeScript      | Python     | Go          | Rust     |
| RAM Usage         | > 1GB           | > 100MB    | < 10MB      | < 5MB    |
| Startup (0.8GHz)  | > 500s          | > 30s      | < 1s        | < 10ms   |
| Binary Size       | ~28MB           | N/A        | ~8MB        | ~8.8MB   |
| Min Hardware Cost | $599 (Mac mini) | ~$50 (SBC) | $10 (board) | Any      |
ZeroClaw results measured on release builds using /usr/bin/time -l. OpenClaw requires Node.js runtime (~390MB overhead). RAM figures are runtime memory; build-time requirements are higher (~2GB RAM + 6GB disk for source builds).

Architecture Highlights

Trait-Driven Design

Every major subsystem is defined by a trait interface:
```rust
// From src/providers/traits.rs
pub trait Provider: Send + Sync {
    async fn chat(&self, request: ChatRequest) -> Result<ChatResponse>;
    fn name(&self) -> &str;
    fn supports_streaming(&self) -> bool;
}

// From src/channels/traits.rs
pub trait Channel: Send + Sync {
    async fn send(&self, message: &str) -> Result<()>;
    async fn listen(&self) -> Result<String>;
    fn name(&self) -> &str;
}

// From src/tools/traits.rs
pub trait Tool: Send + Sync {
    async fn execute(&self, params: ToolParams) -> Result<ToolResult>;
    fn schema(&self) -> ToolSchema;
}
```
Implement a trait, register it in the factory, and you're done. No core rewrites needed.
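To make the pattern concrete, here is a simplified sketch of adding a new provider. It is synchronous and uses a hypothetical `EchoProvider` and a `HashMap`-backed factory rather than ZeroClaw's actual registration API:

```rust
use std::collections::HashMap;

// Simplified, synchronous stand-in for the async Provider trait above.
trait Provider {
    fn chat(&self, request: &str) -> String;
    fn name(&self) -> &str;
}

// Hypothetical provider that echoes the request back.
// A real implementation would call an LLM API here.
struct EchoProvider;

impl Provider for EchoProvider {
    fn chat(&self, request: &str) -> String {
        format!("echo: {request}")
    }
    fn name(&self) -> &str {
        "echo"
    }
}

// Minimal factory: providers are registered under their own name.
struct ProviderFactory {
    providers: HashMap<String, Box<dyn Provider>>,
}

impl ProviderFactory {
    fn new() -> Self {
        Self { providers: HashMap::new() }
    }
    fn register(&mut self, provider: Box<dyn Provider>) {
        self.providers.insert(provider.name().to_string(), provider);
    }
    fn get(&self, name: &str) -> Option<&dyn Provider> {
        self.providers.get(name).map(|p| p.as_ref())
    }
}

fn main() {
    let mut factory = ProviderFactory::new();
    factory.register(Box::new(EchoProvider));

    let provider = factory.get("echo").expect("provider registered");
    println!("{}", provider.chat("hello"));
}
```

The core never needs to know which concrete providers exist; it only resolves a name to a `dyn Provider` at runtime.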

Research Phase

The agent proactively gathers information through tools before generating a response, grounding answers in retrieved facts and reducing hallucinations.
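Conceptually, the research phase runs tool calls before the model answers. The control-flow sketch below uses illustrative stub functions, not ZeroClaw's actual API:

```rust
// Illustrative research-then-respond loop. `search_tool` and
// `generate_response` stand in for real tool and provider calls.
fn search_tool(query: &str) -> Vec<String> {
    // A real implementation would run web search, file search, etc.
    vec![format!("fact about {query}")]
}

fn generate_response(question: &str, facts: &[String]) -> String {
    // A real implementation would send the question plus the gathered
    // facts to the provider as context.
    format!("{} (grounded in {} fact(s))", question, facts.len())
}

fn answer(question: &str) -> String {
    // 1. Gather evidence through tools first...
    let facts = search_tool(question);
    // 2. ...then generate a response grounded in that evidence.
    generate_response(question, &facts)
}

fn main() {
    println!("{}", answer("What is ZeroClaw?"));
}
```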

Secure Runtime

  • Pairing-based gateway authentication with OTP support
  • Workspace scoping prevents unauthorized file access
  • Explicit allowlists for domains and commands
  • Sandboxing support (Landlock on Linux, Bubblewrap)
  • Secret encryption using ChaCha20-Poly1305
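As a simplified illustration of workspace scoping, a tool-side path check might reject any request that could escape the workspace root. This is a sketch, not ZeroClaw's actual sandbox code; a real implementation would also resolve symlinks (e.g. via `canonicalize`):

```rust
use std::path::{Component, Path, PathBuf};

// Resolve a requested path against the workspace root, rejecting
// absolute paths and any `..` component that would escape it.
fn scoped_path(workspace: &Path, requested: &str) -> Option<PathBuf> {
    let mut resolved = workspace.to_path_buf();
    for component in Path::new(requested).components() {
        match component {
            Component::Normal(part) => resolved.push(part),
            Component::CurDir => {}
            // `..`, root, and prefix components are rejected outright.
            _ => return None,
        }
    }
    Some(resolved)
}

fn main() {
    let ws = Path::new("/agent/workspace");
    assert!(scoped_path(ws, "notes/todo.md").is_some());
    assert!(scoped_path(ws, "../etc/passwd").is_none());
    assert!(scoped_path(ws, "/etc/passwd").is_none());
    println!("workspace scoping checks passed");
}
```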

Supported Providers

Major Providers

  • Anthropic (Claude)
  • OpenAI (GPT-4, GPT-5)
  • Google Gemini
  • OpenRouter (multi-model gateway)
  • Groq, DeepSeek, Mistral

Regional & Specialized

  • GLM/Zhipu (China)
  • Qwen/DashScope (Alibaba)
  • Minimax, Moonshot (Kimi)
  • Volcengine (Doubao)
  • Venice, Together AI, Fireworks

Self-Hosted & Local

  • Ollama (local models)
  • llama.cpp (GGUF models)
  • vLLM, SGLang (inference servers)
  • Custom OpenAI-compatible endpoints
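Pointing the runtime at a self-hosted OpenAI-compatible server typically needs only a base URL and a model name. The key names below are illustrative, not ZeroClaw's actual schema; consult the Configuration guide:

```toml
# Illustrative provider entry for a local OpenAI-compatible server.
# Key names are hypothetical; see the Configuration guide for the
# real schema.
[provider]
kind = "openai-compatible"
base_url = "http://localhost:11434/v1"   # e.g. a local Ollama endpoint
model = "llama3.1:8b"
api_key = "unused-for-local"
```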

Supported Channels

Connect your agent to communication platforms:
  • Chat: Telegram, Discord, Slack, Matrix (E2EE), Mattermost
  • Enterprise: Microsoft Teams, Lark/Feishu, DingTalk, Nextcloud Talk
  • Mobile: WhatsApp, Signal, QQ, WeChat
  • Email: IMAP/SMTP (async-imap, lettre)
  • Gateway: Built-in HTTP webhook server with SSE streaming

Built-In Tools

System Tools

  • Shell command execution
  • File operations (read/write/search)
  • Directory navigation
  • Process management

Web & Network

  • Web search (DuckDuckGo, Brave)
  • HTTP requests
  • Web page fetching (HTML to Markdown)
  • Browser automation (Selenium/CDP)

Integrations

  • GitHub API
  • Composio tool hub
  • Pushover notifications
  • Custom WASM plugins

Hardware

  • GPIO control (Raspberry Pi)
  • Serial communication (STM32, Arduino)
  • USB device enumeration
  • Peripheral tool delegation

Memory Backends

  • SQLite (default, embedded)
  • PostgreSQL (distributed)
  • Markdown (human-readable, git-friendly)
  • Lucid (high-performance vector search)
  • Embeddings (semantic similarity with configurable providers)
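Memory backends plug in through the same trait mechanism as providers and tools. A simplified, synchronous sketch (not ZeroClaw's actual memory trait) with a trivial in-memory backend:

```rust
use std::collections::HashMap;

// Simplified stand-in for a memory-backend trait; the real interface
// would be async and richer (semantic search, scoping, persistence).
trait Memory {
    fn store(&mut self, key: &str, value: &str);
    fn recall(&self, key: &str) -> Option<String>;
}

// Trivial in-memory backend. SQLite, PostgreSQL, or Markdown backends
// would implement the same trait against their own storage.
struct InMemory {
    entries: HashMap<String, String>,
}

impl Memory for InMemory {
    fn store(&mut self, key: &str, value: &str) {
        self.entries.insert(key.to_string(), value.to_string());
    }
    fn recall(&self, key: &str) -> Option<String> {
        self.entries.get(key).cloned()
    }
}

fn main() {
    let mut memory = InMemory { entries: HashMap::new() };
    memory.store("user.name", "Ada");
    println!("{:?}", memory.recall("user.name"));
}
```

Because callers only see `dyn Memory`, switching from the embedded default to a distributed backend is a configuration change, not a code change.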

What’s Next?

Quick Start

Get your first agent running in under 5 minutes

Installation Guide

Detailed installation for all platforms

Configuration

Configure providers, channels, and runtime options

API Reference

Dive into traits, schemas, and advanced usage

Official Repository: github.com/zeroclaw-labs/zeroclaw
Dual-Licensed: MIT OR Apache-2.0 for maximum openness and commercial compatibility.
