Get OneClaw up and running in minutes. This guide will take you from building the binary to running your first AI agent.

Prerequisites

1. Install Rust

OneClaw requires Rust 1.85 or later with edition 2024 support.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup update
Verify your installation:
rustc --version
# Should show: rustc 1.85.0 or later
2. Clone the repository

git clone https://github.com/nclamvn/oneclaw.git
cd oneclaw

Build and Run

1. Build the binary

Build OneClaw in release mode for optimal performance:
cargo build --release
This produces a ~3.4MB binary in target/release/oneclaw-core.
2. Run OneClaw

Start the interactive CLI:
cargo run --release -p oneclaw-core
You’ll see the OneClaw prompt:
oneclaw>
3. Try basic commands

Test the system with these commands:
# View system status
oneclaw> status

# Store a memory
oneclaw> remember The project started in February 2026

# Search memories
oneclaw> recall project start

# View available commands
oneclaw> help

Configuration

1. Create configuration file

Create a config directory in your working directory; the next step fills in config/default.toml inside it:
mkdir -p config
2. Configure an LLM provider

OneClaw supports 6 providers. Choose one and add its configuration; the example below uses Anthropic as the primary provider with Ollama as a local fallback:
[security]
deny_by_default = true

[provider]
primary = "anthropic"
model = "claude-sonnet-4-20250514"
max_tokens = 1024
temperature = 0.3
fallback = ["ollama"]

[provider.keys]
# Or set ANTHROPIC_API_KEY environment variable
# anthropic = "sk-ant-..."
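The deny_by_default setting above can be pictured as an allow-list check: anything not explicitly permitted is refused. The following is a minimal sketch of that idea; the Policy type and the capability names are illustrative, not OneClaw's actual API.

```rust
use std::collections::HashSet;

// Hypothetical policy type mirroring the [security] section above.
struct Policy {
    deny_by_default: bool,
    allowed: HashSet<String>,
}

impl Policy {
    fn is_allowed(&self, capability: &str) -> bool {
        if self.allowed.contains(capability) {
            true
        } else {
            // With deny_by_default = true, anything not explicitly
            // allow-listed is refused.
            !self.deny_by_default
        }
    }
}

fn main() {
    let policy = Policy {
        deny_by_default: true,
        allowed: ["memory.read".to_string()].into_iter().collect(),
    };
    println!("memory.read allowed: {}", policy.is_allowed("memory.read"));
    println!("shell.exec allowed: {}", policy.is_allowed("shell.exec"));
}
```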
3. Set API keys (if using cloud providers)

Set environment variables for your chosen provider:
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# Google (Gemini)
export GOOGLE_API_KEY="AIza..."

# Or use the generic key
export ONECLAW_API_KEY="your-key-here"

First AI Interaction

With a provider configured, you can now use the ask command:
oneclaw> ask What is OneClaw?

# OneClaw will route your question to the configured LLM provider
# and return a response based on its training data
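The routing described above pairs with the fallback = ["ollama"] setting from the configuration step: if the primary provider fails, the next one in the list is tried. Here is a minimal sketch of that loop; route_ask and the error strings are illustrative, not OneClaw's actual internals.

```rust
// Try each provider in order (primary first, then fallbacks),
// returning the first successful answer.
fn route_ask(
    question: &str,
    providers: &[&str],
    call: impl Fn(&str, &str) -> Result<String, String>,
) -> Result<String, String> {
    let mut last_err = "no providers configured".to_string();
    for p in providers {
        match call(*p, question) {
            Ok(answer) => return Ok(answer),
            Err(e) => last_err = e, // remember the failure, try the next fallback
        }
    }
    Err(last_err)
}

fn main() {
    // Simulate the primary (anthropic) failing and the fallback succeeding.
    let call = |p: &str, _q: &str| {
        if p == "ollama" {
            Ok("local answer".to_string())
        } else {
            Err(format!("{p}: no API key"))
        }
    };
    let answer = route_ask("What is OneClaw?", &["anthropic", "ollama"], call);
    println!("{answer:?}");
}
```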

Example Session

Here’s a complete example session showing OneClaw’s key features:
# Start OneClaw
$ cargo run --release -p oneclaw-core

oneclaw> status
Runtime Status:
  Security: DefaultSecurity (deny-by-default)
  Memory: SqliteMemory (0 entries)
  Provider: Anthropic (claude-sonnet-4-20250514)
  Uptime: 0.12s

oneclaw> remember OneClaw is a 6-layer AI agent kernel for edge devices

oneclaw> recall agent kernel
Found 1 result:
  [2026-03-02 07:30] OneClaw is a 6-layer AI agent kernel for edge devices

oneclaw> ask What are the benefits of edge AI?
[LLM Response: Benefits of edge AI include low latency, data privacy, 
offline operation, reduced bandwidth costs, and real-time processing...]

oneclaw> providers
Available LLM Providers:
 anthropic (claude-sonnet-4-20250514) - Active
 ollama (llama3.2:3b) - Fallback
 openai - No API key
 deepseek - No API key

oneclaw> metrics
Operational Metrics:
  Messages processed: 4
  LLM calls: 1
  Memory operations: 2
  Average LLM latency: 247ms

oneclaw> exit
Shutting down...

Command Reference

Command          Description
status           System overview with metrics
health           Layer-by-layer health check
remember <text>  Store in memory
recall <query>   Search memory (hybrid FTS5 + vector)
ask <question>   Query LLM with context
providers        List LLM provider status
tools            List available tools
channels         List active channels
help             Show all commands
exit             Graceful shutdown
Semantic Search

To enable semantic memory search with embeddings:
1. Configure embedding provider

Add to your config/default.toml:
[embedding]
provider = "ollama"
model = "nomic-embed-text"
2. Install embedding model (if using Ollama)

ollama pull nomic-embed-text
3. Restart OneClaw

Memories will now be automatically embedded for semantic search using hybrid FTS5 + vector + RRF fusion.
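Reciprocal rank fusion (RRF), named above, merges the keyword (FTS5) ranking and the vector-similarity ranking by scoring each document as the sum of 1/(k + rank) across both lists. The sketch below shows the general technique; the constant k = 60 is the common default from the RRF literature, and OneClaw's actual constant and implementation are not documented here.

```rust
use std::collections::HashMap;

// Fuse several ranked result lists with reciprocal rank fusion.
// A document ranked highly in either list accumulates a large score.
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<String> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // rank is 0-based, so the top hit contributes 1 / (k + 1).
            *scores.entry(*doc).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(&str, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused.into_iter().map(|(d, _)| d.to_string()).collect()
}

fn main() {
    let keyword = vec!["mem-a", "mem-b", "mem-c"];  // FTS5 order
    let semantic = vec!["mem-b", "mem-c", "mem-a"]; // vector-similarity order
    // mem-b ranks near the top of both lists, so it wins the fused ranking.
    println!("{:?}", rrf_fuse(&[keyword, semantic], 60.0));
}
```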

Next Steps

Core Concepts

Learn about OneClaw’s 6-layer architecture

Configuration

Deep dive into configuration options

Deployment

Deploy to Raspberry Pi and other edge devices

API Reference

Explore the complete API documentation
