
What is nanobot?

nanobot is an ultra-lightweight personal AI assistant framework that delivers core agent functionality with 99% fewer lines of code than traditional agent frameworks like OpenClaw. With just ~4,000 lines of core agent code, nanobot proves that powerful AI assistants don’t need to be complex. It’s designed for researchers, developers, and AI enthusiasts who want a clean, readable codebase that’s easy to understand, modify, and extend.

Quick Start

Get up and running with nanobot in under 2 minutes

Why nanobot?

Traditional AI agent frameworks often come with tens of thousands of lines of code, making them difficult to understand, debug, and customize. nanobot takes a different approach:

Ultra-Lightweight

Just ~4,000 lines of core agent code — 99% smaller than comparable frameworks. Run bash core_agent_lines.sh in the repo to verify anytime.

Research-Ready

Clean, readable code that’s easy to understand, modify, and extend. Perfect for academic research and experimentation.

Lightning Fast

Minimal footprint means faster startup, lower resource usage, and quicker iterations during development.

Production-Ready

Despite its simplicity, nanobot is battle-tested with 24/7 deployments across multiple chat platforms.

Key Differentiators

Minimalist Philosophy

Every line of code in nanobot serves a purpose. No bloat, no unnecessary abstractions — just the essential components needed for a powerful AI assistant:
  • Core agent loop: LLM interactions and tool execution
  • Built-in tools: File system, shell, web search, scheduling
  • Memory system: Persistent context across conversations
  • Multi-channel support: Telegram, Discord, WhatsApp, Slack, and more
  • MCP integration: Connect external tool servers seamlessly

Developer Experience

nanobot is built with developers in mind:
# Adding a new LLM provider takes just 2 steps:
# 1. Add to registry (nanobot/providers/registry.py)
ProviderSpec(
    name="myprovider",
    keywords=("myprovider", "mymodel"),
    env_key="MYPROVIDER_API_KEY",
    litellm_prefix="myprovider",
)

# 2. Add config field (nanobot/config/schema.py)
class ProvidersConfig(BaseModel):
    myprovider: ProviderConfig = ProviderConfig()
That’s it! No complex registration systems or scattered configuration files.
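To make the registry idea concrete, here is a minimal, self-contained sketch of keyword-based provider resolution (the `REGISTRY` list and `resolve` helper are illustrative assumptions, not nanobot's actual internals):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProviderSpec:
    name: str
    keywords: tuple        # substrings that identify this provider's models
    env_key: str           # environment variable holding the API key
    litellm_prefix: str    # prefix prepended for LiteLLM routing

# Hypothetical registry entries for illustration
REGISTRY = [
    ProviderSpec("openai", ("gpt-",), "OPENAI_API_KEY", "openai"),
    ProviderSpec("anthropic", ("claude",), "ANTHROPIC_API_KEY", "anthropic"),
    ProviderSpec("myprovider", ("myprovider", "mymodel"),
                 "MYPROVIDER_API_KEY", "myprovider"),
]


def resolve(model: str) -> ProviderSpec:
    """Pick the first spec whose keyword appears in the model name."""
    for spec in REGISTRY:
        if any(kw in model.lower() for kw in spec.keywords):
            return spec
    raise ValueError(f"no provider matches model {model!r}")


print(resolve("claude-sonnet-4").name)  # anthropic
```

Because resolution is a simple substring match over one list, adding a provider really is just a matter of appending one entry.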

Real-World Capabilities

Despite its compact size, nanobot handles real production workloads:

24/7 Market Analysis

Real-time data gathering, analysis, and trend detection for financial markets and news.

Full-Stack Development

Code generation, debugging, deployment, and maintenance across multiple languages and frameworks.

Smart Scheduling

Natural language task scheduling with cron integration and proactive reminders.
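As a rough illustration of the cron side (a standalone sketch, not nanobot's scheduler), a five-field expression can be matched against a timestamp like this:

```python
from datetime import datetime


def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a 5-field cron expression (min hour dom month dow) against dt.

    Supports only '*' and plain integers, for illustration. Note the
    simplification: standard cron uses Sun=0 for day-of-week, while
    Python's weekday() uses Mon=0.
    """
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))


# "Every morning at 8:00"
print(cron_matches("0 8 * * *", datetime(2025, 1, 6, 8, 0)))  # True
```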

Knowledge Management

Persistent memory system that learns from conversations and recalls context across sessions.

Architecture Overview

nanobot’s architecture is intentionally simple, with clear separation of concerns:
nanobot/
├── agent/          # Core agent logic
│   ├── loop.py     # LLM ↔ tool execution loop
│   ├── context.py  # Prompt builder with skills & memory
│   ├── memory.py   # Persistent conversation memory
│   ├── skills.py   # Dynamic skill loading system
│   ├── subagent.py # Background task execution
│   └── tools/      # Built-in tools (filesystem, shell, web, etc.)
├── channels/       # Chat platform integrations
│   ├── telegram.py # Telegram bot
│   ├── discord.py  # Discord bot
│   ├── whatsapp.py # WhatsApp integration
│   └── ...         # Slack, Email, Feishu, etc.
├── providers/      # LLM provider integrations
│   ├── registry.py # Single source of truth for providers
│   └── ...         # OpenRouter, Anthropic, OpenAI, etc.
├── bus/            # Message routing between channels and agent
├── cron/           # Scheduled task execution
├── heartbeat/      # Proactive periodic tasks
└── config/         # Configuration management

How It Works

1. Message Routing

User messages from any channel (Telegram, CLI, Discord, etc.) are routed through the message bus to the agent.
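A minimal sketch of such a bus, assuming an in-process `asyncio.Queue` design (the class and field names here are illustrative, not nanobot's actual API):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class InboundMessage:
    channel: str   # e.g. "telegram", "discord", "cli"
    sender: str
    text: str


class MessageBus:
    """Tiny in-process bus: channels publish, the agent consumes."""

    def __init__(self):
        self.inbound: asyncio.Queue = asyncio.Queue()

    async def publish(self, msg: InboundMessage):
        await self.inbound.put(msg)

    async def consume(self) -> InboundMessage:
        return await self.inbound.get()


async def demo():
    bus = MessageBus()
    await bus.publish(InboundMessage("telegram", "alice", "hello"))
    msg = await bus.consume()
    return f"[{msg.channel}] {msg.sender}: {msg.text}"


print(asyncio.run(demo()))  # [telegram] alice: hello
```

Decoupling channels from the agent through a queue is what lets one agent serve many platforms at once.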

2. Context Building

The context builder assembles the prompt with:
  • Conversation history (sliding window)
  • Persistent memory (MEMORY.md)
  • Relevant skills (injected from skills/ directory)
  • Available tools (filesystem, shell, web, MCP servers)
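A simplified sketch of how those four sources might be assembled (the section names and 10-turn window are assumptions for illustration, not nanobot's real prompt format):

```python
def build_prompt(history: list, memory_md: str,
                 skills: list, tools: list) -> str:
    """Assemble a prompt from history, memory, skills, and tools."""
    sections = []
    if memory_md:
        sections.append("## Memory\n" + memory_md)
    if skills:
        sections.append("## Skills\n" + "\n".join(skills))
    sections.append("## Tools\n" + ", ".join(tools))
    # Sliding window: keep only the most recent turns
    sections.append("## Conversation\n" + "\n".join(history[-10:]))
    return "\n\n".join(sections)
```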

3. LLM Processing

The agent sends the context to the configured LLM provider (OpenRouter, Anthropic, OpenAI, etc.) and receives a response with potential tool calls.

4. Tool Execution

If the LLM requests tool calls, the agent executes them and feeds results back for up to max_iterations rounds (default: 40).

5. Response Delivery

The final response is routed back through the message bus to the original channel and delivered to the user.

This simple loop handles everything from basic Q&A to complex multi-step tasks like “deploy my app to production” or “analyze the latest crypto trends and send me a report every morning.”
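The five steps above can be sketched as a single loop (a simplified illustration; `call_llm` and the `tools` mapping stand in for nanobot's real provider and tool interfaces):

```python
MAX_ITERATIONS = 40  # mirrors the max_iterations default described above


def run_agent(prompt: str, call_llm, tools: dict) -> str:
    """Minimal agent loop: call the LLM, execute requested tools, repeat."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(MAX_ITERATIONS):
        reply = call_llm(messages)
        calls = reply.get("tool_calls") or []
        if not calls:
            return reply["content"]  # final answer, no more tool work
        messages.append({"role": "assistant", "tool_calls": calls})
        for call in calls:
            # Execute the requested tool and feed the result back
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    return "stopped after max_iterations"
```

The cap on iterations is the safety valve: a runaway tool-calling chain terminates instead of looping forever.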

Use Cases

nanobot excels at being your personal AI assistant across multiple domains:

Research & Development

  • Code exploration: Understand new codebases quickly
  • Experimentation: Test new agent architectures and tools
  • Prototyping: Build custom AI assistants for specific tasks

Personal Productivity

  • Task automation: Schedule recurring tasks with natural language
  • Information gathering: Web search, summarization, and analysis
  • Multi-platform access: Same assistant across Telegram, Discord, CLI, etc.

Team Collaboration

  • Shared knowledge base: Team memory accessible via chat
  • DevOps automation: Deploy, monitor, and maintain services
  • Agent social networks: Connect to Moltbook, ClawdChat, and other agent communities
nanobot’s minimal codebase makes it ideal for academic papers, tutorials, and educational content about AI agents.

What’s Next?

Installation

Install nanobot with pip, uv, or from source

Quick Start

Get your first conversation running in 2 minutes

Configuration

Configure providers, channels, and tools

GitHub Repository

Star the repo and contribute to the project

License: nanobot is MIT licensed — free to use, modify, and redistribute for education, research, technical exchange, and beyond.
