
Architecture

NemoClaw has two main components: a TypeScript plugin that integrates with the OpenClaw CLI, and a Python blueprint that orchestrates OpenShell resources.

NemoClaw plugin

The plugin is a thin TypeScript package that registers commands under openclaw nemoclaw. It runs in-process with the OpenClaw gateway and handles user-facing CLI interactions.
nemoclaw/
├── src/
│   ├── index.ts                    Plugin entry — registers all commands
│   ├── cli.ts                      Commander.js subcommand wiring
│   ├── commands/
│   │   ├── launch.ts               Fresh install into OpenShell
│   │   ├── connect.ts              Interactive shell into sandbox
│   │   ├── status.ts               Blueprint run state + sandbox health
│   │   ├── logs.ts                 Stream blueprint and sandbox logs
│   │   ├── migrate.ts              Migrate host OpenClaw into sandbox
│   │   ├── eject.ts                Rollback from OpenShell to host
│   │   ├── onboard.ts              Inference endpoint setup wizard
│   │   └── slash.ts                /nemoclaw chat command handler
│   ├── blueprint/
│   │   ├── resolve.ts              Version resolution, cache management
│   │   ├── fetch.ts                Download blueprint from OCI registry
│   │   ├── verify.ts               Digest verification, compatibility checks
│   │   ├── exec.ts                 Subprocess execution of blueprint runner
│   │   └── state.ts                Persistent state (run IDs)
│   └── onboard/
│       ├── config.ts               Onboard config load/save
│       ├── prompt.ts               Interactive prompts
│       └── validate.ts             API key validation
├── openclaw.plugin.json            Plugin manifest
└── package.json                    Commands declared under openclaw.extensions

Plugin config schema

The plugin manifest openclaw.plugin.json defines the configuration schema for the NemoClaw plugin:
openclaw.plugin.json
{
  "id": "nemoclaw",
  "name": "NemoClaw",
  "version": "0.1.0",
  "description": "Migrate and run OpenClaw inside OpenShell with optional NIM-backed inference",
  "configSchema": {
    "type": "object",
    "properties": {
      "blueprintVersion": {
        "type": "string",
        "description": "Pinned blueprint artifact version (e.g., '0.1.0'). Omit for latest.",
        "default": "latest"
      },
      "blueprintRegistry": {
        "type": "string",
        "description": "OCI registry or GitHub release URL for blueprint artifacts",
        "default": "ghcr.io/nvidia/nemoclaw-blueprint"
      },
      "sandboxName": {
        "type": "string",
        "description": "Name for the OpenClaw sandbox in OpenShell",
        "default": "openclaw"
      },
      "inferenceProvider": {
        "type": "string",
        "description": "Default inference provider type (nvidia, vllm, openai-compatible)",
        "default": "nvidia"
      }
    }
  }
}
Field               Type    Default                            Description
blueprintVersion    string  latest                             Pinned blueprint artifact version. Omit to always use the latest release.
blueprintRegistry   string  ghcr.io/nvidia/nemoclaw-blueprint  OCI registry or GitHub release URL for blueprint artifacts.
sandboxName         string  openclaw                           Name for the OpenClaw sandbox in OpenShell.
inferenceProvider   string  nvidia                             Default inference provider type. One of nvidia, vllm, or openai-compatible.
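
As a rough sketch of how these defaults behave, the snippet below merges a partial user config with the schema defaults. The merge logic is illustrative only; OpenClaw's actual config resolution may differ.

```python
# Sketch: fill unset NemoClaw config fields from the schema defaults.
# The schema fragment mirrors openclaw.plugin.json above; resolve_config
# is a hypothetical helper, not OpenClaw's real implementation.
CONFIG_SCHEMA = {
    "blueprintVersion": {"type": "string", "default": "latest"},
    "blueprintRegistry": {"type": "string", "default": "ghcr.io/nvidia/nemoclaw-blueprint"},
    "sandboxName": {"type": "string", "default": "openclaw"},
    "inferenceProvider": {"type": "string", "default": "nvidia"},
}

def resolve_config(user_config: dict) -> dict:
    """Reject unknown keys and fill missing fields from schema defaults."""
    unknown = set(user_config) - set(CONFIG_SCHEMA)
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return {
        key: user_config.get(key, spec["default"])
        for key, spec in CONFIG_SCHEMA.items()
    }

print(resolve_config({"inferenceProvider": "vllm"}))
```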

NemoClaw blueprint

The blueprint is a versioned Python artifact with its own release stream. The plugin resolves, verifies, and executes the blueprint as a subprocess. The blueprint drives all interactions with the OpenShell CLI.
nemoclaw-blueprint/
├── blueprint.yaml                  Manifest — version, profiles, compatibility
├── orchestrator/
│   ├── __init__.py
│   └── runner.py                   CLI runner — plan / apply / status / rollback
├── policies/
│   └── openclaw-sandbox.yaml       Strict baseline network + filesystem policy
├── migrations/
│   └── snapshot.py                 Migration snapshot utilities
├── Makefile
└── pyproject.toml

Blueprint manifest

The blueprint.yaml manifest declares the blueprint version, compatibility constraints (minimum OpenShell and OpenClaw versions), available profiles, and component configuration:
blueprint.yaml
version: "0.1.0"
min_openshell_version: "0.1.0"
min_openclaw_version: "2026.3.0"
digest: ""  # Computed at release time

profiles:
  - default
  - ncp
  - nim-local
  - vllm

components:
  sandbox:
    image: "ghcr.io/nvidia/openshell-community/sandboxes/openclaw:latest"
    name: "openclaw"
    forward_ports:
      - 18789

  inference:
    profiles:
      default:
        provider_type: "nvidia"
        provider_name: "nvidia-inference"
        endpoint: "https://integrate.api.nvidia.com/v1"
        model: "nvidia/nemotron-3-super-120b-a12b"
      ncp:
        provider_type: "nvidia"
        provider_name: "nvidia-ncp"
        endpoint: ""
        model: "nvidia/nemotron-3-super-120b-a12b"
        credential_env: "NVIDIA_API_KEY"
        dynamic_endpoint: true
      nim-local:
        provider_type: "openai"
        provider_name: "nim-local"
        endpoint: "http://nim-service.local:8000/v1"
        model: "nvidia/nemotron-3-super-120b-a12b"
        credential_env: "NIM_API_KEY"
      vllm:
        provider_type: "openai"
        provider_name: "vllm-local"
        endpoint: "http://localhost:8000/v1"
        model: "nvidia/nemotron-3-nano-30b-a3b"
        credential_env: "OPENAI_API_KEY"
        credential_default: "dummy"
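
Once the manifest is parsed (for example with PyYAML's safe_load), selecting an inference profile is a dictionary lookup. A minimal sketch over an abbreviated, pre-parsed copy of the manifest above; the helper name is illustrative, not part of the runner's API.

```python
# Abbreviated form of blueprint.yaml after YAML parsing (two of the
# four profiles shown; values copied from the manifest above).
MANIFEST = {
    "version": "0.1.0",
    "components": {
        "inference": {
            "profiles": {
                "default": {
                    "provider_type": "nvidia",
                    "endpoint": "https://integrate.api.nvidia.com/v1",
                    "model": "nvidia/nemotron-3-super-120b-a12b",
                },
                "vllm": {
                    "provider_type": "openai",
                    "endpoint": "http://localhost:8000/v1",
                    "model": "nvidia/nemotron-3-nano-30b-a3b",
                    "credential_env": "OPENAI_API_KEY",
                },
            }
        }
    },
}

def select_profile(manifest: dict, name: str) -> dict:
    """Return the inference settings for a named profile (hypothetical helper)."""
    profiles = manifest["components"]["inference"]["profiles"]
    if name not in profiles:
        raise KeyError(f"unknown profile: {name!r}")
    return profiles[name]

print(select_profile(MANIFEST, "vllm")["endpoint"])  # http://localhost:8000/v1
```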

Blueprint lifecycle

Every launch, migrate, or apply operation follows this five-step lifecycle:
1. Resolve

The plugin locates the blueprint artifact and checks the version against min_openshell_version and min_openclaw_version constraints in blueprint.yaml. If blueprintVersion is latest, the most recent published artifact is fetched from the configured OCI registry (blueprintRegistry).
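
The compatibility check amounts to comparing dotted version strings against the minimums in blueprint.yaml. A hedged sketch; the helper names are illustrative, not the plugin's actual code.

```python
def _vtuple(version: str) -> tuple:
    """'2026.3.0' -> (2026, 3, 0) for component-wise comparison."""
    return tuple(int(part) for part in version.split("."))

def is_compatible(openshell_version: str, openclaw_version: str,
                  min_openshell: str = "0.1.0",
                  min_openclaw: str = "2026.3.0") -> bool:
    """True when both host versions meet the blueprint.yaml minimums."""
    return (_vtuple(openshell_version) >= _vtuple(min_openshell)
            and _vtuple(openclaw_version) >= _vtuple(min_openclaw))

print(is_compatible("0.2.1", "2026.4.0"))  # True
print(is_compatible("0.2.1", "2026.2.9"))  # False
```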
2. Verify digest

The plugin checks the artifact digest against the expected value stored in blueprint.yaml. This ensures the blueprint has not been tampered with between download and execution.
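
A digest check of this kind can be sketched with hashlib. The sha256:<hex> digest format and the function names are assumptions for illustration, not the plugin's actual implementation.

```python
import hashlib
import tempfile

def compute_digest(artifact_path: str) -> str:
    """sha256 of the artifact, streamed in chunks; 'sha256:<hex>' form is assumed."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return f"sha256:{h.hexdigest()}"

def verify_digest(artifact_path: str, expected: str) -> bool:
    """Compare the computed digest with the value pinned in blueprint.yaml."""
    return compute_digest(artifact_path) == expected

# Demo against a known payload.
with tempfile.NamedTemporaryFile(delete=False, suffix=".tar") as tmp:
    tmp.write(b"blueprint bytes")
    artifact = tmp.name

expected = "sha256:" + hashlib.sha256(b"blueprint bytes").hexdigest()
print(verify_digest(artifact, expected))  # True
```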
3. Plan

The runner (orchestrator/runner.py) determines what OpenShell resources to create or update: the gateway, inference providers, sandbox, inference route, and network policy. The plan is emitted as JSON and stored under ~/.nemoclaw/state/runs/<run-id>/plan.json.
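
The plan schema is internal to the runner. Purely as an illustration, a plan file with one entry per resource might be written like this; the field names are hypothetical, and a temporary directory stands in for ~/.nemoclaw/state.

```python
import json
import pathlib
import tempfile

# Hypothetical plan shape -- one step per OpenShell resource. The real
# runner's schema is not published.
plan = {
    "run_id": "nc-20260318-143012-a1b2c3d4",
    "steps": [
        {"resource": "sandbox", "action": "create", "name": "openclaw"},
        {"resource": "provider", "action": "create", "name": "nvidia-inference"},
        {"resource": "inference", "action": "set", "route": "nvidia-inference"},
    ],
}

# Real location: ~/.nemoclaw/state/runs/<run-id>/plan.json
state_dir = pathlib.Path(tempfile.mkdtemp()) / "state" / "runs" / plan["run_id"]
state_dir.mkdir(parents=True)
plan_path = state_dir / "plan.json"
plan_path.write_text(json.dumps(plan, indent=2))
print(plan_path.exists())  # True
```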
4. Apply

The runner executes the plan by calling openshell CLI commands: openshell sandbox create, openshell provider create, and openshell inference set. Progress is reported over stdout as PROGRESS:<0-100>:<label> lines.
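
The progress protocol can be sketched as follows. The step labels and percentage scheme are illustrative; the real runner invokes the openshell CLI between updates.

```python
def apply_steps(steps):
    """Run each planned step, emitting PROGRESS:<0-100>:<label> lines on stdout."""
    lines = []
    total = len(steps)
    for i, label in enumerate(steps, start=1):
        # ... invoke the corresponding `openshell` CLI command here ...
        percent = round(i * 100 / total)
        line = f"PROGRESS:{percent}:{label}"
        lines.append(line)
        print(line, flush=True)  # the TypeScript plugin reads these from stdout
    return lines

progress = apply_steps(["sandbox create", "provider create", "inference set"])
```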
5. Status

The runner reports current state from the persisted plan. The run identifier (e.g., nc-20260318-143012-a1b2c3d4) is emitted as a RUN_ID: line and stored in the plugin’s local state for future reference by logs and eject.

Blueprint runner protocol

The TypeScript plugin communicates with the Python runner via stdout lines:
Line format                 Meaning
PROGRESS:<0-100>:<label>    Progress update for the current operation
RUN_ID:<id>                 Run identifier (e.g., nc-20260318-143012-a1b2c3d4)
Exit code 0                 Success
Exit code non-zero          Failure
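
The real consumer of these lines is the TypeScript plugin; as a language-neutral illustration, a classifier for the protocol might look like this in Python (function name is illustrative):

```python
def parse_runner_line(line: str):
    """Classify one stdout line from the blueprint runner."""
    if line.startswith("PROGRESS:"):
        _, percent, label = line.split(":", 2)
        return ("progress", int(percent), label)
    if line.startswith("RUN_ID:"):
        return ("run_id", line.split(":", 1)[1])
    return ("log", line)  # anything else is passed through as plain output

print(parse_runner_line("PROGRESS:40:Creating sandbox"))
print(parse_runner_line("RUN_ID:nc-20260318-143012-a1b2c3d4"))
```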

Sandbox environment

The sandbox runs the ghcr.io/nvidia/openshell-community/sandboxes/openclaw container image. Inside the sandbox:
  • OpenClaw runs with the NemoClaw plugin pre-installed.
  • Inference calls are routed through the OpenShell gateway to the configured provider.
  • Network egress is restricted by the baseline policy in openclaw-sandbox.yaml.
  • Filesystem access is confined to /sandbox and /tmp for read-write, with system paths read-only.
  • Port 18789 is forwarded from the host to support tool integrations.
When openclaw nemoclaw status runs inside an active sandbox, it detects the sandbox context by checking for /sandbox/.openclaw or /sandbox/.nemoclaw. Host-level openshell sandbox status commands are not available from within the sandbox.
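
The context check described above reduces to testing for marker files. A minimal sketch: the marker paths come from this page, while the function name is illustrative.

```python
import os

# Marker files that identify the NemoClaw sandbox context.
SANDBOX_MARKERS = ("/sandbox/.openclaw", "/sandbox/.nemoclaw")

def in_sandbox(markers=SANDBOX_MARKERS) -> bool:
    """True when any sandbox marker path exists on the local filesystem."""
    return any(os.path.exists(path) for path in markers)
```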

Inference routing

Inference requests from the agent never leave the sandbox directly. OpenShell intercepts them and routes them to the configured provider:
Agent (sandbox)  ──▶  OpenShell gateway  ──▶  NVIDIA cloud (integrate.api.nvidia.com)
For other provider types, the routing target changes but the path through the OpenShell gateway remains the same:
Provider type       Routing target
nvidia (default)    https://integrate.api.nvidia.com/v1
ncp                 NCP partner endpoint (configurable)
nim-local           http://nim-service.local:8000/v1
vllm                http://localhost:8000/v1 (via host gateway)
ollama              http://host.openshell.internal:11434/v1
See Inference Profiles for provider configuration details.
