# Architecture
NemoClaw has two main components: a TypeScript plugin that integrates with the OpenClaw CLI, and a Python blueprint that orchestrates OpenShell resources.

## NemoClaw plugin
The plugin is a thin TypeScript package that registers commands under `openclaw nemoclaw`. It runs in-process with the OpenClaw gateway and handles user-facing CLI interactions.
### Plugin config schema
The plugin manifest `openclaw.plugin.json` defines the configuration schema for the NemoClaw plugin:
| Field | Type | Default | Description |
|---|---|---|---|
| `blueprintVersion` | string | `latest` | Pinned blueprint artifact version. Omit to always use the latest release. |
| `blueprintRegistry` | string | `ghcr.io/nvidia/nemoclaw-blueprint` | OCI registry or GitHub release URL for blueprint artifacts. |
| `sandboxName` | string | `openclaw` | Name for the OpenClaw sandbox in OpenShell. |
| `inferenceProvider` | string | `nvidia` | Default inference provider type. One of `nvidia`, `vllm`, or `openai-compatible`. |
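Putting the schema together, a plugin configuration might look like the following sketch. Only the four field names and their defaults come from the table above; the surrounding manifest structure (the top-level `name` and `config` keys) is an assumption for illustration:

```json
{
  "name": "nemoclaw",
  "config": {
    "blueprintVersion": "latest",
    "blueprintRegistry": "ghcr.io/nvidia/nemoclaw-blueprint",
    "sandboxName": "openclaw",
    "inferenceProvider": "nvidia"
  }
}
```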
## NemoClaw blueprint
The blueprint is a versioned Python artifact with its own release stream. The plugin resolves, verifies, and executes the blueprint as a subprocess. The blueprint drives all interactions with the OpenShell CLI.

### Blueprint manifest
The `blueprint.yaml` manifest declares the blueprint version, version constraints, available profiles, and component configuration:
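As an illustrative sketch of such a manifest: only `min_openshell_version` and `min_openclaw_version` are key names confirmed elsewhere in this document; every other key, and all values, are assumptions:

```yaml
# Illustrative sketch only; not the published schema.
version: 1.4.2                   # blueprint artifact version (assumed key)
min_openshell_version: "0.9.0"   # constraints checked during the Resolve step
min_openclaw_version: "2.1.0"
profiles:                        # available profiles (names assumed)
  - default
components:                      # per-component configuration (shape assumed)
  gateway:
    enabled: true
```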
### Blueprint lifecycle
Every launch, migrate, or apply operation follows this four-step lifecycle:

#### Resolve
The plugin locates the blueprint artifact and checks the version against the `min_openshell_version` and `min_openclaw_version` constraints in `blueprint.yaml`. If `blueprintVersion` is `latest`, the most recent published artifact is fetched from the configured OCI registry (`blueprintRegistry`).
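The constraint check amounts to a dotted-version comparison; a minimal sketch (helper names and the parsing approach are illustrative, not NemoClaw's actual resolver):

```python
def satisfies(installed: str, minimum: str) -> bool:
    """True if a dotted version string meets the minimum constraint."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(minimum)

def check_constraints(openshell_ver: str, openclaw_ver: str,
                      manifest: dict) -> bool:
    """Check both min_* constraints declared in blueprint.yaml."""
    return (satisfies(openshell_ver, manifest["min_openshell_version"])
            and satisfies(openclaw_ver, manifest["min_openclaw_version"]))
```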
#### Verify digest

The plugin checks the artifact digest against the expected value stored in `blueprint.yaml`. This ensures the blueprint has not been tampered with between download and execution.
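A sketch of what this verification step involves (the function name and the choice of SHA-256 are assumptions; the document does not specify the digest algorithm):

```python
import hashlib
import hmac

def verify_digest(artifact_bytes: bytes, expected_hex: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the expected digest."""
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(actual, expected_hex.lower())
```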
#### Plan

The runner (`orchestrator/runner.py`) determines which OpenShell resources to create or update: the gateway, inference providers, sandbox, inference route, and network policy. The plan is emitted as JSON and stored under `~/.nemoclaw/state/runs/<run-id>/plan.json`.
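For illustration, a stored plan could look like this sketch. The JSON shape and field names are assumptions; only the resource kinds and the run-id format come from this document:

```json
{
  "run_id": "nc-20260318-143012-a1b2c3d4",
  "actions": [
    { "resource": "gateway", "op": "create" },
    { "resource": "inference-provider", "op": "create", "type": "nvidia" },
    { "resource": "sandbox", "op": "create", "name": "openclaw" },
    { "resource": "inference-route", "op": "update" },
    { "resource": "network-policy", "op": "create" }
  ]
}
```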
#### Apply

The runner executes the plan by calling `openshell` CLI commands: `openshell sandbox create`, `openshell provider create`, and `openshell inference set`. Progress is reported over stdout as `PROGRESS:<0-100>:<label>` lines.

### Blueprint runner protocol
The TypeScript plugin communicates with the Python runner via stdout lines:

| Line format | Meaning |
|---|---|
| `PROGRESS:<0-100>:<label>` | Progress update for the current operation |
| `RUN_ID:<id>` | Run identifier (e.g., `nc-20260318-143012-a1b2c3d4`) |
| Exit code 0 | Success |
| Exit code non-zero | Failure |
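The line grammar above can be parsed with two regular expressions. This is a sketch of the consuming side only; the real consumer is the TypeScript plugin, and the function name here is illustrative:

```python
import re

# Patterns for the two structured line formats in the runner protocol.
PROGRESS_RE = re.compile(r"^PROGRESS:(\d{1,3}):(.*)$")
RUN_ID_RE = re.compile(r"^RUN_ID:(\S+)$")

def parse_runner_line(line: str):
    """Classify one stdout line emitted by the blueprint runner."""
    m = PROGRESS_RE.match(line)
    if m and 0 <= int(m.group(1)) <= 100:
        return ("progress", int(m.group(1)), m.group(2))
    m = RUN_ID_RE.match(line)
    if m:
        return ("run_id", m.group(1))
    return ("log", line)  # anything else is treated as plain output
```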
## Sandbox environment
The sandbox runs the `ghcr.io/nvidia/openshell-community/sandboxes/openclaw` container image.
Inside the sandbox:
- OpenClaw runs with the NemoClaw plugin pre-installed.
- Inference calls are routed through the OpenShell gateway to the configured provider.
- Network egress is restricted by the baseline policy in `openclaw-sandbox.yaml`.
- Filesystem access is confined to `/sandbox` and `/tmp` for read-write, with system paths read-only.
- Port `18789` is forwarded from the host to support tool integrations.
When `openclaw nemoclaw status` runs inside an active sandbox, it detects the sandbox context by checking for `/sandbox/.openclaw` or `/sandbox/.nemoclaw`. Host-level `openshell sandbox status` commands are not available from within the sandbox.
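The detection described above amounts to a marker-file check. A minimal sketch (the real plugin is TypeScript; the function name and the `root` parameter are illustrative):

```python
from pathlib import Path

# Marker files whose presence indicates an active NemoClaw sandbox
# (paths taken from the text above).
SANDBOX_MARKERS = ("/sandbox/.openclaw", "/sandbox/.nemoclaw")

def in_sandbox(root: str = "/") -> bool:
    """Return True if either sandbox marker file exists under `root`."""
    return any(Path(root, m.lstrip("/")).exists() for m in SANDBOX_MARKERS)
```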
### Inference routing

Inference requests from the agent never leave the sandbox directly. OpenShell intercepts them and routes to the configured provider:

| Provider type | Routing target |
|---|---|
| `nvidia` (default) | `https://integrate.api.nvidia.com/v1` |
| `ncp` | NCP partner endpoint (configurable) |
| `nim-local` | `http://nim-service.local:8000/v1` |
| `vllm` | `http://localhost:8000/v1` (via host gateway) |
| `ollama` | `http://host.openshell.internal:11434/v1` |