Follow these steps to go from zero to a running sandboxed OpenClaw agent. The installer handles Node.js and NemoClaw setup, then walks you through the guided onboard wizard.
NemoClaw is alpha software. Interfaces, APIs, and behavior may change without notice as the project evolves. NemoClaw currently requires a fresh installation of OpenClaw.

Prerequisites

Verify that your environment meets the hardware and software requirements before you begin.

Hardware

| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
The sandbox image is approximately 2.4 GB compressed. During image push, the Docker daemon, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM this combined usage can trigger the OOM killer. If you cannot add memory, configuring at least 8 GB of swap can work around the issue at the cost of slower performance.
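If you take the swap route, the following is a minimal sketch for Ubuntu; the 8 GB size and the `/swapfile` path are examples, so adjust them to your disk layout.

```shell
# Create and enable an 8 GB swap file (requires root).
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swap file across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Confirm the new swap is active.
free -h
```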

Software

| Dependency | Version requirement |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 20 or later (22 recommended) |
| npm | 10 or later |
| Docker | Installed and running |
| OpenShell | Installed |
Install OpenShell before running the NemoClaw installer. The installer will install Node.js automatically if it is not present, but OpenShell must already be available.
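You can sanity-check the toolchain from a shell before launching the installer. The sketch below only tests whether each tool is on your `PATH`; the `openshell` binary name is an assumption, so substitute whatever command your OpenShell install provides.

```shell
# Report whether each required tool is on PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}
check node      # want v20+; verify with: node --version
check npm       # want 10+
check docker    # the daemon must also be running: docker info
check openshell # binary name is an assumption
```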

Install and onboard

Step 1: Run the installer

Download and execute the NemoClaw installer script. The script installs Node.js via nvm if it is not already present, installs the nemoclaw CLI, and then launches the interactive onboard wizard.
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
The installer runs through the following stages:
[INFO]  === NemoClaw Installer ===
[INFO]  Node.js found: v22.x.x
[INFO]  Runtime OK: Node.js v22.x.x, npm 10.x.x
[INFO]  Installing NemoClaw from npm…
[INFO]  Verified: nemoclaw is available at /usr/local/bin/nemoclaw
[INFO]  Running nemoclaw onboard…
If Node.js was installed via nvm, the installer will print instructions to reload your shell profile before nemoclaw is on your PATH. Follow those instructions or open a new terminal before continuing.
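If you would rather not open a new terminal, you can load nvm into the current shell using nvm's standard loader snippet (`$HOME/.nvm` is nvm's default `NVM_DIR`):

```shell
# Load nvm into the current shell so node and nemoclaw become visible.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# Check that the CLI now resolves.
command -v nemoclaw || echo "nemoclaw not on PATH yet; reload your profile"
```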
Step 2: Complete the onboard wizard

After installation, the onboard wizard starts automatically. It configures the inference endpoint, API credential, and model for your sandbox.

Step 1 — Select your inference endpoint:
NemoClaw Onboarding
-------------------
? Select your inference endpoint:
> NVIDIA Build (build.nvidia.com)   recommended — zero infra, free credits
  NVIDIA Cloud Partner (NCP)        dedicated capacity, SLA-backed
Select NVIDIA Build to get started immediately using free credits from build.nvidia.com.

Step 2 — Enter your NVIDIA API key:
Get an API key from: https://build.nvidia.com/settings/api-keys
? Enter your NVIDIA API key: **********************
Get your API key from build.nvidia.com/settings/api-keys.

Step 3 — Select a model:
? Select your primary model:
> Nemotron 3 Super 120B (nvidia/nemotron-3-super-120b-a12b)
  Nemotron Ultra 253B (nvidia/llama-3.1-nemotron-ultra-253b-v1)
  Nemotron Super 49B v1.5 (nvidia/llama-3.3-nemotron-super-49b-v1.5)
  Nemotron 3 Nano 30B (nvidia/nemotron-3-nano-30b-a3b)
Step 4 — Review and confirm:
Configuration summary:
  Endpoint:    build (https://integrate.api.nvidia.com/v1)
  Model:       nvidia/nemotron-3-super-120b-a12b
  API Key:     nvapi-****...****
  Credential:  $NVIDIA_API_KEY
  Profile:     default
  Provider:    nvidia-nim

? Apply this configuration? Yes
Step 3: Verify installation

When the wizard completes, the output confirms the running environment:
──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

[INFO]  === Installation complete ===
The sandbox is now running with Landlock, seccomp, and network namespace isolation active.
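In scripts, you can poll the documented status command until the sandbox reports ready. The retry loop below is only a sketch; the exact output of `nemoclaw my-assistant status` is not shown here.

```shell
# Retry the documented status check a few times before giving up.
for attempt in 1 2 3; do
  if nemoclaw my-assistant status 2>/dev/null; then
    echo "sandbox is up"
    break
  fi
  echo "attempt $attempt: not ready yet"
  sleep 2
done
```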

Connect to the sandbox

Step 1: Open a shell in the sandbox

Run the following command from your host to open an interactive shell session inside the sandbox:
nemoclaw my-assistant connect
You will see the connection banner and be dropped into the sandbox shell:
Connecting to OpenClaw sandbox: my-assistant
You will be inside the sandbox. Run 'openclaw' commands normally.
Type 'exit' to return to your host shell.

sandbox@my-assistant:~$
Step 2: Send a test message using the OpenClaw TUI

The OpenClaw TUI opens an interactive chat interface. From inside the sandbox, run:
openclaw tui
Type a message and press Enter to send it to the agent. The TUI also displays network egress requests in a side panel — any attempt to reach an unlisted host will appear here for your approval.
Step 3: Send a test message using the OpenClaw CLI

Use the OpenClaw CLI to send a single message and print the agent’s response without entering the TUI:
openclaw agent --agent main --local -m "hello" --session-id test
| Flag | Description | Default |
|---|---|---|
| `--agent` | The agent configuration to use. | |
| `--local` | Run against the local sandbox rather than a remote host. | `false` |
| `-m` | The message to send to the agent. | |
| `--session-id` | Session identifier for conversation continuity. | |
If the agent replies, your sandbox is working correctly. Type exit to return to your host shell.
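For scripted smoke tests, the same one-shot form can be captured in a variable. This is a sketch that assumes the flags behave as documented above; an empty result means the agent did not respond.

```shell
# Capture a single reply from the agent; suppress stderr noise.
reply="$(openclaw agent --agent main --local -m "hello" --session-id smoke-test 2>/dev/null)"
if [ -n "$reply" ]; then
  echo "sandbox OK: $reply"
else
  echo "no reply from agent"
fi
```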

Non-interactive onboarding

You can skip the interactive prompts by passing all required flags to nemoclaw onboard directly. This is useful for automated provisioning or CI environments.
nemoclaw onboard \
  --endpoint build \
  --api-key "$NVIDIA_API_KEY" \
  --model nvidia/nemotron-3-super-120b-a12b
| Flag | Description |
|---|---|
| `--api-key` | API key for endpoints that require one. Skips the interactive key prompt. |
| `--endpoint` | Endpoint type: `build`, `ncp`, `nim-local`, `vllm`, `ollama`, `custom`. Default: interactive prompt. |
| `--ncp-partner` | NCP partner name. Required when `--endpoint ncp`. |
| `--endpoint-url` | Endpoint URL. Required for `ncp`, `nim-local`, `ollama`, and `custom`. |
| `--model` | Model ID to use. Skips the interactive model selection prompt. |
The nim-local, vllm, ollama, and custom endpoint types are experimental. Use --endpoint build or --endpoint ncp for production setups.
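As an illustration of the NCP-specific flags, a non-interactive NCP onboard might look like the following. The partner name and endpoint URL are placeholders, and the guard around the API key is just defensive scripting.

```shell
# Hypothetical NCP example; --ncp-partner and --endpoint-url values are placeholders.
if [ -z "${NVIDIA_API_KEY:-}" ]; then
  echo "NVIDIA_API_KEY is not set; aborting onboard" >&2
else
  nemoclaw onboard \
    --endpoint ncp \
    --ncp-partner example-partner \
    --endpoint-url "https://nim.example-partner.com/v1" \
    --api-key "$NVIDIA_API_KEY" \
    --model nvidia/nemotron-3-super-120b-a12b
fi
```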

Next steps

How It Works

Understand the plugin, blueprint, and sandbox lifecycle before customizing your setup.

Switch inference providers

Switch to a different Nemotron model or configure an NCP endpoint.

Approve network requests

Review and approve agent egress requests surfaced in the OpenShell TUI.

Customize network policy

Pre-approve trusted domains to avoid manual approval at runtime.

Deploy to a remote GPU

Deploy your sandbox to a remote GPU instance for always-on operation.

Monitor sandbox activity

Track agent behavior, network egress, and inference calls through the OpenShell TUI.
