# Deploy to a remote GPU

Run NemoClaw on a remote GPU instance through Brev. The deploy command provisions the VM, installs all prerequisites, and connects you to a running sandbox automatically.

## Prerequisites
- The Brev CLI installed and authenticated on your local machine.
- An NVIDIA API key from build.nvidia.com.
- NemoClaw installed locally. Follow the Quickstart install steps.
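A quick local preflight can catch a missing prerequisite before you deploy. This is a sketch, not part of NemoClaw itself, and the `openclaw` binary name is an assumption based on the commands shown later on this page:

```shell
# Preflight sketch: print a warning for each missing prerequisite.
need() { command -v "$1" >/dev/null 2>&1 || echo "missing: $1"; }

need brev       # Brev CLI
need openclaw   # NemoClaw CLI (binary name is an assumption)
[ -n "${NVIDIA_API_KEY:-}" ] || echo "missing: NVIDIA_API_KEY in environment"
```

Silence means every check passed.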
## Deploy the instance

### Export your API key
Set your NVIDIA API key in the environment. The deploy script forwards this to the remote VM:
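For example, in the shell you will deploy from (`NVIDIA_API_KEY` is the conventional name for build.nvidia.com keys, but the exact variable your deploy script reads is an assumption; substitute your real key for the placeholder):

```shell
# Export the key so the deploy script can forward it to the remote VM.
# The variable name is an assumption; the "nvapi-" prefix matches keys
# issued by build.nvidia.com.
export NVIDIA_API_KEY="nvapi-your-key-here"
```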
### Run the deploy command
Create a Brev instance and run the full NemoClaw setup, replacing `my-gpu-box` with a name for your remote instance.

The deploy script (`scripts/brev-setup.sh`) performs these steps on the VM:

- Installs Node.js if not present.
- Installs Docker if not present and adds the current user to the `docker` group.
- Installs the NVIDIA Container Toolkit if a GPU is detected.
- Downloads and installs the `openshell` CLI binary from the GitHub release.
- Installs `cloudflared` for the public tunnel.
- Installs vLLM if a GPU is present and starts the model server.
- Runs `setup.sh` to create the gateway, register inference providers, and launch the sandbox.
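The deploy invocation itself is not shown on this page. As a sketch only, assuming a hypothetical `deploy` subcommand that takes the instance name (check the CLI help for the real verb):

```shell
# Hypothetical invocation: the subcommand name is an assumption, not taken
# from this page. Replace my-gpu-box with a name for your instance.
openclaw nemoclaw deploy my-gpu-box
```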
## Select a GPU type

The deploy script reads the `NEMOCLAW_GPU` environment variable to select the GPU configuration. The default is `a2-highgpu-1g:nvidia-tesla-a100:1`.
Set this variable before deploying to use a different GPU type or count:
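For example (the value shown is the documented default; the machine-type:GPU-type:count layout is inferred from that default, not stated explicitly on this page):

```shell
# Select the GPU configuration before deploying. The format appears to be
# machine-type:gpu-type:count, inferred from the documented default.
export NEMOCLAW_GPU="a2-highgpu-1g:nvidia-tesla-a100:1"

# Split the value to inspect its three fields.
machine=${NEMOCLAW_GPU%%:*}
rest=${NEMOCLAW_GPU#*:}
gpu=${rest%%:*}
count=${rest#*:}
echo "$machine / $gpu / x$count"
```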
## Monitor the remote sandbox

To monitor activity and approve network requests, SSH to the instance and open the OpenShell TUI.

## Verify inference on the remote sandbox
Run a test agent prompt inside the remote sandbox to confirm inference is working:

- Run `openclaw nemoclaw status` to confirm the active provider and endpoint.
- Run `openclaw nemoclaw logs -f` to view error output from the blueprint runner.
- Verify the inference endpoint is reachable from the remote host.
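The reachability check in the last bullet can be done with curl against the vLLM server listed below, which serves on port 8000; the `/v1/models` path assumes vLLM's usual OpenAI-compatible API:

```shell
# Run on the remote host: confirm the vLLM server answers before debugging
# the rest of the agent stack. /v1/models lists the models vLLM is serving.
if curl -sf http://localhost:8000/v1/models >/dev/null; then
  echo "inference endpoint reachable"
else
  echo "inference endpoint NOT reachable; check the vLLM server"
fi
```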
## What the bootstrap installs
| Component | Notes |
|---|---|
| Docker | Installed via apt if not present. User is added to the docker group. |
| NVIDIA Container Toolkit | Installed only if nvidia-smi is available on the VM. |
| openshell CLI | Downloaded as a pre-built binary from the NVIDIA/OpenShell GitHub release. Supports x86_64 and aarch64. |
| cloudflared | Installed for external tunnel access to the sandbox. |
| vLLM | Installed via pip if a GPU is present. Starts nvidia/nemotron-3-nano-30b-a3b on port 8000. |
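The "only if a GPU is present" rows follow a common detection pattern. This is a sketch of that pattern, not the verbatim logic in `scripts/brev-setup.sh`:

```shell
# Gate GPU-only components on nvidia-smi being present, as the table describes.
has_gpu() { command -v nvidia-smi >/dev/null 2>&1; }

if has_gpu; then
  echo "GPU detected: install NVIDIA Container Toolkit and vLLM"
else
  echo "no GPU: skip GPU-only components"
fi
```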
On fresh Brev VMs, Docker's group membership does not take effect in the current shell session. The bootstrap script uses `sg docker` to work around this without requiring a re-login.

## Related topics
- **Set up the Telegram bridge**: Interact with the remote agent through a Telegram bot.
- **Monitor sandbox activity**: Use status, logs, and the TUI to inspect the remote sandbox.
- **Approve network requests**: Handle egress approval prompts from the remote sandbox TUI.