OpenClaw is the open-source AI assistant that lets users run LLM-powered agents for writing, coding, and file operations entirely on-device. NemoClaw is NVIDIA's plugin on top of it: it installs the NVIDIA OpenShell runtime (part of the Agent Toolkit), creates a sandboxed environment, and routes every inference call through a declarative policy layer. The result is a secure, controllable way to run OpenClaw with enterprise-grade guardrails.
NemoClaw uses a versioned blueprint to orchestrate sandbox creation, network and filesystem policy, and inference setup. The nemoclaw CLI handles the full stack — OpenShell gateway, sandbox, inference provider, and network policy — so you can deploy with one command and chat with your agent via TUI or CLI.
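To make the idea concrete, here is a minimal sketch of how a versioned blueprint's declarative policy could gate each outbound inference call. Everything here is an assumption for illustration: the blueprint field names, the `Decision` type, and the `check_inference_call` helper are hypothetical and do not reflect the actual NemoClaw blueprint schema.

```python
from dataclasses import dataclass

# Toy "blueprint": a version pin plus network and filesystem policy.
# Field names are illustrative, not the real NemoClaw schema.
BLUEPRINT = {
    "version": "1.0",
    "network": {"allow_hosts": ["integrate.api.nvidia.com"]},
    "filesystem": {"allow_paths": ["/workspace"]},
}

@dataclass
class Decision:
    allowed: bool
    reason: str

def check_inference_call(host: str, blueprint: dict) -> Decision:
    """Allow an outbound inference call only to hosts the policy lists."""
    allowed_hosts = blueprint["network"]["allow_hosts"]
    if host in allowed_hosts:
        return Decision(True, f"{host} is in the network allowlist")
    return Decision(False, f"{host} is not in the network allowlist")

print(check_inference_call("integrate.api.nvidia.com", BLUEPRINT).allowed)  # True
print(check_inference_call("example.com", BLUEPRINT).allowed)               # False
```

The key property this sketch captures is that policy is data, not code: changing what the agent may reach means editing the blueprint, not the runtime.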
The platform integrates with the NVIDIA NeMo framework, Nemotron models, and NIM (NVIDIA Inference Microservices). Inference today is routed through NVIDIA cloud (e.g. nvidia/nemotron-3-super-120b-a12b); you get an API key from build.nvidia.com and supply it during nemoclaw onboard. The ecosystem supports dedicated hardware — from GeForce RTX PCs to DGX Station and DGX Spark — and continues to expand.
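As a rough sketch of the cloud inference path, the snippet below assembles a chat-completion request for the NVIDIA cloud endpoint behind build.nvidia.com. It assumes an OpenAI-compatible API at `integrate.api.nvidia.com` and a `build_chat_request` helper of my own invention; verify the endpoint and payload shape against the model card on build.nvidia.com before relying on it. No network call is made, only request construction.

```python
def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Return the URL, headers, and JSON body for one inference call.

    Assumes an OpenAI-compatible chat-completions endpoint; the URL
    below is an assumption, not confirmed NemoClaw behavior.
    """
    return {
        "url": "https://integrate.api.nvidia.com/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The key itself comes from build.nvidia.com and is supplied during onboarding.
req = build_chat_request("nvapi-example", "nvidia/nemotron-3-super-120b-a12b", "Hello")
print(req["json"]["model"])
```

In the deployed stack, the OpenShell gateway would sit between the agent and this endpoint, applying the blueprint's policy before any request leaves the sandbox.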