A panel of AI industry leaders at GTC 2026 in San Jose laid out the specific security and governance barriers preventing enterprises from deploying OpenClaw agents at scale. The discussion, reported in detail by Computer Weekly, featured Nvidia CEO Jensen Huang alongside the CEOs of Mistral AI, LangChain, Perplexity, and OpenEvidence, as well as a senior director from the Allen Institute for AI.
The core takeaway: setting up OpenClaw on a single machine is trivial. Making it enterprise-safe requires solving problems that don’t have off-the-shelf solutions yet.
Huang’s Two-of-Three Rule
Huang distilled the enterprise agent security problem into three capabilities: accessing sensitive information, executing code, and communicating with the outside world.
“If we want to be secure as an enterprise, you should allow someone, including an AI, any two of those three things at one time, but not all at one time, unless it’s the CEO,” Huang said during the panel.
The framework is simple enough to implement as a policy layer. An agent processing internal financial data and writing code can do so safely as long as it has no outbound communication channel. An agent sending emails based on user instructions can do so as long as it cannot access confidential databases. The risk compounds only when all three capabilities are active simultaneously, because a compromised or manipulated agent could exfiltrate data, write malicious code, and transmit it externally in a single chain.
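As a policy layer, the rule reduces to a single check: deny any grant that would leave an agent holding all three capabilities at once. A minimal sketch, with the `Capability` flags and `grant_allowed` function as illustrative names rather than any real OpenClaw API:

```python
from enum import Flag, auto

class Capability(Flag):
    """The three capabilities Huang identified."""
    SENSITIVE_DATA = auto()   # access to confidential information
    CODE_EXECUTION = auto()   # ability to run or write code
    EXTERNAL_COMMS = auto()   # any outbound channel (email, HTTP, etc.)

ALL_THREE = (Capability.SENSITIVE_DATA
             | Capability.CODE_EXECUTION
             | Capability.EXTERNAL_COMMS)

def grant_allowed(held: Capability, requested: Capability) -> bool:
    """Allow the grant only if the agent would still hold at most two
    of the three capabilities afterward."""
    return (held | requested) != ALL_THREE

# An agent processing financial data and writing code is fine...
held = Capability.SENSITIVE_DATA | Capability.CODE_EXECUTION
assert grant_allowed(held, Capability.CODE_EXECUTION)

# ...but granting it an outbound channel would complete the exfiltration chain.
assert not grant_allowed(held, Capability.EXTERNAL_COMMS)
```

The check is stateless and cheap, which is what makes the framework attractive: it can sit in front of any tool-dispatch layer without knowing anything about the agent's task.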
Huang separately described OpenClaw as needing “enterprise-grade layers, including privacy, governance, security, and optimized runtimes,” according to NVIDIA’s own GTC 2026 live coverage.
Mistral’s Governance Warning
Arthur Mensch, co-founder and CEO of Mistral AI, was more direct about the gap between OpenClaw’s current state and what enterprises need.
“You need primitives to have the right governance and scalability and to host everything in the same control plane,” Mensch told the panel. “That’s actually harder to do than just buying a computer and setting up OpenClaw.”
The word “primitives” is the key. Mensch is pointing to the absence of foundational building blocks: identity management for agents, audit trails for agent actions, role-based access controls that work at the agent level, and centralized control planes that IT teams can actually monitor. OpenClaw’s architecture gives agents access to file systems, APIs, and external tools, but the governance layer — who approved this action, what data was touched, can we roll it back — is left to the deploying organization to build.
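What one of those missing primitives might look like: an append-only audit trail that records, for every agent action, who approved it, what data was touched, and how to reverse it. The sketch below is hypothetical; all class and field names are invented for illustration, not drawn from OpenClaw or any vendor product.

```python
import datetime
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentAction:
    """One auditable agent action: approver, data touched, rollback context."""
    agent_id: str
    action: str
    data_touched: list
    approved_by: str
    rollback_hint: str
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.datetime.now(
        datetime.timezone.utc).isoformat())

class AuditLog:
    """Append-only log an IT team could query from a central control plane."""
    def __init__(self):
        self._entries: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self._entries.append(action)

    def for_agent(self, agent_id: str) -> list[AgentAction]:
        return [a for a in self._entries if a.agent_id == agent_id]

log = AuditLog()
log.record(AgentAction(
    agent_id="invoice-bot-7",
    action="update_record",
    data_touched=["invoices/2026-03"],
    approved_by="finance-ops",
    rollback_hint="restore invoices/2026-03 from pre-action snapshot",
))
```

The point of the sketch is the schema, not the storage: the questions Mensch raises ("who approved this, what was touched, can we roll it back") only have answers if every action is forced through a record like this before it executes.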
Harness Engineering as the Practical Fix
Harrison Chase, co-founder and CEO of LangChain, proposed a specific engineering approach: harness engineering, which he defines as building guardrails, tools, and structured execution flows around the core model powering an agent.
Rather than relying on the model itself to behave correctly, harness engineering constrains what the model can do. As Chase explained in a VentureBeat interview, the approach involves designing systems that “mold the inherently spiky intelligence of a model for tasks we care about” through controlled tool access, structured prompts, and execution flow constraints.
LangChain’s own implementation, Deep Agents, demonstrated the concept’s viability: by changing only the harness around a fixed model (GPT-5.2-codex), the team improved its Terminal Bench 2.0 coding benchmark score from 52.8 to 66.5, according to the LangChain engineering blog. The same principle applies to enterprise security: rather than trusting an agent to stay within bounds, the harness enforces those bounds architecturally.
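The architectural enforcement Chase describes can be sketched as a tool whitelist with per-tool argument validation: the model proposes an action, but the harness decides whether it runs. This is an illustrative sketch, not LangChain's implementation; the `Harness` class and its methods are invented names.

```python
from typing import Callable, Dict

class Harness:
    """Constrains a model to a whitelist of tools with validated arguments."""
    def __init__(self):
        self._tools: Dict[str, Callable] = {}
        self._validators: Dict[str, Callable[[dict], bool]] = {}

    def register(self, name: str, fn: Callable,
                 validator: Callable[[dict], bool]) -> None:
        self._tools[name] = fn
        self._validators[name] = validator

    def execute(self, name: str, args: dict):
        # Unregistered tools are rejected outright.
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not in the harness")
        # Arguments are validated before the tool ever runs.
        if not self._validators[name](args):
            raise ValueError(f"arguments rejected for {name!r}: {args}")
        return self._tools[name](**args)

harness = Harness()
harness.register(
    "read_file",
    lambda path: f"<contents of {path}>",
    validator=lambda a: a.get("path", "").startswith("/workspace/"),
)

# Inside the sandbox: allowed.
harness.execute("read_file", {"path": "/workspace/report.txt"})
# A path outside /workspace/, or an unregistered tool, raises before execution.
```

The model never holds the capability directly; it can only emit requests that the harness may refuse, which is what "enforcing bounds architecturally" means in practice.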
The Open vs. Closed Model Debate
The panel also split on whether enterprises should deploy open-source or proprietary models inside their agent stacks.
Hanna Hajishirzi, senior director at the Allen Institute for AI, made a privacy argument: “I feel more comfortable letting an open model access my private data,” she told the panel. With an open model, the weights run locally, and sensitive data never leaves the organization’s infrastructure.
Daniel Nadler, CEO of healthcare AI firm OpenEvidence, made the specialization argument. General-purpose models are too broad for domains like medical claims processing, where agents need training specifically on insurance procedures and clinical documentation. “You can’t start with these 800-year-old models that are set to think about pattern recognition in a certain way. You actually want to train in the tails,” Nadler said, referring to the long tail of specialized use cases where fine-tuned open models outperform generalists.
Huang pushed back on the binary framing entirely: “Even for a closed-model company, I believe open models will be used as part of the agentic system where the closed model is your crown jewel.” Perplexity CEO Aravind Srinivas agreed, predicting enterprises will use proprietary models as reasoning engines and open models for formatting, routing, and tool-use tasks. “Models are essentially becoming just tools, like file systems and connectors,” Srinivas said.
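Srinivas's prediction amounts to a routing table keyed by task type rather than a single-model architecture. A minimal sketch, with all model names hypothetical:

```python
# Route each task type to a model by role: a proprietary model for the hard
# reasoning steps (the "crown jewel"), open-weight models for the rest.
TASK_ROUTES = {
    "reasoning":  "proprietary-reasoner",
    "formatting": "open-weights-small",
    "routing":    "open-weights-small",
    "tool_use":   "open-weights-small",
}

def pick_model(task_type: str) -> str:
    # Unknown task types fall back to the closed model as the safe default.
    return TASK_ROUTES.get(task_type, "proprietary-reasoner")

assert pick_model("formatting") == "open-weights-small"
assert pick_model("reasoning") == "proprietary-reasoner"
```

Treating models as interchangeable tools behind a router, rather than as the system itself, is the concrete form of Srinivas's "models are essentially becoming just tools" claim.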
What This Means for Builders
The panel consensus is clear: OpenClaw’s consumer-grade viral adoption has outrun enterprise infrastructure. The agents work. The governance, security, and specialization layers do not exist at the level enterprises require.
Nvidia is addressing part of this with NemoClaw, its enterprise-grade agent platform that combines OpenClaw’s agent-building framework with Nvidia’s security and privacy stack. LangChain is tackling it through harness engineering. Mistral is flagging the primitives gap. But no single vendor has a complete solution yet, and organizations deploying OpenClaw agents into production environments today are building those governance layers themselves.