Jensen Huang introduced OpenClaw at GTC, baked into Nvidia’s platform. It breaks problems into tasks, spawns sub-agents, connects them to tools, to file systems, to models. The infrastructure for agents to actually coordinate work, not just answer questions in isolation.
But he was just as clear about the other half of the equation.
Agents are probabilistic by nature. They drift. They hallucinate. The only thing that anchors them is a structured, deterministic layer underneath — the ground truth that tells the agent not just what something is, but how it relates to everything else.
That layer is where you start defining institutional knowledge: your policies, your workflows, the expertise that currently lives in people’s heads, in email threads, in the memory of whoever has been on the project longest.
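What such a deterministic layer might look like, in miniature. This is a hypothetical sketch, not any product’s API: the entity names, policies, and relations below are all made up. The point is that the agent queries typed relations and gets the same answer every time, instead of guessing.

```python
# A minimal sketch of a deterministic knowledge layer: entities and typed
# relations an agent can look up instead of hallucinating. All names here
# (projects, policies, teams) are invented for illustration.

from collections import defaultdict

class KnowledgeLayer:
    def __init__(self):
        # (subject, relation) -> set of objects
        self._facts = defaultdict(set)

    def add(self, subject, relation, obj):
        self._facts[(subject, relation)].add(obj)

    def query(self, subject, relation):
        # Deterministic lookup: same question, same answer, every run.
        return sorted(self._facts[(subject, relation)])

kl = KnowledgeLayer()
kl.add("bridge-retrofit", "governed_by", "seismic-policy-2021")
kl.add("bridge-retrofit", "reviewed_by", "structures-team")
kl.add("seismic-policy-2021", "supersedes", "seismic-policy-2014")

# An agent grounds its answer in these relations — not just what a thing
# is, but how it relates to everything else.
print(kl.query("bridge-retrofit", "governed_by"))  # ['seismic-policy-2021']
```

Real implementations reach for a graph database or an RDF store, but the shape is the same: structured facts underneath, probabilistic reasoning on top.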
That’s the gap our industry needs to work on.
The building blocks for coordinating agents across that data are falling into place. The tools to embed what your firm knows into agents’ behaviour are already here.
Now comes the hard work of capturing the relationships, structuring the knowledge, and building the layer that makes decades of engineering expertise readable to the agents waiting to use it.