The Agentic Shift: Inside the Tornado - Agent Time vs Enterprise Time


Sunday 22nd March 2026

OpenClaw is doing for agents what ChatGPT did for chat, and it’s widening the gap between “what’s possible” and what most enterprises can safely deploy.

In 20 seconds

This week’s shift: agent capability is accelerating at “consumer speed”, while most enterprises are still working out what “safe enough” actually means in production. OpenClaw is part of that acceleration, but the bigger constraint is now governance: what an agent can access, what it can do, and what evidence exists when something goes wrong.

What happened: At NVIDIA’s conference, Jensen Huang framed the next wave as agents that should be governed and secured by design, with policies that limit what they can do at any one time.

Why it matters: As agents become more capable and more widely deployed, governance becomes the limiting factor for enterprise adoption, not model performance.

The decision it forces: If customers and employees start using agents to discover, compare, and transact, you need a posture: block, offer controlled endpoints, build your own agent experience, or prepare for agent-to-agent (A2A) workflows where identity, delegation, and audit trails are explicit.

What we’re tracking this week

Forecasts are now big enough to change roadmaps

  • The “action layer” is going mainstream, faster than most risk controls can keep up
  • Public distribution channels are already pushing back when agents feel uncontrolled
  • Early adopters are learning, in public, how quickly agents get probed, hijacked, or socially engineered

OpenClaw-style agents are going public, and platforms are reacting

As OpenClaw-style agents go mainstream, platforms like Telegram are already enforcing limits, which is your early warning that “agent distribution” will come with rules, not just reach.

Governance failures show up immediately once an agent is “live”

The moment an agent is live, it gets tested, socially engineered, and sometimes hijacked, so governance and evidence are not compliance extras; they are the product.

Anthropic is productising the always-on agent pattern OpenClaw proved: "message an agent anywhere, anytime."

Claude Code Channels removes the cost and friction that stopped people sticking with it, which means this pattern will spread fast.

Inside the tornado: OpenClaw pulls agents into the mainstream, and governance becomes the bottleneck

Jensen Huang described OpenClaw as the moment “agentic” capability moved from an enterprise topic into popular consciousness, in the same way ChatGPT pulled generative AI into the mainstream.

He framed OpenClaw as more than a chatbot, describing it as a programmable system with memory, skills, scheduling, and the ability to spawn agents and connect to external channels like WhatsApp.

He also made a clear governance point: agentic software tends to need three capabilities (access to sensitive data, the ability to execute code, and the ability to communicate externally), and the safe pattern is to avoid granting all three at the same time.

Taken together, the message is that capability is accelerating quickly, but safe deployment requires deliberate constraints and policy.
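The "never all three at once" rule can be expressed as a simple runtime policy check. The sketch below is illustrative only, assuming a hypothetical capability-granting layer; the names (`Capability`, `grant`) are ours, not from any vendor's API.

```python
# Hypothetical sketch of the "three capabilities" rule: an agent may hold at
# most two of {sensitive data access, code execution, external comms} at once.
from enum import Flag, auto

class Capability(Flag):
    NONE = 0
    SENSITIVE_DATA = auto()   # read access to sensitive data
    CODE_EXECUTION = auto()   # ability to execute code / call tools
    EXTERNAL_COMMS = auto()   # ability to message outside the trust boundary

ALL_THREE = (Capability.SENSITIVE_DATA
             | Capability.CODE_EXECUTION
             | Capability.EXTERNAL_COMMS)

def grant(requested: Capability) -> Capability:
    """Grant the requested capabilities unless all three are requested together."""
    if requested & ALL_THREE == ALL_THREE:
        raise PermissionError(
            "policy: an agent may not hold all three capabilities at once"
        )
    return requested

# Any two at a time is allowed...
grant(Capability.SENSITIVE_DATA | Capability.CODE_EXECUTION)
# ...but requesting all three raises PermissionError.
```

In practice the same check would sit inside an authorisation service, with step-up approval as the escape hatch rather than a hard refusal.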

You can watch the full interview: Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis (~60 mins)

What it signals in the agentic stack

OpenClaw matters because it makes the "action layer" legible. Chat systems changed how users ask. Agent systems change what software can do. OpenClaw's design pattern of memory, tools ("skills"), scheduling, and multi-agent workflows is effectively a reference model for how consumer-grade agents will be assembled and distributed.

For enterprises, that shifts the stack conversation away from model benchmarks and toward control surfaces: identity, permissions, audit trails, and safe tool access.

What changes for enterprises

  1. The first change is that “agent use” will appear at your edges whether you planned it or not, because customers and employees will bring agents into workflows that look like search, service, and commerce.
  2. The second change is that safety is no longer a policy document. It is a runtime control plane: what the agent can see, what it can do, what it can send, and what evidence you can produce after the fact.
  3. The third change is organisational: enterprises are still absorbing the concept, while the toolchain is evolving weekly. That gap creates brittle pilots, shadow adoption, and late-stage governance surprises.

Where it can go wrong

Agents amplify speed and scale for both legitimate work and abuse. When an agent can browse, call tools, execute code, and message externally, it becomes an attractive target for hijacking and social engineering.

The failure mode is not always catastrophic. It is often small policy violations that accumulate until a regulator, customer, or board asks for the evidence chain and you do not have it.

Practical next steps

Pick one high-value journey and define the “three capabilities” boundary: what data access is allowed, what tool execution is allowed, what outbound communication is allowed, and where you require step-up approval. Then ensure you can log and replay the evidence for any agent action that matters.
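The "log and replay the evidence" requirement can be prototyped with an append-only, hash-chained audit log, so that any tampering with the record is detectable on replay. This is a minimal sketch under our own assumptions (class and field names are hypothetical), not a reference implementation:

```python
# Illustrative only: a minimal append-only audit log for agent actions,
# hash-chained so altering or dropping any entry is detectable on replay.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, agent_id: str, action: str, detail: dict) -> dict:
        """Append one agent action, linking it to the previous entry's hash."""
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
            "prev": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Replay the chain and confirm no entry was altered or dropped."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("refund-agent", "tool_call", {"tool": "issue_refund", "amount": 42})
assert log.verify()
```

A production version would write to durable, access-controlled storage and capture inputs and outputs, but the core idea is the same: evidence you can replay, not just logs you hope are complete.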

What's underneath this week's headlines?

Manus hit Telegram’s limits, which is what “agent distribution” will look like

Manus is a good example of the new pattern: agents want always-on access to a user’s channels, and platforms will respond by enforcing rules when behaviour looks automated at scale. When Telegram suspended Manus’ ability to message, it was not a philosophical stance on agents. It was a preview of the control layer that will sit between agents and users: rate limits, permissioning, verification, and policy enforcement.

For enterprise leaders, the takeaway is straightforward. If your customer or employee experience relies on messaging channels, you need to assume that agent traffic will be treated differently to human traffic, and you will have to prove legitimacy and intent. “Distribution” in an agent world is conditional.

Aira’s “hijack” story shows how quickly agents get probed in public

Aira’s story is a neat illustration of how quickly an agent becomes a target once it is visible. As soon as an agent can take actions, people test its boundaries, try to socially engineer it, and look for ways to redirect it. This is not a niche security scenario. It is the default condition of public-facing autonomy.

The practical implication is that governance is not something you add after you ship. It is something you design for at the start: limits, step-up approvals, safe fallbacks, and evidence-grade logs. If you cannot explain what the agent is allowed to do, and prove what it actually did, you are not ready to put it in front of customers.

Anthropic just removed OpenClaw’s biggest brake

OpenClaw went viral because it made “always-on agents” feel normal: message an agent in Telegram or Discord and have it work in the background. But adoption hit an economic wall. Frontier models meant API billing, costs climbed, and many people couldn’t simply use the Claude subscription they were already paying for, so they tried it and bounced.

Anthropic’s Claude Code Channels closes that gap by bringing the same pattern into Claude Code on existing subscriptions. That is not just a feature update. It is a sign that the interaction model open source proved is now being productised by tier-one vendors, built on MCP, and likely to spread quickly. For enterprises, the takeaway is the same: capability is arriving faster than governance and control planes, and that gap is where the risk and the advantage sit.

Trusted Agents

If you’re inside the tornado right now, the hardest part is not spotting the opportunity. It’s choosing a posture that your organisation can execute safely while the technology keeps moving.

We help leaders turn this shift into a business case, fast. We map what changes first over the next 12 to 18 months. Where revenue and distribution get reshaped. Where fraud and servicing pressure show up. Where your data and controls become the bottleneck. Then we turn that into clear bets and a sequence of pilots, so you stop reacting to demos and start steering.

And we don’t leave you with a deck. We help you ship. We bring world-class delivery partners to build a working prototype that survives the real world. Payments. Refunds. Legacy integration. Policy. Identity, controls, and data quality designed-in from day one.

Come and see us at Trusted Agents

3 organisations making this real

  • Bast AI: Building robust and responsible enterprise AI platforms, which matters when "agentic" stops being an experiment and becomes a governed production capability.
  • Vendeex: Focused on creating agent-ready commerce flows, a useful lens on what it takes to move from conversational discovery to transaction and fulfilment.
  • Singulr: Working on the trust layer for autonomous systems, which becomes non-negotiable once agents have tools, permissions, and the ability to act.

Disclaimer: we have no commercial interests in any of these organisations. We are tracking them because they are building parts of the infrastructure layer that will unlock agentic commerce.

Where to focus now

Two leadership questions to brief your team with:

  • Where do we allow agents today, and what would we do if that became a real channel overnight?
  • If an agent made a bad decision on behalf of a customer, could we prove what happened and defend the outcome?

I’m running a short LinkedIn poll on the four postures I’m seeing most often (block, rules-based access, build your own, prepare for A2A). Vote here and add “we need more time” in the comments if that’s the honest answer.

Trusted Agents

An advisory firm specialising in Agentic Commerce, Digital Trust and Customer Empowerment.
