The Agentic Shift: Enterprise time vs agent time


In 20 seconds

This week’s Agentic Shift: three versions of the same story. Agentic products are moving at startup speed, enterprise platforms are tightening control, and security teams are discovering that “agents everywhere” creates a new class of credential and fraud risk.
Why it matters: If your customers start arriving as agents, your biggest risk is not capability, it is cycle-time mismatch: you will be safe, correct, and late.
The decision it forces: Do you block agents, entertain them via a controlled endpoint, build your own agent, or prepare for agent-to-agent workflows with identity, permissions, and evidence?

What we’re tracking this week

Cycle-time mismatch is now a competitive threat (the disruptor is already live, and learning loops are weeks, not quarters)

Agentic infrastructure is arriving before it is safe (OpenClaw and Moltbook show what happens when autonomy meets weak security hygiene)

Sunday 22nd Feb 2026

Enterprise time vs agent time

The next disruptor is already shipping at consumer speed, while most incumbents are still deciding where “safe enough” begins.

Enterprise speed vs agent speed is the new strategic gap

The simplest way to describe what is changing is this: enterprises still behave like builders of infrastructure, while the agentic market is behaving like a meme. That is not a value judgement. It is a cycle-time reality.

In travel and financial services, the enterprise instincts are usually right. You have liability. You have fraud. You have regulators. You have customers who do not forgive outages and who do not care that a workflow was “experimental”. So you do architecture reviews, gated releases, and controlled partner access. That is what enterprise-class means in high-stakes systems.

The problem is that agentic adoption is being pulled forward by consumers and developers, not by enterprise planning. When people get used to expressing intent in chat, and when open-source agent frameworks let anyone wire tools together in a weekend, the market learns at a tempo that quarterly roadmaps struggle to match. That is how the next competitive interface gets normalised.

Travel is showing both sides of this. On one side, Sabre is publishing a trust-first framework for agentic AI, explicitly treating governance, identity, and observability as prerequisites for autonomy, not a clean-up exercise.

On the other, Amadeus is shutting down its self-service APIs portal, a move that illustrates the defensive posture many incumbents will take when they feel uncontrolled innovation at their edges.

Martijn van der Voort adds the sharper strategic point: “Agents do not route based on legacy gravity. They route where execution is permitted…” and “Conditional access is not a structural moat.” In other words, gating access can reduce risk, but it does not stop re-routing to wherever execution is allowed and commercially sensible.

This is the moment when the tuk-tuk finds itself on the raceway. Your organisation can be exemplary at enterprise delivery and still lose distribution, because the conversation moves somewhere else, and the new intermediaries learn faster than you do.

What does the agentic stack signal here? It is splitting into layers:

  • Cognition (LLMs and reasoning)
  • Delegation (agents with tools, memory, and goals)
  • Trust rails (identity, permissions, terms, payments, audit, and dispute-grade evidence)

Most enterprises are investing in the first two because they are visible. The winners will invest in the third because that is what makes autonomy governable.

Where it can go wrong is predictable: when you connect “reasoning” directly to “execution”, you amplify both speed and blast radius. Fraud loops close faster. Refund abuse scales. Credential theft becomes more valuable because a stolen key is no longer just data access, it is delegated action. And when something goes wrong, your customers and regulators will ask for receipts: what did the system see, what authority did it have, what rules were enforced, and who stopped it.

Practical next step: build a two-speed operating model. Run pilots at agent speed in a sandbox, but only allow production autonomy through deterministic controls: least privilege, step-up approvals for high-risk actions, and logs you could actually use in a dispute.
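To make the "deterministic controls" idea concrete, here is a minimal sketch of what such a gate could look like. All names here (AgentRequest, ALLOWED_ACTIONS, the refund threshold) are illustrative assumptions of ours, not any vendor's actual API; the point is that the decision logic is deterministic and every decision leaves a dispute-grade log entry.

```python
# Illustrative sketch only: names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

# Least privilege: the actions an agent token may take at all,
# and the subset that requires step-up approval above a limit.
ALLOWED_ACTIONS = {"quote", "hold", "book"}
STEP_UP_ACTIONS = {"refund", "cancel"}   # high-risk actions
REFUND_LIMIT = 100.00                    # auto-approve only small amounts

@dataclass
class AgentRequest:
    agent_id: str
    on_behalf_of: str   # the human principal the agent claims to represent
    action: str
    amount: float = 0.0

audit_log: list[dict] = []

def authorize(req: AgentRequest) -> str:
    """Deterministic gate: least privilege + step-up + an audit trail."""
    if req.action in STEP_UP_ACTIONS and req.amount > REFUND_LIMIT:
        decision = "escalate_to_human"
    elif req.action in ALLOWED_ACTIONS or req.action in STEP_UP_ACTIONS:
        decision = "allow"
    else:
        decision = "deny"                # not in the permission set at all
    audit_log.append({                   # the "receipts" for a dispute
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": req.agent_id,
        "principal": req.on_behalf_of,
        "action": req.action,
        "amount": req.amount,
        "decision": decision,
    })
    return decision
```

The design choice that matters is that nothing here asks a model for permission: the rules are boring, testable, and produce the same answer every time, which is exactly what a regulator or dispute process will want to see.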


Two stories that sharpen the lesson

Banking has the same cycle-time problem, and MrBeast just made it obvious

Banking leaders often assume disruption arrives through fintechs that look like banks, talk like banks, and raise money like banks. The MrBeast move is a reminder that the next competitor may not start with a banking product at all. It may start with distribution.

MrBeast’s company has acquired Step, a youth-focused fintech app. In the announcement coverage, Jimmy Donaldson framed it as closing the financial literacy gap, saying: “Nobody taught me about investing, building credit or managing money when I was growing up… I want to give millions of young people the financial foundation I never had.” MrBeast acquires youth-focused fintech Step Feb 11 2026

You can read this as celebrity endorsement. Or you can read it as a distribution strategy landing in a regulated category. Step reportedly has millions of users, and MrBeast has an audience that overlaps perfectly with the “future primary bank” demographic. MrBeast just bought a banking app Feb 9 2026

Now connect that to agentic finance. In the MyCFO framing, the personal agent does not “shop for products”. It orchestrates tasks across providers: credit becomes a momentary lending action, deposits become liquidity routing, and investing becomes continuous optimisation against constraints. When that happens, banks compete more like callable utilities, and the party that owns intent and context owns the relationship. My CFO and the Rise of Agentic Finance May 2025

This is where cycle time becomes strategic. If your customer’s expectations shift in weekends, and your compliance and product cycles shift in quarters, you need a bridge: controlled experimentation with clear guardrails, not a multi-year programme that assumes the interface stays stable.

The other uncomfortable overlap is age and authority. Youth finance, agentic delegation, and regulation are about to collide. If a minor can instruct an agent to do something they cannot legally do, the liability will not stay with the AI platform. It will land on the institution that executed the transaction. Dazza Greenwood makes the underlying requirement explicit in his AI Agent ID write-up, pointing to the OpenID Foundation whitepaper’s warning that safe deployment depends on “determining who they act on behalf of, how they are authenticated, monitored, audited, governed”. That is the core of the problem: if you cannot reliably establish who an agent is acting for, and what authority it is carrying, you cannot safely let it take binding actions in commerce or finance. We Must Solve Agent Identity. This New Industry Whitepaper is a Starting Point. DazzaGreenwood’s Weblog 05 Nov 2025


OpenClaw and Moltbook are the warning sign for agentic infrastructure

OpenClaw is useful precisely because it makes the seams visible. It is not “AI in a chat window”. It is software that can take actions, connect to tools, and run tasks in a loop. That is why it has spread so fast, and why security teams are reacting so sharply.

Wired reports that major tech firms have restricted or banned OpenClaw internally over concerns about unpredictability, privacy breach risk, and how easily a tool like this could be manipulated through prompt injection or malicious inputs. Meta and Other Tech Firms Put Restrictions on Use of OpenClaw Over Security Fears - Wired Feb 17 2026.

Kaspersky’s write-up is even more blunt from an enterprise risk perspective. It highlights a cluster of issues that are depressingly familiar to anyone who has lived through shadow IT: insecure defaults, disclosed vulnerabilities, and secrets stored in plain text. It also notes that in a short period “secrets were leaked from Moltbook”, which it describes as essentially “Reddit for bots”. OpenClaw threats: assessing the risks, and how to handle shadow AI - Kaspersky Daily Feb 16 2026.

Then you have the credential leak problem in the wild. Sean Blanchfield claims “135,000+ openclaws exposed right now, and keys being stolen in real-time”, and positions Jentic as a mitigation layer so an agent “gets access but it doesn’t get the keys.” What a week: We've inadvertently solved the OpenClaw credential leak problem. Sean Blanchfield on LinkedIn Feb 20 2026

Whether you buy that specific remedy or not, the pattern is the point. When the interface shifts from humans to agents, credentials become action. Tokens become authority. Misconfiguration becomes money.
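The “access without keys” pattern is worth sketching, because it is the structural answer to tokens-as-authority. The idea: a broker process holds the real API key, and agents only ever receive short-lived, scoped grants. This is our own minimal illustration of the pattern, not Jentic's actual design or API.

```python
# Minimal sketch of a credential broker: the real key never reaches the agent.
# All names and the grant format are illustrative assumptions.
import secrets
import time

REAL_API_KEY = "sk_live_..."        # held only inside the broker process

class CredentialBroker:
    def __init__(self) -> None:
        self._grants: dict[str, tuple[str, float]] = {}  # token -> (scope, expiry)

    def issue_grant(self, scope: str, ttl_seconds: int = 300) -> str:
        """Give the agent a short-lived, scoped grant instead of the key."""
        token = secrets.token_urlsafe(16)
        self._grants[token] = (scope, time.time() + ttl_seconds)
        return token

    def call_api(self, grant: str, scope: str, payload: dict) -> dict:
        """The broker makes the upstream call itself, attaching REAL_API_KEY."""
        entry = self._grants.get(grant)
        if entry is None:
            return {"ok": False, "error": "unknown grant"}
        granted_scope, expiry = entry
        if time.time() > expiry or granted_scope != scope:
            return {"ok": False, "error": "expired or out of scope"}
        # Upstream request would be made here with the real key.
        return {"ok": True, "scope": scope, "payload": payload}
```

The payoff is blast-radius control: a leaked grant is worth one scope for a few minutes, not standing delegated authority over everything the key can touch.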

For travel leaders, Michael J. Goldrich’s hotel booking experiment is a brilliant illustration of what is happening on the customer side. A personal agent can run all day, learn preferences, and start to make recommendations before the user asks, which changes discovery and distribution. But the moment it hits stale data, weak interoperability, or fragile controls, it falls back to guesswork or to a legacy “punch-out” flow. AI Agents Hotel Booking: The Future of Hospitality Discovery Vivander Advisors Feb 17 2026

So yes, these tools are getting traction. But no, they are not ready to be treated as enterprise-grade infrastructure unless you are willing to risk business continuity. The right posture is to observe, sandbox, and learn, while you build the rails that make agent access safe.

An offer from Jamie and Gam (Trusted Agents)

2 hour Executive Briefing

A calm situation report and a practical decision framework for what to do this quarter, not in 2030

If you are feeling slightly overwhelmed by how much is happening week to week, you are not alone. We felt it too, until we stopped trying to track everything and instead built a repeatable way to separate signal from noise, using cross-functional and cross-industry research.

Trusted Agents runs short, sharp executive briefings that give leadership teams a clear view of what has changed, what assumptions are now risky, and which pilots can be run safely without betting the company. The goal is confidence, not chaos.

Sign up at The Trusted Agents

3 Companies to Watch

Disclaimer: we have no commercial interests in any of these organisations. We are tracking them because they are building parts of the infrastructure layer that will unlock agentic commerce.

  • Murfee AI
    Deterministic “airlocks” and auditability for agent-to-rail execution in high-liability travel workflows.
  • Fabric (OnFabric)
    A serious view on context portability, which becomes the moat when intelligence gets cheap.
  • Adyen
    A merchant-first approach to agentic commerce that prioritises control, flexibility, and avoiding early lock-in.

Questions to take into Monday

If you want one mental model to carry into next week, it is this: trusting an agent requires three things that most stacks still treat as optional. Identity (who is acting), delegation (what they are allowed to do), and context (what they can see, and under what terms).
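The three primitives compose into something like a request envelope, which an execution layer can check before any binding action. This is a sketch in our own field names, not a standard; agent identity specifications (such as the OpenID Foundation work cited above) are still emerging.

```python
# Identity, delegation, and context as one checkable envelope.
# Field names are illustrative, not drawn from any published spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    agent_id: str                # who is acting
    on_behalf_of: str            # the human or org principal

@dataclass(frozen=True)
class Delegation:
    allowed_actions: frozenset   # what they are allowed to do
    spend_limit: float

@dataclass(frozen=True)
class Context:
    visible_fields: frozenset    # what they can see
    terms_version: str           # under what terms

@dataclass(frozen=True)
class Envelope:
    identity: Identity
    delegation: Delegation
    context: Context

def can_execute(env: Envelope, action: str, amount: float) -> bool:
    # Refuse if we cannot attribute the action to a principal, if the
    # action is outside the delegation, or if it exceeds the spend limit.
    return (
        bool(env.identity.on_behalf_of)
        and action in env.delegation.allowed_actions
        and amount <= env.delegation.spend_limit
    )
```

Treating all three as required fields, rather than optional metadata, is the practical difference between an agent you can audit and one you can only observe.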

Five actions to take now:

  1. Identify the top three workflows where “agent speed” would amplify fraud or operational failure (refunds, changes, cancellations, dispute handling).
  2. Define where human-in-the-loop is non-negotiable, and what triggers step-up verification.
  3. Build an agent-access policy that treats tokens and API keys as delegated authority, not just integration plumbing.
  4. Create a sandbox where you can test agentic experiences without giving them production blast radius.
  5. Decide your stance: block, controlled endpoint, build your own agent, or A2A readiness, and communicate it internally.

Two leadership questions worth debating:

  • If an agent makes the wrong move at scale, can we prove what it saw and stop it fast?
  • Which part of our customer experience becomes invisible if discovery moves into agent interfaces?

Trusted Agents

An advisory firm specialising in Agentic Commerce, Digital Trust and Customer Empowerment.
