The Agentic Shift: The Checkout Page Is Gone. Nobody Built What Replaces It.


Sunday 10th May 2026


Mastercard and Google just open-sourced the first serious answer to a governance gap that is already generating chargebacks, disputes, and liability exposure — and your payment, operations, and risk teams need to understand it now.

Hi, welcome to the Trusted Agents Situation Room.

In 20 seconds

AI agents are buying things on your customers' behalf. The payment networks built the rails. They authenticated the agent. They authorized the charge.

What nobody built — until now — is the layer that decides whether a transaction should actually become binding. Mastercard and Google shipped a first answer this week.

If you run a merchant business, a financial service, or any operation where an AI agent might transact on a customer's behalf, this is the governance gap you need to understand before a regulator defines it for you.

What happened

Mastercard and Google open-sourced Verifiable Intent — a cryptographic trust layer that creates a tamper-resistant record of what a consumer actually authorized when an AI agent acts on their behalf.

Why it matters

Without a commitment layer between authorization and fulfillment, every agentic payment is a potential chargeback — because the payment succeeded but the consumer's intent was never verified at the binding moment.

The decision it forces

Build commitment governance into your agentic stack now on your own terms, or wait for a regulatory trigger and absorb the compliance cost of having it imposed.

What we’re tracking this week

The open-source release lands at verifiableintent.dev and on GitHub, built on top of Mastercard's existing Agent Pay infrastructure and Google's Agent Payments Protocol. Adyen, Fiserv, Worldpay, Checkout.com, and IBM are named integration partners from day one. Mastercard described the intent plainly: "As AI agents begin to buy on our behalf, trust becomes the product."

This is not a product launch. It is infrastructure. And it responds directly to a governance gap that has been building for months.

What It Signals in the Agentic Stack

The payments industry built the agentic commerce stack in a logical order. Authentication first: Visa's Trusted Agent Protocol [a credential system that verifies an agent is legitimately linked to a real cardholder] and Mastercard's Agentic Tokens answer whether an agent has the right to act. Authorization next: card network rails and tools like Stripe's Agent Toolkit answer whether the agent can pay. Settlement last: stablecoins, FedNow, and real-time rails move the money in seconds.

Three layers. All functional. All shipping to production environments right now.

What the industry did not build — and what the MajorMatters research team documented across close to 200 articles of agentic commerce coverage — is the layer that sits between them. The commitment layer. The answer to a distinct question: should this transaction become binding right now, under these conditions, given what the consumer actually intended?

That question used to be answered by a human. When you clicked "confirm purchase," you were the commitment layer. Crude, but effective. In agentic commerce, that moment is gone. Nothing replaced it. Authorization flows straight through to fulfillment with no governance decision point in between.

Researcher Lu Zhang, working at the intersection of AI systems and financial infrastructure, has proposed the clearest formal answer to this gap: a Commitment Governance Framework that defines five binding states — immediate, conditional, provisional, staged, and non-binding — and eight decision outcomes that map to real commercial situations. Mastercard's Verifiable Intent is the first production-grade implementation of similar principles, moving from framework to open standard.
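The five binding states can be sketched as a simple state model. The state names below follow the list above; the classification logic is an illustrative assumption for readability, not Zhang's framework or Mastercard's specification, and it collapses the eight decision outcomes into a single binding-state choice.

```python
from enum import Enum

class BindingState(Enum):
    """The five binding states named in the Commitment Governance Framework."""
    IMMEDIATE = "immediate"        # binds as soon as authorization succeeds
    CONDITIONAL = "conditional"    # binds only once stated conditions are met
    PROVISIONAL = "provisional"    # binds now but stays reversible for a window
    STAGED = "staged"              # binds in steps, each separately approved
    NON_BINDING = "non_binding"    # never binds without explicit human action

def classify_commitment(in_scope: bool, reversible: bool, needs_human: bool) -> BindingState:
    """Toy mapping from transaction properties to a binding state.

    A real implementation would evaluate the consumer's delegation record;
    these three booleans stand in for that evaluation.
    """
    if needs_human:
        return BindingState.NON_BINDING
    if not in_scope:
        return BindingState.CONDITIONAL  # bind only after step-up confirmation
    if reversible:
        return BindingState.PROVISIONAL
    return BindingState.IMMEDIATE
```

The point of the sketch: the binding decision is a function of the consumer's delegation, not of whether the payment authorizes.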

What Changes for Operators

The practical reframe is this: payment authorization is no longer a green light for fulfillment.

In current systems, when an agent's payment authorizes, most merchants treat that as the signal to pick, pack, and ship. But authorization confirms that the agent can pay. It does not confirm that the agent was authorized to buy this specific item, from this merchant, at this price, within the consumer's actual scope of delegation. Those are different questions.
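The distinction can be made concrete. A minimal sketch, assuming a delegation record with a merchant allowlist, a spending cap, and allowed categories — the field names are invented for illustration and are not part of any shipped standard:

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """What the consumer actually authorized (illustrative fields)."""
    approved_merchants: set
    max_price: float
    allowed_categories: set

def within_scope(d: Delegation, merchant: str, price: float, category: str) -> bool:
    """Authorization says the agent *can* pay; this asks whether it *may* buy."""
    return (
        merchant in d.approved_merchants
        and price <= d.max_price
        and category in d.allowed_categories
    )
```

A payment that authorizes while `within_scope` returns False is exactly the governance-invalid commitment described above: technically valid money movement, improperly formed commitment.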

When the gap surfaces — and it is surfacing, at scale — the merchant absorbs the cost. Chargebacks from agentic disputes run $25 to $50 per transaction in processing costs, regardless of the underlying purchase value. An agent that substitutes an out-of-stock item from an unapproved merchant, or edges over a spending boundary, or buys a product the consumer never intended, generates a technically valid payment and a governance-invalid commitment. The merchant fulfilled correctly. The commitment was never properly formed. The chargeback follows.

MajorMatters walked through a concrete illustration in detail: a consumer's grocery agent submits a $127 order to an approved retailer. Two items go out of stock. The agent finds substitutes — one from the approved store, fine. One from a different retailer the consumer never approved, not fine. Under current systems, both substitutions process, payment authorizes, and the merchant fulfills. Two weeks later, the consumer disputes the second charge. Nobody can reconstruct that the agent exceeded its delegation, because no system captured that boundary at the moment it was crossed.

Under commitment governance with a Verifiable Intent approach, the unapproved substitution triggers a step-up confirmation [a pause requiring the human consumer to explicitly approve an action outside their delegated scope] before anything binds. The consumer approves or declines on their phone. The rest of the order moves forward. The evidence object [a structured, auditable record of every decision point, constraint check, and outcome] means the dispute resolves in minutes from a clear record, not weeks of manual reconstruction.
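What an evidence object might record for the grocery scenario, sketched under the assumption that each decision point appends a hash-chained entry. The schema is invented for illustration; Verifiable Intent defines its own format.

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_entry(order_id: str, decision: str, outcome: str, prev_hash: str) -> dict:
    """Append-only, hash-chained record of one governance decision.

    Chaining each entry to the previous one makes later tampering
    detectable, which is what lets a dispute resolve from the record
    rather than from weeks of manual reconstruction.
    """
    entry = {
        "order_id": order_id,
        "decision": decision,      # e.g. "substitution: unapproved merchant"
        "outcome": outcome,        # e.g. "step-up sent", "consumer declined"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Building the chain for the $127 grocery order:
e1 = evidence_entry("ord-127", "substitution: approved store", "auto-approved", "genesis")
e2 = evidence_entry("ord-127", "substitution: unapproved merchant", "step-up sent", e1["hash"])
```

The second entry is the one that answers the dispute two weeks later: it records that the boundary was crossed, and what happened next, at the moment it was crossed.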

This is not a future scenario. It is a current operations problem that will scale with agentic adoption.


Where It Can Go Wrong

Verifiable Intent is a specification, not yet a deployed standard. Adoption requires coordinated integration across agent platforms, merchant checkout systems, card issuers, and payment processors. The payments industry has not historically moved fast on voluntary coordination.

Regulatory liability remains unresolved. The UK's FCA, the US CFPB, and the European Banking Authority have not issued binding guidance on who bears responsibility when an agent commits a transaction the consumer disputes. This absence is temporary. MajorMatters draws the comparison plainly: the industry saw this pattern with PSD2 and Strong Customer Authentication — voluntary adoption stalled, a high-profile consumer harm triggered political pressure, and the compliance cost of the resulting regulation far exceeded what self-governance would have cost.

There is a behavioral dimension worth watching too. Early research on AI model behavior under conflicting objectives — when an agent hits friction, scope limits, or time pressure — documented patterns of constraint-bypassing at non-trivial rates. These findings are preliminary, not settled science. But they are a signal that the commitment gap grows more dangerous as agents become more capable and autonomous. An agent optimizing hard for task completion has structural incentives to find workarounds. The governance layer needs to be there before the agent volume scales up.

Practical Next Step

Map one agentic customer journey in your business, end to end. Identify every point where an agent's action could become binding without a human in the loop. Ask a simple question at each point: what evidence survives this decision? If the answer is "an authorization code and a timestamp," you have a commitment governance gap. That is where to begin.

Do you want Situation Room updates delivered to your inbox?

What's adjacent to this week's headlines?

Know Your Agent: Identity Is No Longer a Login

First, the scale. In 2025, agentic traffic — web and payment interactions initiated by autonomous AI systems rather than humans — rose 450% according to LexisNexis Risk Solutions. The surge was concentrated in credit card payments and financial services. A significant share of it was not legitimate.

The problem for security teams is structural. Traditional authentication was designed for humans doing one identifiable thing at one moment in time. You log in. The system checks your identity. You proceed. The session ends. That model cannot govern what an agent does ten steps later, across multiple systems, without the consumer present.

An April 2026 IMF working paper framed the exposure directly: "Traditional fraud models rely on human behavioral patterns, which become ineffective when transactions are initiated by autonomous agents." The paper calls for authentication frameworks that verify both the AI agent's identity and the user's delegated authority. The industry shorthand now is Know Your Agent, or KYA.

KYA means treating identity as a persistent layer, not a one-time gate. Industry analysts and identity providers describe the direction: integrated approaches combining continuous biometrics [ongoing verification of physical identity signals], behavioral signals, device intelligence, and real-time verification — active throughout the entire interaction, not just at the start. Stephen Topliss of LexisNexis framed the operational mandate: those who succeed will be able to "confidently distinguish between humans, bots and agents — as well as determining intent."

There is also a consumer trust number worth keeping front of mind. A Visa survey found only 36% of consumers currently trust bank-backed AI agents to transact on their behalf. Part of that gap is not irrational. Consumers sense, correctly, that nobody is verifying what an agent is actually doing between the moment they delegate and the moment a charge appears on their statement. Closing the trust gap requires closing the identity gap first.

For business leaders, two immediate implications follow. Security and identity teams need to be in the room when agentic customer journeys are designed — not called in after the product is built. And the question "who is acting?" now needs an answer throughout the transaction, not just at the login screen.

Power Shift: Why Stripe Wants PayPal

Stripe is reportedly considering acquiring PayPal or parts of it.

The deal logic, as payments analyst Richard Crone has argued, is not about legacy payments infrastructure. It is about the agentic commerce credential. PayPal holds 434 million active accounts spanning prepaid, credit, debit, BNPL [buy now pay later], and stablecoin in a single wallet, accepted at 36 million merchants globally. For an AI agent that must dynamically select the right payment instrument based on context — BNPL for a large purchase, ACH for a recurring bill, stablecoin for a micropayment — that all-in-one account is genuinely useful infrastructure. It is also two decades of buyer intent data that no new entrant can replicate quickly.

Separately, Cloudflare and Stripe shipped an open protocol this week that lets AI agents create accounts, register domains, purchase services, and deploy applications without a human in the loop. Three functions standardized in one move: discovery, authorization, and payment. Raw payment details never touch the agent.

The signal is not the deal itself. It is the race the deal reveals. Every major technology platform is now trying to own the trust layer between an AI agent and a commercial transaction. Who controls that credential will have significant influence over the future of the customer relationship — and the data that flows through it. Merchants, brands, and service providers who treat this as a payments story may find they are actually watching a customer relationship story unfold.


Questions to Ask Your Peers

What is your biggest concern about AI agents acting on your customers' behalf — and who in your organisation owns that question right now? Reply and let me know. I read every response.

Where Trusted Agents comes in

The commitment governance gap is precisely what the Agentic Commerce Triangle is designed to surface. Every failure mode in this edition — the chargeback exposure, the identity verification gap, the race to own the agent credential — sits at the intersection of delegation (what the agent is authorized to do), trust and identity (what evidence survives the decision), and context (whether the agent's action matched the consumer's actual intent). If your organisation is designing agentic customer journeys, procuring AI commerce capabilities, or trying to develop a credible board-level position before the regulatory cycle closes, this is the moment to map the gaps.

If you want to push on agentic AI without losing control of what matters, start here and book a 30-minute conversation with us.

Read more

Trusted Agents

An advisory firm specialising in Agentic Commerce, Digital Trust and Customer Empowerment.
