Situation Room Update: When the Customer Arrives as an Agent


Tuesday 31st March 2026


As AI agents begin to browse, compare, negotiate, and buy on behalf of customers, businesses need a clear way to recognise them, trust them, and set limits on what they can do.

Hi, welcome to the Trusted Agents situation room. We help leaders decide where customer agents will show up first, what trust signals matter, and which control points need to be built before scale arrives. Because by the time this feels obvious, it will already be late.

Let’s start with a simple story.

New York City launched MyCity to help small business owners navigate permits, rules, and city processes. It was meant to be practical and trusted. But Reuters reported that the chatbot gave wrong answers on issues that could expose business owners to legal risk, including minimum wage, tipping, cash acceptance, and scheduling. The city kept it live while fixes were being made.

That matters because the failure was bigger than bad answers.

The chatbot was speaking through an official city channel. It sounded close enough to authority that a user could reasonably assume it was safe to follow. Once an AI appears to speak for a city, a bank, a merchant, or a platform, the issue changes. The question is no longer only whether the answer is right. It is whether the system is acting with real authority, within real limits, and with clear accountability behind it.

That same shift is now coming to commerce.

In agentic commerce, the customer journey starts before checkout. An agent may discover products, compare offers, narrow choices, and shape the outcome long before a person lands on a payment page. Visa’s own materials make this point directly. Agents are influencing the shopping experience from the first interaction, not just at the end.

That creates a new decision for business leaders.

For years, merchants treated automated traffic as suspicious by default. That was sensible when most bots were scraping, probing, or abusing systems. But some automated traffic now represents real customer intent. A shopping agent looking for a room, a product, or a better price is different from a malicious script. So the business question becomes: how do we tell the difference, and what do we do when we can?

A useful way to frame the problem is through five questions.

Who is the agent?
Who authorised it?
What is it allowed to do?
What context is it using?
What can we prove afterwards?

If your team cannot answer those five questions, you do not yet have a trustworthy foundation for agentic commerce.

These questions are practical, not academic.

“Who is the agent?” is about recognition. “Who authorised it?” is about delegation. “What is it allowed to do?” is about scope. “What context is it using?” is about the data, history, and constraints shaping its decisions. “What can we prove afterwards?” is about auditability when a booking goes wrong, a payment is disputed, or a customer says, “I never approved that.”
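To make the framework concrete, the five questions map naturally onto fields a merchant could require on any inbound agent request. The sketch below is illustrative only; the record shape and function names are assumptions, not part of any published standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentRequest:
    """Hypothetical record a merchant might demand before serving an agent."""
    agent_id: Optional[str] = None          # Who is the agent?
    delegated_by: Optional[str] = None      # Who authorised it?
    allowed_actions: list[str] = field(default_factory=list)  # What is it allowed to do?
    context: Optional[dict] = None          # What context is it using?
    audit_ref: Optional[str] = None         # What can we prove afterwards?

def missing_answers(req: AgentRequest) -> list[str]:
    """Return the questions this request leaves unanswered."""
    gaps = []
    if not req.agent_id:
        gaps.append("Who is the agent?")
    if not req.delegated_by:
        gaps.append("Who authorised it?")
    if not req.allowed_actions:
        gaps.append("What is it allowed to do?")
    if req.context is None:
        gaps.append("What context is it using?")
    if not req.audit_ref:
        gaps.append("What can we prove afterwards?")
    return gaps

# An anonymous scraper fails on every question.
anonymous = AgentRequest()
print(len(missing_answers(anonymous)))  # 5
```

The point of the sketch is the readiness test in miniature: if a request arrives and every field is empty, you are facing a bot, not a customer.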

This is where many AI discussions still fall short.

A lot of attention goes into whether agents can do the task. Much less goes into whether they should be trusted to do it, on whose behalf, and under what commercial conditions. For leaders, that is the more important question. Agentic commerce is not just a technology shift. It is a channel shift, a trust shift, and eventually a liability shift.

That is why AIS-1, an emerging agent identity specification, is worth watching.

Its core idea is simple: pair the agent’s identity with the identity of the person or organisation responsible for it. That pushes the discussion beyond technical recognition and into something more useful for business. When an agent acts in the market, who stands behind it?

AIS-1 is also useful because it thinks in business tiers: Basic, Verified, and Sovereign. In plain English, that means not every agent deserves the same level of trust. A prototype assistant is one thing. An agent making commercial commitments, moving money, or operating in a regulated context is another. That is a helpful lens for leaders deciding where to allow experimentation and where to demand stronger proof and accountability.
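As a rough illustration of what tiered trust could look like in practice, here is a minimal gating sketch. The tier names come from AIS-1; which actions demand which minimum tier is an illustrative business assumption, not something the specification prescribes.

```python
from enum import IntEnum

class TrustTier(IntEnum):
    BASIC = 1      # prototype assistant: experimentation allowed
    VERIFIED = 2   # identified sponsor: commercial commitments allowed
    SOVEREIGN = 3  # strongest proof: money movement, regulated contexts

# Illustrative mapping: each business decides where its thresholds sit.
MIN_TIER = {
    "browse": TrustTier.BASIC,
    "compare_offers": TrustTier.BASIC,
    "place_order": TrustTier.VERIFIED,
    "move_money": TrustTier.SOVEREIGN,
}

def allowed(action: str, tier: TrustTier) -> bool:
    # Fail closed: unknown actions require the highest tier.
    required = MIN_TIER.get(action, TrustTier.SOVEREIGN)
    return tier >= required

assert allowed("browse", TrustTier.BASIC)
assert not allowed("move_money", TrustTier.VERIFIED)
```

The design choice worth noting is the fail-closed default: an action nobody classified is treated as high-risk until a leader decides otherwise.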

You can already see the market moving in that direction.

Visa's Trusted Agent Protocol (TAP) is one signal. It is focused on helping merchants distinguish trusted, commerce-focused agents from malicious bots at the edge of the interaction. That matters because it addresses a very practical merchant problem. If a real customer arrives through a new channel, you need some way to recognise that and respond differently.

The broader standards and policy world is moving too. You do not need to follow every acronym to take the point. Agent identity is shifting from an early debate into part of the market’s future infrastructure.

So what decisions should leaders make now?

First, decide where agentic commerce is most likely to reach you first. Search, comparison, service, booking, checkout support, loyalty, or post-sale care. Different sectors will feel this in different places, but very few will avoid it.

Second, decide which actions in those journeys are low-risk, medium-risk, and high-risk. Browsing and discovery are different from booking, payment, or regulated advice.

Third, decide what level of proof you need before an agent can cross each threshold. Recognition may be enough in one case. In another, you may need a stronger link to the customer, a clearer sponsor, or a fuller audit trail.

Then put the control points in place.

Give agents distinct identities rather than shared credentials. Make delegation explicit when they act for customers or employees. Set task-level limits. Log actions at the tool and transaction level. And ask one more question inside the business: who is the accountable sponsor when an agent acts? That question will matter more, not less, as agents move closer to real transactions and real decisions.
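Those control points can be sketched as the shape of a single audit record. This is a hypothetical structure with illustrative field names; what matters is that every action carries a distinct agent identity, an explicit delegation, a task-level limit check, and an accountable sponsor.

```python
import json
import time
import uuid

def log_agent_action(agent_id: str, sponsor: str, customer: str,
                     tool: str, action: str, limit_ok: bool) -> dict:
    """Build an audit record at the tool/transaction level."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent_id": agent_id,           # distinct identity, not a shared credential
        "on_behalf_of": customer,       # explicit delegation
        "accountable_sponsor": sponsor, # who stands behind the agent
        "tool": tool,
        "action": action,
        "within_limits": limit_ok,      # result of the task-level limit check
    }
    print(json.dumps(entry))            # in practice: write to an audit store
    return entry

rec = log_agent_action("agent-7f3", "Acme Travel Ltd", "cust-1029",
                       "booking_api", "hold_room", limit_ok=True)
```

When a customer later says "I never approved that", this record is what turns the dispute from an argument into a lookup.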

A simple readiness test will tell you where you stand.

Pick one journey where an external agent could plausibly arrive first. Then ask: could we recognise it, understand who it represents, know what it is allowed to do, and prove afterwards what happened? If the answer is no, this is already a business design issue, not just an innovation topic.

How Trusted Agents helps

We help leaders turn this shift into a business case quickly. We map what changes first over the next 12 to 18 months: where revenue and distribution get reshaped, where fraud and servicing pressure show up, and where data and controls become the bottleneck. Then we turn that into clear bets and a sequence of pilots, so teams stop reacting to demos and start steering.

And we do not leave you with a deck. We help you ship. We bring in delivery partners to build working prototypes that survive the real world: payments, refunds, legacy integration, policy, identity, controls, and data quality designed in from day one.

The bigger point is simple.

When the customer arrives as an agent, businesses will need to decide whether to block, ignore, tolerate, recognise, or trust that new channel. The winners will be the organisations that made those decisions early, and built the trust layer underneath them before scale arrived.

Want the next Situation Room note?

Trusted Agents

An advisory firm specialising in Agentic Commerce, Digital Trust and Customer Empowerment.
