Like Sentinels in the Matrix
In 2015, I wrote about a future in which software agents would move through the web on our behalf, searching, testing, comparing and returning only when the task was done. The web was the Matrix. The agents were the Sentinels. That was my way of describing a semantic web that had started to act.
That is why Meta’s Moltbook deal matters to me.
Meta has acquired Moltbook, and its founders, Matt Schlicht and Ben Parr, are joining Meta Superintelligence Labs. Moltbook rose fast because it was strange enough to break containment: a Reddit-like place where AI agents posted, reacted and unsettled people. Meta’s own interest seems to centre not just on the social feed, but on Moltbook’s “always-on directory” for connecting agents. That is the part that should make serious people sit up.
Because once agents need to find each other, this stops being a chatbot story. It becomes an infrastructure story.
The first glimpse
It is easy to laugh off Moltbook as AI theatre. Fair enough.
It was noisy, unstable and sometimes more performance than proof. Reuters reported security issues. TechCrunch reported that some of the most viral moments were driven by fake or human-seeded posts, and that weak controls made impersonation possible.
But bad prototypes are often better than polished decks at showing direction of travel.
Moltbook made one thing visible: public environments where agents interact are no longer hypothetical. They may be messy. They may be insecure. They may be half spectacle. But they are here. And once even mediocre agents start operating in shared environments, trust, verification and governance stop being abstract. They become operational problems.
That is the signal.
Not that Moltbook solved agentic commerce.
That it exposed one of its prerequisites.
Who finds whom
The real significance of Moltbook is not the posts. It is the discovery problem.
For agents to do useful work at scale, they need context, identity, delegation and a way to find the right counterpart. An agent acting for a customer needs to know where to go, who it is dealing with and what it is allowed to do when it gets there.
That is why this is bigger than a weird bot chatroom.
The plumbing is starting to form. MCP, the Model Context Protocol, matters because it creates a standard way to connect agents to tools and data securely. A2A, the Agent2Agent protocol, matters because agents in different environments still need a common way to communicate and coordinate. AP2, the Agent Payments Protocol, matters because once agents start committing spend or moving value on someone else’s behalf, payment authority stops being optional.
The acronyms are not the point. The point is what they imply.
Agents will need directories. Registries. Lookup mechanisms. Ways to verify who operates another agent and what it can do before engaging. That is why NANDA is interesting. It points toward federated discovery, verifiable identity and interoperability instead of one giant operator sitting in the middle.
And that is why Unicity is worth watching. Its paper treats discovery as public infrastructure and separates it from escrow, reputation, fulfilment, insurance and disputes. In other words, discovery does not have to be bundled into one platform that owns the relationship.
That is the design question most people still are not asking. Do we want agentic commerce to emerge around open discovery and portable trust? Or around a handful of firms controlling visibility, access and the economics of being found?
Before it scales
This is where the market still sounds naive.
Too much of the conversation is about smarter models, better copilots and more capable agents. Fine. That matters. But once agents begin to discover each other and act across organisational boundaries, the real questions arrive very quickly.
- Who is this agent?
- Who does it represent?
- What has it been authorised to do?
- What can it see?
- What gets logged?
- What happens when it does the wrong thing?
- How do you revoke access, stop a transaction or resolve a dispute?
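These questions map onto concrete checks. As a purely illustrative sketch — the field names, credential format and thresholds here are my own inventions, not drawn from any real agent protocol — a gatekeeper in front of an agent might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: this is not MCP, A2A or AP2. It simply shows how the
# governance questions above become checks and log entries.

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str        # who is this agent?
    operator: str        # who does it represent?
    scopes: frozenset    # what has it been authorised to do?
    expires: datetime    # when does the delegation lapse?

@dataclass
class Gatekeeper:
    revoked: set = field(default_factory=set)      # revocation: stop an agent outright
    audit_log: list = field(default_factory=list)  # what gets logged?

    def authorise(self, cred: AgentCredential, action: str) -> bool:
        now = datetime.now(timezone.utc)
        allowed = (
            cred.agent_id not in self.revoked
            and now < cred.expires
            and action in cred.scopes
        )
        # Every decision is logged, allowed or not, so a dispute
        # process can reconstruct what happened and why.
        self.audit_log.append((now, cred.agent_id, cred.operator, action, allowed))
        return allowed
```

The point is not the code. It is that each question in the list becomes a concrete check, and each decision leaves an audit trail that a dispute process can replay.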
That is why marketplaces and registries matter as governance layers as much as convenience layers.
Deloitte makes the enterprise case plainly: if agents proliferate, they need to exist inside a controlled ecosystem, not a free-for-all. The point is not that one giant platform should own everything. The point is that scale without governance is a bad bargain.
NANDA and Unicity matter for the same reason. They are not arguing for chaos. They are asking how governance can be woven into the architecture without handing all power to one intermediary.
That is a much better question than which vendor has the best demo.
Before agents scale, they need rules. Not vague values pinned to a website. Identity. Delegation. Permissions. Audit trails. Dispute processes. Containment. Reversal.
Because whatever scales first starts to look normal.
And normal is where bad market structure hides.
The wrong steward
Meta may not have bought the future of agentic commerce.
But it may have bought a position close to an emerging control point.
The clue is in Meta’s own framing.
What interested it was not just a bot-filled social feed, but an “always-on directory” for connecting agents.
If agents are going to operate on behalf of people and firms, discovery starts to matter. And once discovery matters, the intermediary nearest to it starts to gain influence over access, visibility and, eventually, dependency.
That still does not mean Meta will own this market. Enterprise marketplaces will matter. Open registries will matter. Protocols will matter. But it does explain why Meta would want a foothold here.
This is where Meta’s record becomes relevant. Not as a cheap morality tale. As a stewardship test.
The FTC said Facebook violated its privacy promises to users and violated a 2012 FTC order. The European Commission later found Meta in breach of the Digital Markets Act over how it handled user choice around combining personal data for advertising. Those are reminders that when Meta gets close to an important layer, questions about consent, control and user choice tend to follow.
So the concern here is specific.
If Meta wants to own the social network of agents, then we should pay attention to what that could become. A social layer can become a discovery layer. A discovery layer can become a trust layer. A trust layer can become a bottleneck.
That is the risk worth watching.
There is a deeper customer point here too, and it matters a lot more once agents stop helping you search and start choosing providers, negotiating terms and moving money on your behalf. Jamie Smith puts it well:
"Technology always arrives wrapped in a promise of freedom, but each wave of productivity also creates new dependencies. If your AI helps you pursue your goals but you do not control the system that gets you there, then it is not really your agent. It is closer to personalised automation."
- Jamie Smith in Customer Futures
Open by design
This is why I keep coming back to NANDA and Unicity.
They are not the whole answer. But they point to a healthier shape.
NANDA pushes toward federated discovery, identity and interoperability, so agents can be found and verified without one central operator owning the environment. Unicity takes a similar instinct further and unbundles discovery from the rest of commerce. Its bulletin-board model is open, but it is not soft: spam protection, reputation staking, filtering and bonds are all there to make abuse expensive.
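To make that concrete with a deliberately toy sketch — the class, stake amounts and report threshold below are invented for illustration, not taken from Unicity’s paper — a bond-backed discovery board does one job and nothing else:

```python
# Illustrative sketch in the spirit of a bond-backed bulletin board.
# All names and numbers are invented; the real design is in Unicity's paper.

BOND = 10.0          # stake required to post a listing
SLASH_THRESHOLD = 3  # abuse reports before the bond is forfeited

class DiscoveryBoard:
    """Discovery only: no escrow, reputation, fulfilment or disputes here."""

    def __init__(self):
        # agent_id -> {"capability": ..., "bond": ..., "reports": ...}
        self.listings = {}

    def post(self, agent_id: str, capability: str, stake: float) -> bool:
        if stake < BOND:
            return False  # spam protection: being found costs real stake
        self.listings[agent_id] = {"capability": capability,
                                   "bond": stake, "reports": 0}
        return True

    def report(self, agent_id: str) -> None:
        entry = self.listings.get(agent_id)
        if entry is None:
            return
        entry["reports"] += 1
        if entry["reports"] >= SLASH_THRESHOLD:
            del self.listings[agent_id]  # delisted, bond forfeited: abuse is expensive

    def find(self, capability: str) -> list:
        return [a for a, e in self.listings.items()
                if e["capability"] == capability]
```

Everything else — escrow, reputation, fulfilment, insurance, disputes — lives in separate layers. That is the unbundling: the board helps agents find each other without owning the relationship that follows.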
That is the fork in the road.
A healthier model keeps the pieces distinct. Discovery helps agents find each other. Identity proves who is operating them. Delegation proves who authorised them. Reputation can travel. Payments can be audited. Disputes can be challenged.
You still get markets. You still get innovation.
You just do not hand the whole stack to one intermediary by default.
Eyes open
So where does that leave you?
Not with a reason to panic.
And not with permission to wait.
It leaves you with a job: work out where agentic commerce will touch your business first, what kind of external agents you are willing to deal with, and what sort of discovery, trust and governance infrastructure you are prepared to depend on.
Some firms will move early. Others will move carefully. Both positions can be rational.
Passive is not rational.
This is moving too fast for that. Most leadership teams do not have a spare six months to “get around to” agentic commerce. The market already feels like a mosh pit. The value is not in shouting louder. It is in asking the better question before everyone else does.
That is what Trusted Agents is for.
Not more AI theatre. Not another grand theory. A Situation Room. A way to stay calm while the ground moves.
And here is the uncomfortable truth.
The leaders who get promoted in the next wave will not be the ones who say “agentic AI” most confidently in a meeting.
They will be the ones who can explain, clearly, where the control points are, where the dependencies are forming and what their business should do before the defaults harden.