The Agentic Shift: From Legal Work to Legal Systems


Sunday 29th March 2026

Why the next phase of legal AI is about operating model, not just productivity

You’re in the Trusted Agents Situation Room.

Each post gives you the signal behind the headlines and the practical implications for strategy, operations, and trust.

And when you’re ready, we help you build, not just plan, with an integrated delivery crew that can prototype fast and take it to production.

In 20 seconds

Legal AI is moving beyond drafting and research. It is beginning to reshape how legal work is delivered, how firms are priced, how trust is evidenced, and how responsibility is assigned when software starts acting inside real workflows.

What happened

This week’s clearest signal came from Sabastian Niles, President and Chief Legal Officer of Salesforce, writing at the Harvard Law School Forum on Corporate Governance that AI is now a business-model catalyst for law, not a distant future capability.

Why it matters

Once clients start expecting measurable gains in speed, insight, transparency, and value, AI stops being a software decision and becomes an operating-model decision. Niles’ point is that the pilot phase is over, and firms now need to turn AI capability into something clients can actually trust and buy.

The decision it forces

Do you treat AI as a faster assistant inside the existing model, or do you start redesigning the firm around machine-executed workflows, auditable controls, and new forms of revenue?

What we’re tracking this week

  • Harvard / Salesforce: the market is moving from isolated AI tools to what Niles calls trusted agentics, with governance, oversight, unified client intelligence, and traceable decision paths becoming part of the service expectation.
  • Todo.Law / Gavel: as autonomous transactions become normal, the dispute layer becomes part of the legal stack. Their framing is crisp: agents transact, disputes are inevitable.

Do you want Situation Room updates delivered to your inbox?

Law is moving from expert labour to trusted execution

Sabastian Niles used the Harvard Law School Forum to make a much larger argument than “firms should adopt AI.” His point was that legal AI is now reshaping how firms create value, manage risk, earn trust, and compete. He argues that the era of the AI pilot is over, and that firms now need accountable, governed, client-facing AI rather than a scattering of disconnected tools.

That pressure is already visible in the market. Global Legal Post reported in November that Clifford Chance was proposing cuts to London business-services roles, with greater use of AI cited among the drivers. The same report said 8% of clients were already specifying generative AI use in tender documents, that 45% of the top 20 UK firms now had a head of AI, and that more than three-quarters had in-house teams driving AI transformation.

I kept thinking about a recent conversation with Sergio Maldonado, founder of Todo.Law, and also a practising lawyer and tech entrepreneur. We were discussing the similarity between code and contracts. Both use precise language to encode intended behaviour. Both are interpreted repeatedly by a third party, whether that is a court or a processor. And now both can be generated, at least in draft form, through natural-language systems. That does not remove the human role. It raises the value of human judgment around edge cases, incentives, and real-world consequences. Sergio matters here for another reason too: Todo.Law is building infrastructure for the kind of disputes these agentic systems will create.

What it signals in the agentic stack

The deeper signal is that legal AI is moving through three stages.

First, it helps professionals produce work faster.
Second, it starts to sit inside workflows and shape how work is executed.
Third, it forces firms to rethink trust, evidence, and accountability when software is no longer just suggesting text but influencing decisions and actions.

That is why Niles’ argument about trusted agentics matters. In the Harvard piece, agentic systems are described as requiring multiple layers working together, plus governance, auditability, and traceable decision paths. This is not just better chat. It is early operating-model design.

What changes for enterprises

This is where the revenue model starts to move.

In the old model, a firm drafts contracts, negotiates terms, and then steps back in when a dispute or enforcement issue appears. Revenue is tied largely to human time and episodic matters. In the emerging model, part of the value sits in maintaining a living view of the client’s contractual position and legal exposure. A firm that monitors a repository against new case law, flags material shifts in interpretation, and advises on remediation is not just selling expertise by the hour. It is selling an ongoing service layer built on context, monitoring, and judgment.

This goes further than pressure on the billable hour. It points to a shift from reactive legal service to persistent legal intelligence, where a firm is no longer paid only to draft, negotiate, defend, or prosecute, but to keep the client continuously informed of how legal risk is changing. That is not just a different way to price the same work. It is a different category of service.
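To make the "persistent legal intelligence" idea concrete, here is a minimal sketch of what the monitoring loop at the heart of such a service could look like. All the names here (`Clause`, `Ruling`, `flag_affected_clauses`, topic-based matching) are hypothetical illustrations, not any vendor's actual product; a real system would match on far richer signals than a topic label.

```python
from dataclasses import dataclass

@dataclass
class Clause:
    contract_id: str
    topic: str    # e.g. "limitation of liability"
    text: str

@dataclass
class Ruling:
    citation: str
    topics: set[str]    # legal topics the ruling touches
    summary: str

def flag_affected_clauses(
    repository: list[Clause], new_rulings: list[Ruling]
) -> list[tuple[Clause, Ruling]]:
    """Return (clause, ruling) pairs where a new ruling touches a
    clause's topic, so a lawyer can assess whether remediation is needed."""
    flagged = []
    for ruling in new_rulings:
        for clause in repository:
            if clause.topic in ruling.topics:
                flagged.append((clause, ruling))
    return flagged
```

The point of the sketch is the shape of the service: the repository is live, the case-law feed is continuous, and human judgment is applied only to the flagged intersections.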

Where it can go wrong

The obvious failure mode is still the public one: hallucinated citations, invented authorities, and weak review. Niles points to the fragility of trust when firms cannot show rigour, guardrails, and human oversight around AI-assisted work.

But the deeper risks sit lower in the stack. Bad context. Poorly scoped delegation. No evidence trail. No escalation path. No clear answer to what happened when an agent acted wrongly but within its permissions. Those are the points where legal AI stops being a tooling issue and becomes an operating-model issue.

Practical next step

Firms should stop asking only where AI can save time and start asking three questions: which services could become continuous, monitored, and evidence-led; what controls those services require; and how those services should be priced once the value sits in ongoing risk visibility rather than hours worked.

What's underneath this week's headlines?

When agents act, the dispute layer becomes legal infrastructure

Earlier this week, Trusted Agents pushed a provocation to the travel industry: free cancellation, customer service, and post-booking remediation were designed around human behaviour, human misunderstanding, and human channels of recourse. The moment software starts acting on delegated intent, those assumptions begin to break. That is not just a customer-experience issue. It is a legal and evidential one.

Charles M’s piece at Major Matters makes the gap very clear. The stack is filling in fast across discovery, trust, payment rails, wallets, and merchant integration, but there is still no meaningful dispute layer for what happens after an agent-initiated transaction goes wrong. His line that this function has “zero coverage” is hard to ignore, because it describes the exact part of the system where trust is actually tested.

That matters for legal leaders because agent disputes do not fit today’s chargeback machinery. Major Matters argues that existing evidence types such as IP geolocation, device fingerprints, browser data, and session logs simply do not exist when the buyer is a software agent, and that new failure modes such as scope drift, intent mismatch, cascading errors, and cross-agent conflicts have no clean reason codes or adjudication path.

So the real point is larger than payments. Once autonomous systems transact, negotiate, or execute on behalf of users, post-transaction dispute resolution stops being back-office plumbing and becomes part of the legal stack itself. The question is no longer just whether an agent was authorised to act. It is whether the system can prove what it was asked to do, what it actually did, and how liability should be assigned when the two do not match.
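Proving "what it was asked to do, what it actually did" is, at bottom, a tamper-evident record of intent and action. A minimal sketch, using a simple hash chain so earlier entries cannot be silently rewritten; the event shapes and field names are assumptions for illustration, not a standard.

```python
import hashlib
import json

def append_event(chain: list[dict], event: dict) -> dict:
    """Append an event whose hash covers the previous entry, so the
    record of 'asked' vs 'done' cannot be rewritten after the fact."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"prev": prev_hash, **event}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash and link; any edited entry breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An adjudicator given such a chain can compare the recorded intent event against the recorded action event and assign liability on evidence rather than reconstruction.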

Why Harvard and Todo.Law are really saying the same thing

The Harvard piece and Todo.Law may look as though they sit at different levels of the stack, but they are making the same argument from opposite ends.

First, both treat trust and governance as product requirements, not compliance paperwork. Niles argues for accountability, oversight, risk management, unified client intelligence, and traceable decision paths. Gavel turns that logic into infrastructure: structured evidence, legally binding rulings, escrow enforcement, transparent audit trails, and configurable escalation to human arbitrators.

Second, both assume autonomous action is normal and failure is inevitable. Harvard frames agentic AI as systems that act, decide, and execute at scale. Todo.Law starts with the simpler line: agents transact, disputes are inevitable. One is strategy language. The other is product language. But they describe the same world.

Third, both land on the same operating model: automate by default, escalate when necessary. Harvard calls for guardrails and human oversight around agentic workflows. Gavel implements that directly by resolving routine disputes automatically, escalating high-value or ambiguous cases to licensed human arbitrators, and carrying full evidence into that handoff.
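The "automate by default, escalate when necessary" doctrine reduces to a routing decision. A hedged sketch of what that router could look like; the thresholds, the `Dispute` fields, and the ambiguity score are all hypothetical, and real escalation logic would weigh many more factors.

```python
from dataclasses import dataclass

@dataclass
class Dispute:
    dispute_id: str
    value: float
    ambiguity: float     # 0.0 = clear-cut, 1.0 = highly ambiguous
    evidence: list[str]  # evidence bundle carried into any handoff

# Illustrative thresholds only; real values are a policy decision.
VALUE_THRESHOLD = 1_000.0
AMBIGUITY_THRESHOLD = 0.3

def route(dispute: Dispute) -> str:
    """Automate by default; escalate high-value or ambiguous cases to a
    human arbitrator, with the full evidence bundle attached."""
    if dispute.value > VALUE_THRESHOLD or dispute.ambiguity > AMBIGUITY_THRESHOLD:
        return "human_arbitrator"
    return "auto_resolve"
```

The design choice worth noticing is that the evidence travels with the dispute either way, so the human who receives an escalation starts from the same record the automated path used.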

Put simply, Harvard is describing the management doctrine of the agentic legal era, while Todo.Law is showing what one slice of that doctrine looks like when turned into executable infrastructure. That is why this matters beyond commerce. Legal leaders are going to need to think far more concretely about where evidence lives, how decisions are traced, when humans step in, and what counts as an enforceable outcome once software becomes an actor in the workflow.

What this means in practice

The shift in legal is no longer about whether AI can help. It is about which parts of legal service become continuous, governed, and machine-executed first, and which firms prepare early enough to shape that change rather than absorb it late.

For firms in regulated industries, speed is not the same thing as haste. They cannot move recklessly, especially when clients depend on them for continuity, assurance, and defensible judgment. But the technology is moving quickly, and firms still need a way to prepare for adoption as the applications mature and the safety, control, and governance layers become fit for purpose.

If this week’s shift is that legal AI is no longer just a tool story but an operating-model story, then the need is not another generic AI workshop. It is a practical path that helps leadership teams understand what is changing in trust, governance, delegation, evidence, and revenue model, and then turn that understanding into something concrete.

That is where Trusted Agents fits: helping leadership teams explain the shift clearly, make strategic choices about where value and risk will move first, and build production-ready prototypes grounded in real controls. Not a deck. Not a demo. A working prototype shaped for the real world, with governance, identity, policy, data quality, and human oversight designed in from the start.

Come and see us at Trusted Agents

Three organisations making this real

  • Todo.Law is building the dispute layer for agentic transactions. Gavel is designed for machine-speed dispute resolution, with structured evidence, binding arbitration, escrow enforcement, and human escalation where needed. It matters because it focuses on the part of the stack that only becomes visible when autonomous transactions fail.
  • Luminance is building contract intelligence as an operational layer across the enterprise. Its platform automates drafting, negotiation, review, and compliance work, turning contracts into a live source of context rather than a static record. It matters here because it shows how legal AI moves from assistance into execution.
  • Ironclad is building AI-assisted contract lifecycle management inside everyday business workflows. Its platform helps teams generate, review, negotiate, and manage contracts with more structure, visibility, and speed. It matters because it shows how legal work becomes embedded, repeatable, and increasingly machine-supported inside the flow of business.

Disclaimer: we have no commercial interests in any of these organisations. We are tracking them because they are building parts of the infrastructure layer that will unlock agentic commerce.

One question to take back to the firm

A useful question for partners, innovation leads, knowledge teams, and risk stakeholders is this: which part of legal work do we believe clients will expect to become AI-native first? Research, drafting, contract monitoring, or dispute handling?

Trusted Agents

An advisory firm specialising in Agentic Commerce, Digital Trust and Customer Empowerment.
