Last week, Thomson Reuters announced that CoCounsel had reached one million users across 107 countries and territories. At the same time, Anthropic unveiled an expanded suite of enterprise plugins for Claude, including specialized tools for legal, finance, and HR work.
These announcements, coming within hours of each other, crystallized what’s really happening in legal AI—and why a Wikipedia screenshot from weeks ago matters more than ever.
A few weeks back, a screenshot posted by a founder on X made the rounds on LinkedIn: a general counsel had tested Anthropic's Claude for contract review, and the AI had pulled information from Wikipedia.
Cue the hot takes. AI skeptics declared victory: foundation models aren’t ready for legal work. AI bulls shrugged it off as growing pains. Both sides missed what that screenshot actually revealed about where this market is heading.
I’ve spent years building AI for lawyers at Thomson Reuters. That Wikipedia moment wasn’t an AI failure. It was a systems failure. Understanding the difference determines who wins the next decade of legal tech—and this week’s announcements show that battle is intensifying.
The Missing Context
When that GC tested Claude, the system did exactly what it was designed to do: pull from available sources. No legal research database, no authoritative content, no firm precedents. Just the open web, which includes Wikipedia.
Most reactions split into predictable camps. One said foundation models can't handle legal work; the other said the models will improve. Both miss the real issue.
Claude and ChatGPT are remarkably capable. The problem isn't intelligence; it's whether the surrounding system is designed for the task at hand, with authoritative sources, expert oversight, and practical safeguards built in.
This is an architecture problem.
The Anthropic Moment
Anthropic’s announcement makes this divide concrete. The company launched department-specific plugins, including one for legal work that can review documents, flag risks, triage NDAs, and track compliance. Companies can now connect Claude Cowork to Google Drive, Gmail, DocuSign, and other enterprise systems.
This is exactly the kind of move that rattled software stocks in February—our shares at Thomson Reuters fell more than 30% in the initial selloff. But when we announced CoCounsel’s one million users, our stock jumped 11% in its biggest single-day gain since 2009.
The market is starting to understand something important: there’s a fundamental difference between AI that can automate workflows and AI that can handle authoritative legal work.
The Real Divide in Legal AI
A lot of confusion in today’s legal AI debate comes from treating all legal work as the same when it isn’t. Legal work can be broadly divided into two categories: work that requires authority and work that doesn’t.
There is a large and valuable category of legal work that does not require authoritative legal sources. Lawyers and legal teams routinely use software to standardize formatting, compare contracts against internal playbooks, manage billing and timesheets, or automate internal workflows. None of that requires case law, statutes, or regulatory validation.
This is where products like Cowork, Harvey, and Legora largely operate today.
Why Cowork’s Legal Plugin Changes the Game
Anthropic's legal plugin deserves special attention because it attacks the non-authoritative layer of legal work, and it does so extremely well. By focusing on internal documents, workflows, and operational efficiency, it competes directly with the vertical startups on most of their core use cases.
With enterprise connectors to existing systems and the ability for companies to build custom plugins, Cowork is positioning itself as the operating system for legal operations work. That’s a direct threat to vertical legal AI startups.
But—and this is crucial—that does not make Cowork a substitute for systems designed to handle authoritative legal work. And conflating those categories obscures what’s really happening in the market.
Where Authority Actually Matters
Things change when legal work requires authority:
• Researching an unresolved legal issue
• Developing novel arguments
• Validating an agreement against statutes or regulations
• Producing work that must be cited, audited, and defended
These tasks require authoritative content and systems designed to manage risk, accountability, and trust.
This is where Thomson Reuters operates, with CoCounsel.
When we built CoCounsel, we didn’t wrap a foundation model in a user interface. We integrated Westlaw’s database, containing millions of court decisions, statutes, and regulations curated over decades by legal experts. We connected Practical Law, with thousands of attorney-drafted practice notes and documents.
That content took decades and billions of dollars to build. It cannot be recreated through fine-tuning alone.
What the Wikipedia Screenshot Really Shows
The Wikipedia incident highlights what happens when AI without authoritative infrastructure is used for tasks that require it. You get hallucinations and errors, and most importantly, you lose trust.
This isn’t unique to Claude. Any system asked to perform authoritative legal work without authoritative sources will fail in similar ways—even with the most sophisticated plugins.
Why Organizing the Law Is So Hard
The law is messy. It’s fragmented across jurisdictions and much of it isn’t fully digital. It changes constantly.
At Thomson Reuters, we’ve built AI systems, data pipelines, and editorial workflows, and we employ thousands of legal experts to organize the law into a searchable, continuously updated system for both humans and machines. Many companies have tried to replicate this. Most have failed.
We welcome innovation because it makes us better, but it’s important to be honest about how hard this problem is.
What This Means for the Market
My belief is that the most valuable and highest-stakes legal work requires authority. That is the AI we are building at Thomson Reuters: CoCounsel is now trusted by one million professionals across 107 countries and territories for work where errors aren't an option. We will continue to adopt the best tools and techniques, including innovations coming from foundation model providers like Anthropic, to deliver on that vision.
At the same time, companies like Harvey and Legora face an increasingly difficult strategic position. They now sit between incumbents with authoritative infrastructure, foundation model companies with enormous scale advantages, and Anthropic’s enterprise plugin ecosystem that can handle operational legal work. That is not an easy place to compete long term.
Anthropic’s move into legal plugins doesn’t threaten what we do—it clarifies it. The market is bifurcating into operational AI and authoritative AI. Both are valuable. But they’re not the same thing.
That Wikipedia screenshot doesn’t prove AI can’t do legal work. It proves that legal AI requires more than a smart model—even one equipped with sophisticated plugins.
It requires authoritative content, deep domain expertise, infrastructure, and governance systems designed for professional risk. Last week’s announcements from both Anthropic and Thomson Reuters prove this divide is real and growing.
The companies that understand this will win. The rest will eventually learn the hard way.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

