AI Review 2.0: Why Document-Driven Review Produces Better Results—and Puts You Back in Control
Originally published by EDRM and in JD Supra (2/16/2026)
By John Tredennick
The argument for AI-powered document review is all but over. Large language models are smarter, faster, more consistent, and less expensive than human review teams. I made that case in an earlier article published by EDRM and JD Supra, Six Reasons Why Your Best Document Reviewer Isn’t Human Anymore. Others across the industry have made these points as well.
But better results and real control over the review process don’t come from using an AI model alone. They come from the methodology—how you build, test, and refine the review criteria the AI applies. Get the methodology wrong and you get fast, consistent, wrong answers. Get it right and one experienced review manager can run a review that used to require hundreds of contract attorneys, with better accuracy and complete oversight from start to finish.
In this article, I describe Document-Driven Review, the methodology my team and I developed for ReviewPartner. I’ll explain why most AI review platforms are still guessing, what Document-Driven Review does differently, and why it puts you back in control of the process.
The Problem: Generic AI Review Is Guessing
Most AI review platforms start with a review protocol—a set of criteria defining what’s responsive or privileged—and feed it to a language model along with the documents. The AI reads each document, applies the criteria, and makes a determination. It’s fast, consistent, and scalable.
But here’s the problem nobody talks about: those platforms never examine your actual documents before building the criteria. They build the protocol from templates, from generic definitions of responsiveness or privilege, from abstractions. They don’t know what they don’t know.
Every document collection has its own patterns, its own edge cases, its own gray zones. A protocol that works well on 80% of documents may fail on the 20% that matter most—the ambiguous communications, the mixed legal-and-business discussions, the documents that sit right on the line between responsive and non-responsive.
Generic AI review treats the platform as a black box. Documents go in, determinations come out, and you hope the criteria were right. This is faster than human review, but it doesn’t solve the fundamental problem. You’ve just moved the guessing from hundreds of contract reviewers to a single untested protocol.
We know that a properly trained AI reviewer will best a human reviewer every time. The issue isn’t the AI’s capability; it’s the criteria it’s asked to apply.
Document-Driven Review: Built from Your Documents
ReviewPartner’s Document-Driven Review methodology works differently. Instead of applying a fixed protocol and hoping for the best, it examines a representative sample of your actual documents, discovers where the criteria fail, and refines the protocol through iterative cycles until it’s proven to work—all before full review begins.
I’ll walk through the four phases. Human experts control every phase. AI simply provides speed, consistency, and intelligence.
Phase 1: Protocol Development
This is where Document-Driven Review diverges from everything else on the market. The process begins with an initial review protocol based on the specifics of the matter. This protocol does not need to be perfect. The entire point is to discover and fix its weaknesses.
The system ranks all documents by likely relevance using a hybrid of vector similarity search, keyword ranking, and AI assessment integrated through a Continuous Active Learning algorithm. It then selects a stratified sample from across the relevance spectrum—clearly responsive, clearly non-responsive, and critically, documents in the gray zone where the criteria will be tested most severely.
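The stratification step can be illustrated with a short sketch. This is not ReviewPartner's implementation; it is a minimal illustration assuming each document already carries a composite relevance score (from the hybrid vector, keyword, and AI ranking), with hypothetical field names and score cutoffs chosen for clarity.

```python
import random

def stratified_sample(docs, n_per_stratum=5, seed=42):
    """Select documents from across the relevance spectrum: clearly
    responsive (high score), clearly non-responsive (low score), and
    the gray zone in between, where the criteria are tested hardest.
    Thresholds here are illustrative, not ReviewPartner's actual values."""
    strata = {
        "likely_responsive": [d for d in docs if d["score"] >= 0.7],
        "gray_zone": [d for d in docs if 0.3 <= d["score"] < 0.7],
        "likely_non_responsive": [d for d in docs if d["score"] < 0.3],
    }
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    sample = []
    for name, bucket in strata.items():
        for doc in rng.sample(bucket, min(n_per_stratum, len(bucket))):
            sample.append({**doc, "stratum": name})
    return sample
```

The key design point is that the gray zone gets sampled deliberately rather than incidentally: a purely random sample from a low-richness collection would contain almost no borderline documents, which are exactly the ones that expose weak criteria.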
Each sample document is analyzed by a high-capability language model that produces a responsiveness determination, a confidence score, and a content summary. But the critical output is an identification of where the review protocol is ambiguous or difficult to apply. The system doesn’t just classify documents. It tells you where and why the criteria are failing.
The system then synthesizes these findings, identifies patterns, and generates structured questions for the review manager—each with context, links to specific documents that illustrate the problem, and pre-formulated answer options. Based on the answers, the system enhances the protocol and re-analyzes the sample. This cycle typically repeats two to four times. With each pass, uncertain determinations decrease, confidence scores increase, and the protocol becomes more precise.
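The refinement cycle described above can be sketched as a simple loop. This is a schematic illustration only: `analyze` stands in for the language model's per-document assessment and `ask_manager` for the review manager answering structured questions; both names, and the convergence threshold, are assumptions made for the example.

```python
def refine_protocol(protocol, sample, analyze, ask_manager,
                    max_cycles=4, target_confidence=0.9):
    """Iteratively refine review criteria until uncertain determinations
    disappear (or a cycle cap is hit). A simplified illustration:
    analyze() and ask_manager() stand in for the LLM and the human."""
    for cycle in range(1, max_cycles + 1):
        results = [analyze(protocol, doc) for doc in sample]
        uncertain = [r for r in results if r["confidence"] < target_confidence]
        if not uncertain:
            return protocol, cycle  # converged: all determinations confident
        # Turn the ambiguities into structured questions for the manager
        questions = sorted({r["ambiguity"] for r in uncertain})
        answers = ask_manager(questions)
        # Fold the answers back into the protocol and re-test next cycle
        protocol = protocol + [f"{q}: {a}" for q, a in answers.items()]
    return protocol, max_cycles
```

The loop structure mirrors the text: analyze the sample, surface the failures, ask the human, enhance the protocol, and repeat until confidence stops improving.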
By the time the protocol is ready for full-scale review, it has been tested and refined against your actual documents. The protocol works because it was built from evidence, not assumptions.
Phase 2: Validation
Before deploying at scale, the system validates accuracy on a fresh set of documents. The AI provides its determination and documented reasoning for each document. The review manager reads the reasoning and agrees or disagrees, explaining any disagreements. When determinations diverge, the system categorizes each disagreement—AI error, protocol ambiguity, human inconsistency, or uncovered edge case—transforming raw disagreements into actionable intelligence. Full review doesn’t begin until the review manager is satisfied.
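A validation round like this reduces to two numbers worth tracking: the agreement rate, and a breakdown of why the disagreements happened. The sketch below is a hypothetical summary helper, not the platform's reporting code; the field names and category labels follow the four categories named in the text.

```python
from collections import Counter

def validation_report(pairs):
    """Summarize a validation round: overall agreement rate plus a tally
    of disagreement categories assigned by the review manager (AI error,
    protocol ambiguity, human inconsistency, or uncovered edge case)."""
    agreements = sum(1 for p in pairs if p["ai"] == p["human"])
    disagreements = [p for p in pairs if p["ai"] != p["human"]]
    return {
        "agreement_rate": agreements / len(pairs),
        "by_category": Counter(p["category"] for p in disagreements),
    }
```

Categorizing each disagreement matters because the remedies differ: an AI error may need a model or prompt change, a protocol ambiguity needs another refinement cycle, and a human inconsistency is not a system defect at all.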
Phase 3: Full-Scale Review
The system deploys the validated protocol across the full collection. ReviewPartner supports linear review or CAL-prioritized review, which processes documents in order of likely relevance and produces significant cost savings for large collections with low richness. Multiple parallel instances of the language model process documents concurrently with perfect consistency. The thousandth document gets the same attention as the first.
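The shape of a CAL-prioritized, parallel pass can be sketched in a few lines. Again this is an assumption-laden illustration: `classify` stands in for the language-model call that applies the validated protocol, and the worker count and field names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def run_full_review(docs, classify, workers=8):
    """Process documents in CAL-priority order (most likely relevant
    first) using parallel workers. classify is a stand-in for the LLM
    call applying the same validated protocol to every document."""
    prioritized = sorted(docs, key=lambda d: d["score"], reverse=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        determinations = list(pool.map(classify, prioritized))
    return list(zip(prioritized, determinations))
```

Reviewing in relevance order is what produces the cost savings on low-richness collections: most responsive documents surface early, so review of the long non-responsive tail can be truncated or validated by sampling.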
Phase 4: Completion and Reporting
The system generates a complete documentation package: protocol version history, all questions asked and answered during development, validation metrics, document-level determinations with reasoning and confidence scores, and standard litigation load files. Every decision can be traced from the initial protocol through each refinement cycle to the final determination.
One Review Manager. No Review Team.
Here is the point that changes everything: you don’t need a review team anymore. You need a review manager working with a member of the trial team, and ReviewPartner. That’s it.
Think about what the system does. It samples the documents, identifies where the criteria are weak, and generates the exact questions needed to fix them. The review manager answers those questions. The system refines the protocol and tests it again. When the protocol is validated, the system deploys hundreds of identical AI reviewers that apply those criteria with perfect consistency across every document.
The same capability that used to require 200 contract attorneys now requires one or two people who know what they’re doing. And these aren’t just any people—the review manager is an experienced professional making substantive decisions about review criteria, not a project manager supervising a staffing operation. The trial team member provides the legal judgment about what matters in the case. Together, they control the entire process from protocol development through final production.
No staffing agencies. No contract reviewers you’ve never met. No strangers reading your confidential documents. No hoping that the team applied the right judgment to the right documents on the right day.
For law firms, this means bringing review back in-house as a core competency rather than outsourcing it. For corporate legal departments, it means direct oversight from start to finish. For legal service providers, it means offering AI-powered review as a new service line that requires fewer resources and delivers better results.
The traditional model asked you to give up control in exchange for scale. Document-Driven Review gives you both.
Why This Matters
Document-Driven Review addresses the gap between AI capability and AI accuracy in document review. The practical advantages are significant:
- Accuracy built from evidence, not assumptions. The protocol is tested against your actual documents and refined until it works. You know it works because you validated it before full review began.
- Edge cases resolved before they become errors. The system finds the gray zones during protocol development, not during production. Ambiguities are identified and resolved through structured questions—not discovered after the review is complete.
- Adaptive reasoning where it matters. Extended thinking for synthesis and question generation. Standard inference for high-volume document processing. The system matches reasoning depth to task complexity.
- Defensibility through transparency. Every protocol version, every question, every answer, every validation result is documented automatically. A complete audit trail from initial criteria through iterative refinement to final production.
- Human judgment where it counts. The review manager makes every critical decision—what the criteria should be, whether validation results are acceptable, when to proceed. AI handles the scale. Humans provide the judgment.
The Methodology Is the Product
The AI models will keep improving. Next year’s models will be more capable than this year’s. But capability without methodology is just faster guessing.
Document-Driven Review is the piece that was missing—the methodology that turns AI capability into defensible results. It’s the difference between hoping your review criteria work and knowing they do, because you tested them against your actual documents, refined them through multiple cycles, and validated them before a single production document was coded.
I’ve spent more than 40 years in legal technology and document review. I’ve never seen anything that changes the economics and quality of review as fundamentally as this. The technology exists. The methodology is proven. The results are measurable.
About the Author
John Tredennick (jt@merlin.tech) is CEO and Founder of Merlin Search Technologies, a company pioneering AI-powered document intelligence for legal professionals. A former trial lawyer and founder of Catalyst Repository Systems, he has been recognized by the American Lawyer as one of the top six ediscovery pioneers and has been involved in legal technology and document review for more than 30 years.