For most of the 2000s, the question that stalled predictive coding adoption was not whether the technology worked — the academic literature on classifier-based document review had been favorable for years — but whether a producing party who used it would be sanctioned for departing from the keyword-search orthodoxy. That question had no answer until February 2012, when Magistrate Judge Andrew J. Peck of the Southern District of New York issued his opinion in Da Silva Moore v. Publicis Groupe & MSL, 287 F.R.D. 182 (S.D.N.Y. 2012), the first reported United States judicial opinion to approve computer-assisted review.
The matter was a Title VII gender discrimination action against a global advertising and public relations group. MSL had collected roughly three million emails and proposed an iterative predictive-coding protocol on the Axcelerate platform: a 2,399-document random-sample baseline, seven training rounds, and a final random sample of discarded documents to test for erroneously excluded responsive material. Plaintiffs objected that predictive coding lacked generally accepted reliability standards. Judge Peck disagreed. The operative holding is one sentence at page 191: "This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases." Da Silva Moore, 287 F.R.D. at 191.
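The size of that baseline sample is not arbitrary. As a rough sketch of the arithmetic (our illustration, not the court's): the standard worst-case formula for estimating a proportion at 95% confidence with a ±2% margin of error yields a figure almost exactly matching the 2,399-document sample in the MSL protocol, with the small difference plausibly attributable to a finite-population adjustment.

```python
import math

def proportion_sample_size(z: float = 1.96, margin: float = 0.02, p: float = 0.5) -> int:
    """Worst-case (p = 0.5) sample size for estimating a proportion
    at a given confidence level (z) and margin of error."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence (z = 1.96), +/-2% margin of error
print(proportion_sample_size())  # 2401
```

The same formula explains why baseline and validation samples in reported TAR protocols tend to cluster around 2,400 documents regardless of collection size: above a few hundred thousand documents, the required sample barely grows.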
The reasoning that supported that sentence is what made the opinion durable. Judge Peck did not claim that TAR was perfect. He observed that manual review and keyword searching have well-documented human-error and inconsistency problems, and that no method — manual, keyword, or predictive — guarantees recall of every responsive document. He approved the MSL protocol because it contained concrete quality controls: a documented seed-set methodology, iterative training, disclosure of seed documents to opposing counsel, and random sampling of discarded documents to estimate recall. The opinion grounded that approval in Federal Rule of Civil Procedure 1's commitment to a just, speedy, and inexpensive determination, and in the proportionality language of Rule 26(b)(2)(C). After Da Silva Moore, the contested issue in TAR disputes was no longer whether courts would permit it — it was how the protocol had to be designed to be defensible.
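The "random sampling of discarded documents to estimate recall" step can be sketched as an elusion estimate: draw a random sample from the discard pile, measure the responsive rate in that sample, and infer how many responsive documents the review missed overall. A minimal illustration with hypothetical figures (the function and numbers are ours, not drawn from the opinion):

```python
def estimated_recall(produced_responsive: int,
                     discard_size: int,
                     discard_sample_size: int,
                     responsive_in_discard_sample: int) -> float:
    """Point estimate of recall from a random sample of the discard pile.

    elusion rate     = responsive docs in discard sample / sample size
    estimated missed = elusion rate * size of the discard pile
    recall           = produced / (produced + estimated missed)
    """
    elusion_rate = responsive_in_discard_sample / discard_sample_size
    estimated_missed = elusion_rate * discard_size
    return produced_responsive / (produced_responsive + estimated_missed)

# Hypothetical: 45,000 responsive documents produced; a 2,000-document
# sample of a 1,000,000-document discard pile turns up 10 responsive docs.
print(round(estimated_recall(45_000, 1_000_000, 2_000, 10), 2))  # 0.9
```

The point estimate should of course be paired with a confidence interval in any real validation report; the sketch shows only the core arithmetic that makes the discard-pile sample probative of recall.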
Three years after Da Silva Moore, Judge Peck returned to the same subject in Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. 2015). The procedural posture was different: the parties had jointly negotiated a Predictive Coding Protocol covering use of Relativity Assisted Review, control sets, seed and training sets, vendor and expert disclosure, and statistical validation through precision and recall estimates. They asked the court to enter the protocol as an order. Judge Peck did, and used the opportunity to update the law.
The single most-cited line in the opinion is at page 127: "computer-assisted review now can be considered judicially-approved for use in appropriate cases." Rio Tinto, 306 F.R.D. at 127. Read alongside the parallel sentence in Da Silva Moore, the message is unmistakable: in the three years between the two opinions, predictive coding had moved from "permissible in appropriate cases" to a default that no longer required special justification. The opinion itself made the point in so many words, observing that the case law had developed to the point that it is "black letter law" that where the producing party wants to use TAR for document review, courts will permit it. Id.
The second important line in Rio Tinto is the warning that "it is inappropriate to hold TAR to a higher standard than keywords or manual review." Id. That sentence cuts off a category of objection that producing parties had previously faced: the demand that TAR achieve a perfection or transparency standard that no manual review process actually meets. Judge Peck reasoned, citing Sedona Principle 6, that parties are best situated to select methodologies for their own ESI production, that Rule 26's proportionality principles reward efficient technology, and that a heightened standard would deter exactly the workflow Rule 1 was meant to encourage. Rio Tinto attached the parties' protocol as a schedule, and the protocol has been reused as a template in dozens of subsequent matters.
The marketing version of TAR case law tends to stop at Rio Tinto. The litigation version cannot, because eighteen months later Judge Peck wrote the opinion that establishes the outer limit of TAR's judicial endorsement: Hyles v. City of New York, No. 10 Civ. 3119 (AT) (AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016). The plaintiff in an employment discrimination action moved to compel the City of New York to use TAR rather than keyword search. The City preferred keywords and had not yet incurred meaningful costs on its chosen method. Judge Peck refused to compel.
The operative language is unusually direct. The court framed the question as "whether, at plaintiff's request, the Court should force the responding party (defendant City) to use TAR" and answered "The short answer is a decisive 'NO.'" Hyles, 2016 WL 4077114, at *1. The order concludes: "Hyles' application to force the City to use TAR is DENIED." Id. at *4. The reasoning rests on Sedona Principle 6 — that responding parties are best situated to evaluate the procedures and technologies for producing their own ESI — and on the standard of "reasonableness, not perfection." Judge Peck acknowledged TAR is often superior, but held that keyword searching has not yet become so unreasonable that declining TAR is sanctionable.
Hyles is the case that in-house counsel and litigators need to read alongside Da Silva Moore and Rio Tinto, because it constrains both sides of the TAR conversation. A requesting party cannot use the line of TAR endorsements to force a producing party off keyword search. A producing party cannot read Hyles as a license to choose an unreasonable method — the court took pains to note that the City's keyword approach was reasonable on the facts. The synthesis is that TAR is judicially approved, judicially encouraged, and judicially unmandated. The choice belongs to the producing party, and the burden of demonstrating reasonableness travels with that choice.
The English High Court reached the same destination as Da Silva Moore four years later, by a different route. In Pyrrho Investments Ltd v MWB Property Ltd & Ors [2016] EWHC 256 (Ch), Master Matthews delivered the first English judicial endorsement of predictive coding for electronic disclosure. The matter involved approximately 3.1 million documents after de-duplication. The parties had already agreed to use predictive coding software and asked the court to approve the protocol formally. Master Matthews did, holding that doing so "would promote the overriding objective set out in Part 1 of the CPR." [2016] EWHC 256 (Ch) at [33].
What distinguishes Pyrrho from its US counterparts is that Master Matthews enumerated ten reasons in support of the approval, and those ten factors have functioned ever since as a judicially endorsed checklist for English disclosure practitioners proposing TAR. They are:

(1) experience in other jurisdictions shows predictive coding can be useful in appropriate cases;
(2) no evidence suggests predictive coding is less accurate than manual review, and some evidence suggests it is more accurate;
(3) it provides greater consistency by applying a single senior lawyer's approach across the entire dataset;
(4) nothing in the CPR or Practice Directions prohibits its use;
(5) the volume of documents was huge;
(6) full manual review would be unreasonable in cost terms;
(7) the estimated costs of predictive coding were significantly lower;
(8) the value of the claims (tens of millions of pounds) made the costs proportionate;
(9) the trial date was far enough away to permit an alternative method if the software failed; and
(10) the parties had already agreed on the software and the protocols. Id. at [33].
The practical effect of Pyrrho in England and Wales is that a TAR proposal aligned with the Matthews factors is essentially pre-approved. The factors translate cleanly into the Disclosure Pilot Scheme framework now embedded in PD 57AD: proportionality, the volume and value test, the availability of alternative methods, and party agreement on the protocol. Subsequent English decisions have followed Pyrrho as the foundational authority for TAR, and practitioner guidance treats the ten reasons as the minimum content of an English TAR proposal. The case is to English disclosure what Da Silva Moore is to United States discovery: the sentence that closed the question of whether the technology was permissible at all.
If Da Silva Moore made TAR permissible and Rio Tinto made it ordinary, In re Broiler Chicken Antitrust Litigation, No. 1:16-cv-08637, 2018 WL 1146371 (N.D. Ill. Jan. 3, 2018), is what made TAR defensible in the modern sense. Magistrate Judge Jeffrey T. Gilbert, with the assistance of Special Master Maura R. Grossman, issued a comprehensive Order Regarding Search Methodology for Electronically Stored Information that has since become the working template for TAR validation. The Order's most consequential feature is its Validation Protocol, which applies "regardless of whether technology-assisted review ('TAR') or exhaustive manual review ('manual review') was used by the producing Party." Broiler Chicken, 2018 WL 1146371, at *5.
The protocol is detailed where prior orders were general. It requires producing parties to disclose the TAR or CAL software and vendor, a general description of the training process, the categories of documents included and excluded, and the quality-control measures used. It mandates a Validation Sample reviewed and coded by a subject-matter expert "knowledgeable about the subject matter of the litigation," and it specifies that the validation must be blind — the SME reviews the sample without knowledge of the prior coding. Id. The Order ties the entire workflow to Federal Rule of Civil Procedure 26(g), requiring that "the review process should incorporate quality-control and quality-assurance procedures to ensure a reasonable production." Id. Numeric recall is part of the test but, as the court noted, not the sole factor; the materiality of any missed documents matters too.
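The blind-SME comparison at the heart of that protocol reduces to a simple tally once the sample is coded: treat the SME's blind coding as ground truth, compare it against the machine's prior coding of the same documents, and derive precision and recall on the sample. A schematic sketch (the function, document IDs, and data are illustrative; the actual sample composition and sizes are governed by the Order's Appendix A):

```python
def validation_metrics(machine: dict[str, bool], sme: dict[str, bool]) -> tuple[float, float]:
    """Compare machine coding against blind SME coding of the same
    validation sample; the SME's coding is treated as ground truth."""
    tp = sum(machine[d] and sme[d] for d in sme)       # both call it responsive
    fp = sum(machine[d] and not sme[d] for d in sme)   # machine over-calls
    fn = sum(not machine[d] and sme[d] for d in sme)   # machine misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

machine = {"doc1": True, "doc2": True, "doc3": False, "doc4": False}
sme     = {"doc1": True, "doc2": False, "doc3": True, "doc4": False}
print(validation_metrics(machine, sme))  # (0.5, 0.5)
```

The blindness requirement matters precisely because this tally is only meaningful if the SME's judgments are independent of the machine's: an SME who can see the prior coding will anchor on it, and the false-negative count loses its evidentiary value.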
The reason Broiler Chicken matters more than its citation count suggests is that it converts what had been a vague exhortation to "validate your TAR workflow" into a specific, court-tested procedure with sample-size formulas in Appendix A and an explicit blind-SME review step. Subsequent courts — including, notably, In re Valsartan, Losartan, and Irbesartan Products Liability Litigation (D.N.J. 2020) — have referenced the Broiler protocol as a layered validation example. For producing parties, the practical implication is that a workflow modeled on the Broiler protocol is now the path of least resistance to a defensible production. For requesting parties, it is the document to cite when the validation methodology proposed by the other side is thinner than what a court would actually accept.
The technology-competence backstop: Behind every TAR validation conversation sits Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 261–62 (D. Md. 2008), in which Magistrate Judge Paul W. Grimm held that the burden is on the producing party to prove its search methodology was reasonable, and that defendants who failed to document their keyword selection, lacked quality assurance and sampling, and abandoned clawback protections could not prove reasonableness. Victor Stanley I is the case that put the affirmative duty of search-methodology competence onto the producing party in the first place — whether the chosen methodology is keyword search or TAR. The validation requirements in Broiler Chicken are, in effect, the modern operationalization of that duty.
The case-law arc from Da Silva Moore through Broiler Chicken describes a workflow, not just a technology: documented training, iterative refinement, statistically valid sampling, blind SME validation, recall and precision estimates, and a written record of the entire process that can be produced if challenged. DecoverAI's Relevance Detection is built around exactly that workflow. Every review project tracks the prompt and model used for classification, the sample of documents the SME reviewed, the agreement and disagreement counts, and the resulting recall and precision estimates — all exportable as a validation report that maps onto the Broiler Chicken protocol.
The point of building validation into the platform rather than treating it as a downstream add-on is that Hyles only protects the producing party who can show their chosen method was reasonable. The reasonableness inquiry asks whether the review was documented, sampled, and quality-controlled — which is exactly the evidence the platform needs to produce on demand. Our approach to that documentation problem is detailed in the companion piece on DecoverAI's LLM evaluation framework, which sets out the precision/recall measurement methodology we use across every Relevance Detection project. The methodology is grounded in the same statistical principles cited in the Broiler Chicken Appendix A sample-size formulas.
The pricing follows the same logic. DecoverAI is $60/GB/month, all-in, and AI-classification, validation reporting, privilege log generation, redaction, and production are included — there is no separate fee for the validation step that the case law now effectively requires. In the Tax Credit Investigation case study, that workflow processed 30,000 documents in three days with full validation documentation and a 98% cost reduction relative to the traditional vendor estimate. For the broader picture of how AI-augmented document review changes the economics of the matter, see the companion benchmark piece. The shift from Da Silva Moore to Broiler Chicken took six years. The shift from Broiler Chicken to a validated workflow that runs by default in every review is what 2026 looks like.