A practical playbook for litigators on disclosure obligations under FRCP 26(g), validation protocols that hold up in court, and audit trails that survive a methodology challenge.
Built from the methodology used in over a dozen federal matters — including a $35M white-collar case and a $15.4M commercial verdict — where the AI-assisted review was never successfully challenged.
FRCP 26(f) meet-and-confer guidance, local rules requiring AI disclosure, and how to draft a methodology memo your judge will accept.
Random sampling methodology, agreement-rate thresholds for relevance vs. privilege, and how to document iterative validation rounds.
The Rule 26(g) certification problem, when senior attorneys must intervene, and a human-in-the-loop QC workflow that scales.
The complete documentation checklist — platform info, search terms, training data, validation results, and supervising attorney logs.
Courts and commentators consistently treat 90%+ as the floor for AI privilege classification, given the stakes of a waiver.
The accepted band for AI vs. attorney relevance coding — comparable to or better than human-vs-human reviewer agreement.
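To make the sampling and agreement-rate piece concrete, here is a minimal Python sketch of a single validation round. It is an illustration, not the protocol used in the matters above: the function names, sample size, and seed handling are assumptions, and the only figure taken from this page is the 90% floor for privilege calls.

```python
import random

def draw_validation_sample(doc_ids, sample_size, seed):
    """Simple random sample of AI-reviewed documents for attorney re-review.

    Recording the seed and sample size makes the draw reproducible if the
    sampling step itself is later questioned.
    """
    return random.Random(seed).sample(list(doc_ids), sample_size)

def agreement_rate(ai_calls, attorney_calls):
    """Share of sampled documents where the AI call matches the attorney call.

    Both arguments map document ID -> coding decision (e.g. "privileged").
    """
    shared = ai_calls.keys() & attorney_calls.keys()
    matches = sum(1 for d in shared if ai_calls[d] == attorney_calls[d])
    return matches / len(shared)

# 90%+ is treated here as the floor for privilege classification (per the
# figure above); no single relevance number is assumed in this sketch --
# use whatever band your documented protocol adopts.
PRIVILEGE_FLOOR = 0.90

# One illustrative round: draw the sample, have attorneys re-code it,
# then compute and log the agreement rate against the floor.
sample = draw_validation_sample(range(10_000), sample_size=400, seed=2024)
```

Logging the inputs to each round (seed, sample size, resulting rate, and whether the floor was met) is what turns the math above into an audit trail rather than an after-the-fact assertion.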
Computer-assisted review has been judicially endorsed for over a decade. The question today is not whether — it's how.
The use of artificial intelligence in document review has been judicially accepted since at least 2012, when Magistrate Judge Andrew Peck endorsed Technology Assisted Review (TAR) in Da Silva Moore v. Publicis Groupe. In the years since, TAR and predictive coding have become mainstream tools, and courts have repeatedly affirmed that computer-assisted review is not only acceptable but may be superior to manual review in terms of consistency and recall.
However, the emergence of Large Language Models (LLMs) introduces new concerns that the existing TAR case law does not fully address. Unlike traditional TAR systems, which use supervised machine learning to classify documents based on attorney-coded training sets, LLMs generate natural language responses, can "hallucinate" confident but incorrect analysis, and operate through mechanisms that are difficult to explain to a skeptical judge.
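For readers who want that mechanical distinction made concrete, the sketch below shows what TAR-style predictive coding amounts to: a supervised classifier fit to attorney-coded examples. The library and parameters are illustrative choices for this hypothetical, not what any particular review platform uses.

```python
# Minimal illustration of TAR-style predictive coding: a classifier trained
# on attorney-coded examples, then applied to the rest of the corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Attorney-coded training set: (document text, 1 = responsive, 0 = not).
seed_set = [
    ("Board approved the Q3 transfer to the holding company", 1),
    ("Office holiday party is moved to Friday", 0),
    # ... in practice, thousands more documents coded by the review team
]

texts, labels = zip(*seed_set)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Unlike an LLM, the model only scores documents against the coded examples;
# it does not generate prose analysis, so there is no narrative to hallucinate.
scores = model.predict_proba(["Wire instructions for the transfer"])[:, 1]
```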
FRCP 26(g) requires that every disclosure and every discovery request, response, or objection be signed by at least one attorney of record. For a disclosure, the signature certifies that it is "complete and correct as of the time it is made"; for a discovery response, it certifies, after a reasonable inquiry, that the response is consistent with the rules and not interposed for an improper purpose. Either way, the certification attaches to a person, not to a technology. When you use AI to assist document review, the signing attorney must be able to explain and defend the methodology...
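One way to keep that defense on paper is to record each AI-assisted review run as a structured audit entry covering the checklist items named earlier (platform information, search terms, training inputs, validation results, supervising attorney). The schema below is a hypothetical sketch; the class and field names are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewMethodologyRecord:
    """One audit-trail entry per AI-assisted review run.

    Fields mirror the documentation checklist above; names are illustrative.
    """
    matter: str
    platform_and_version: str       # platform info, including model/version where known
    search_terms: list[str]         # terms or prompts used to scope the review
    training_or_prompt_inputs: str  # description of training data or prompt design
    validation_results: dict        # e.g. {"round": 2, "privilege_agreement": 0.93}
    supervising_attorney: str       # the Rule 26(g) signer or their designee
    run_date: date = field(default_factory=date.today)
```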