White Paper

How to Use AI for Document Review Without Getting Sanctioned

A practical playbook for litigators on disclosure at the Rule 26(f) conference, certification obligations under FRCP 26(g), validation protocols that hold up in court, and audit trails that survive a methodology challenge.

Length: 28 pages · Read time: 22 minutes · Published: March 2026 · Format: PDF
What you'll learn
  • How to disclose AI use under FRCP 26(f) without inviting challenges
  • Validation protocols for relevance vs. privilege classifications
  • Sample agreement-rate thresholds courts have accepted
  • The "Trust but Verify" framework for human-in-the-loop review
  • Documentation checklist for a defensible audit trail
What's inside

A four-step framework for defensible AI review.

Built from the methodology used in over a dozen federal matters — including a $35M white-collar case and a $15.4M commercial verdict — where the AI-assisted review was never successfully challenged.

01

Understand Your Disclosure Obligations

FRCP 26(f) meet-and-confer guidance, local rules requiring AI disclosure, and how to draft a methodology memo your judge will accept.

02

Establish a Validation Protocol

Random sampling methodology, agreement-rate thresholds for relevance vs. privilege, and how to document iterative validation rounds.

03

Maintain Human Oversight

The Rule 26(g) certification problem, when senior attorneys must intervene, and a human-in-the-loop QC workflow that scales.

04

Document Everything

The complete documentation checklist — platform info, search terms, training data, validation results, and supervising attorney logs.
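For readers who want to see the validation arithmetic from step 02 concretely, here is a minimal, illustrative Python sketch of a reproducible random draw and an AI-vs-attorney agreement-rate computation. The function names and labels are hypothetical, not DecoverAI's implementation, and real sample sizes should be set with counsel and documented in the audit trail.

```python
import random

def sample_for_validation(doc_ids, sample_size, seed=2026):
    """Draw a reproducible random validation sample.
    Illustrative only: the seed makes the draw repeatable,
    which matters when the methodology must be re-run for a judge."""
    return random.Random(seed).sample(doc_ids, sample_size)

def agreement_rate(ai_labels, attorney_labels):
    """Fraction of sampled documents where the AI's coding
    matches the reviewing attorney's coding."""
    if len(ai_labels) != len(attorney_labels):
        raise ValueError("label lists must align document-for-document")
    matches = sum(a == b for a, b in zip(ai_labels, attorney_labels))
    return matches / len(ai_labels)

# Hypothetical validation round: 1 = responsive, 0 = not responsive
ai       = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
attorney = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(f"Agreement this round: {agreement_rate(ai, attorney):.0%}")  # 90%
```

Logging the seed, the sample, and each round's rate is exactly the kind of record step 04's documentation checklist is meant to capture.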

Key Findings

What the case law and the data actually say.

90%
Privilege agreement threshold

Courts and commentators consistently treat 90%+ as the floor for AI privilege classification, given the stakes of a waiver.

80–90%
Relevance agreement threshold

The accepted band for AI vs. attorney relevance coding — comparable to or better than human-vs-human reviewer agreement.

2012
Da Silva Moore precedent

Computer-assisted review has been judicially endorsed for over a decade. The question today is not whether — it's how.
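A raw agreement percentage on a small sample can overstate confidence. One standard, hedged way to ask "does the true agreement rate clear the floor?" is the lower bound of the Wilson score interval for a binomial proportion. The sketch below is illustrative statistics, not a court-endorsed test; the 485-of-500 figures are hypothetical.

```python
from math import sqrt

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the ~95% Wilson score interval for a proportion.
    Asks: given observed agreement on a random sample, how low could
    the true agreement rate plausibly be?"""
    phat = successes / n
    denom = 1 + z * z / n
    center = phat + z * z / (2 * n)
    margin = z * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (center - margin) / denom

# Hypothetical privilege validation: 485 of 500 sampled calls agree (97%)
lb = wilson_lower_bound(485, 500)
print(f"95% lower bound: {lb:.1%}")  # ~95.1%, comfortably above a 90% floor
```

The same observed 97% on a sample of 50 would yield a much weaker lower bound, which is why sample size belongs in the methodology memo alongside the threshold itself.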

Read an excerpt

From the introduction

The use of artificial intelligence in document review has been judicially accepted since at least 2012, when Magistrate Judge Andrew Peck endorsed Technology Assisted Review (TAR) in Da Silva Moore v. Publicis Groupe. In the years since, TAR and predictive coding have become mainstream tools, and courts have repeatedly affirmed that computer-assisted review is not only acceptable but may be superior to manual review in terms of consistency and recall.

However, the emergence of Large Language Models (LLMs) introduces new concerns that the existing TAR case law does not fully address. Unlike traditional TAR systems, which use supervised machine learning to classify documents based on attorney-coded training sets, LLMs generate natural language responses, can "hallucinate" confident but incorrect analysis, and operate through mechanisms that are difficult to explain to a skeptical judge.

Rule 26(g) requires that every disclosure, discovery request, response, and objection be signed by an attorney. For a disclosure, the signature certifies that it is "complete and correct as of the time it is made"; for a discovery response, that it is consistent with the rules and formed after a reasonable inquiry. Either way, the certification attaches to a person, not to a technology. When you use AI to assist document review, the signing attorney must be able to explain and defend the methodology...

Published by
DecoverAI Research
Litigation Technology · San Mateo, CA

DecoverAI is the AI platform for legal discovery, used by litigation teams handling matters from $1M commercial disputes to $35M federal investigations. Our research is grounded in real production workflows and reviewed by practicing litigators.

Built from matters that have been to court — not theory.

Every framework in this paper has been stress-tested against real adversaries on real cases. The validation thresholds, the disclosure language, the documentation checklist — all of it was developed in matters where the methodology had to survive a Rule 37 challenge and a skeptical judge.

See defensibility built into the platform.

DecoverAI was designed so that every classification is sourced, every decision is logged, and every audit trail is generated automatically — without an extra hour of paralegal time.

Book a 30-minute demo →