Buyer's Guide

What a Modern eDiscovery Platform Should Actually Do: A 2026 Buyer's Checklist

Forty capabilities every modern eDiscovery platform should ship, from ingestion to production, anchored to Victor Stanley on search defensibility and FRCP Rule 26(g)'s certification mandate.

April 6, 2026

Why "Reasonable Inquiry" Is the Test That Matters

Every discovery response you sign carries a personal certification under Federal Rule of Civil Procedure 26(g). The signature is not a formality. It is a representation that, "to the best of the person's knowledge, information, and belief formed after a reasonable inquiry," the disclosure is complete and correct and the response is consistent with the rules and not unreasonably burdensome. Rule 26(g)(3) makes sanctions mandatory when a certification violates the rule without substantial justification. The platform you use to run discovery is not separate from that certification. It is your reasonable inquiry.

Magistrate Judge Paul Grimm built the modern reading of that obligation in two Maryland decisions that every buyer of eDiscovery technology should be able to quote from memory. In Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 261–62 (D. Md. 2008), Judge Grimm held that a producing party that uses keyword search to identify privileged documents bears the burden of proving its methodology was reasonable — documented keyword selection, quality assurance, sampling, and validation. The defendants in Victor Stanley I could not meet that burden, and they waived privilege over 165 documents as a result. Later that year, in Mancia v. Mayflower Textile Servs. Co., 253 F.R.D. 354, 357–58 (D. Md. 2008), Judge Grimm held that "the very act of making" boilerplate objections is "prima facie evidence of a Rule 26(g) violation" because it shows counsel did not actually pause to discover the facts.

Read together, those two opinions set the bar a 2026 eDiscovery platform has to clear. Your platform must be able to document its own search methodology, expose its validation results, and let you defend — on the record, in front of a magistrate — the choices it made on your behalf. A platform that hides its model behind opaque AI, that cannot tell you which documents it scored and why, or that ships boilerplate workflows in place of defensible ones is a Rule 26(g) liability. The eight categories below are the capabilities every modern platform should ship to keep you on the right side of that line.

Ingestion and Processing: The Boring Stuff That Determines Defensibility

Defensibility starts at intake. Before a single document is reviewed, the platform has to ingest the data without quietly destroying its evidentiary value. That means broad load-file compatibility (Concordance DAT, Relativity OPT, EDRM XML), full metadata preservation at the field level, and the ability to ingest forensic images, mobile collections, modern collaboration tools (Slack, Teams, Google Workspace), and the long tail of cloud sources without dropping custodial fields or timestamps. If the load-file inventory does not match the chain-of-custody log on the way in, the platform has already failed.
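That intake check can be automated. Below is a minimal sketch in Python; the field names (doc_id, md5) are hypothetical stand-ins for whatever your load file and custody log actually carry:

```python
# Reconcile a parsed load-file inventory against the chain-of-custody log.
# Field names ("doc_id", "md5") are hypothetical; map them to your own files.

def reconcile(load_file: list[dict], custody_log: list[dict]) -> dict:
    ingested = {(r["doc_id"], r["md5"]) for r in load_file}
    collected = {(r["doc_id"], r["md5"]) for r in custody_log}
    return {
        # Collected from the custodian but never ingested: a defensibility gap.
        "missing_from_platform": sorted(collected - ingested),
        # In the platform with no custody record: provenance is unprovable.
        "unexplained_in_platform": sorted(ingested - collected),
    }

issues = reconcile(
    load_file=[{"doc_id": "DOC-001", "md5": "9e107d9d"}],
    custody_log=[{"doc_id": "DOC-001", "md5": "9e107d9d"},
                 {"doc_id": "DOC-002", "md5": "e4d909c2"}],
)
# Here DOC-002 was collected but never made it into the platform.
print(issues["missing_from_platform"])  # [("DOC-002", "e4d909c2")]
```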

Processing is where most legacy platforms still cut corners. The non-negotiable list: container expansion (PST, OST, ZIP, NSF, MBOX) with parent-child relationships preserved; OCR for image-only files with confidence scoring; deduplication at both the global and custodial level with documented hash rules; near-duplicate identification with adjustable similarity thresholds; and email threading that surfaces the inclusive message and the unique branches. Each of these is a documented step the platform should expose — not a black box that produces a number at the end.
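To make "documented hash rules" concrete, here is a minimal sketch of the global-versus-custodial distinction. It is illustrative only: a production platform would hash a documented, normalized set of metadata fields rather than raw bytes.

```python
import hashlib

def doc_hash(raw_bytes: bytes) -> str:
    # Illustrative rule: SHA-256 over raw content. A real, documented rule
    # would normalize email fields (sender, recipients, subject, body) first.
    return hashlib.sha256(raw_bytes).hexdigest()

def deduplicate(docs: list[dict], scope: str) -> list[dict]:
    """Keep one copy per hash. scope='global' dedupes across all custodians;
    scope='custodial' keeps one copy per custodian."""
    seen, kept = set(), []
    for d in docs:
        key = (doc_hash(d["content"]) if scope == "global"
               else (d["custodian"], doc_hash(d["content"])))
        if key not in seen:
            seen.add(key)
            kept.append(d)
    return kept

docs = [
    {"custodian": "alice", "content": b"Q3 forecast attached."},
    {"custodian": "bob",   "content": b"Q3 forecast attached."},
]
assert len(deduplicate(docs, "global")) == 1     # one copy survives overall
assert len(deduplicate(docs, "custodial")) == 2  # one copy per custodian
```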

The reason this matters is the holding in Victor Stanley I. The defendants there could not reconstruct, after the fact, exactly what their search methodology had done. Victor Stanley, 250 F.R.D. at 260–62. A modern platform must produce, on demand, a complete processing report: counts in, counts out, exception lists, OCR confidence distributions, dedup ratios, and the parameters used for each. If you cannot generate that report in two clicks, you cannot certify a Rule 26(g) "reasonable inquiry" on the work the platform did before you ever opened the review interface.
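As a sketch of what those two clicks have to produce, here is one plausible shape for that report, with a reconciliation check that the counts balance. The field names are hypothetical; the point is that every document in is accounted for on the way out.

```python
from dataclasses import dataclass

@dataclass
class ProcessingReport:
    docs_in: int
    docs_out: int
    exceptions: list[str]        # file paths that failed processing
    duplicates_removed: int
    ocr_confidence: list[float]  # per-document OCR confidence, 0.0-1.0
    parameters: dict             # e.g. {"dedup_scope": "custodial", ...}

    def balances(self) -> bool:
        # Every document in must be accounted for on the way out.
        return self.docs_in == (self.docs_out + len(self.exceptions)
                                + self.duplicates_removed)

report = ProcessingReport(
    docs_in=10_000, docs_out=9_420, exceptions=["corrupt.pst"],
    duplicates_removed=579, ocr_confidence=[0.98, 0.31, 0.95],
    parameters={"dedup_scope": "custodial", "hash": "sha256"},
)
assert report.balances()  # if this fails, the platform lost documents silently
```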

Buyers should also test the failure modes. Ask the vendor: what happens to a corrupted PST? What happens to a Mandarin-language scan with mixed handwriting? What happens to a Slack export with edited messages? The honest answer is "it goes on the exception list, and here is how you triage it." A vendor that cannot describe its exception workflow is a vendor whose exception workflow is "the litigation support team handles it manually," and that is the line item that quietly turns a $40,000 matter into a $120,000 invoice.

Review and AI: What "AI-Augmented" Should Actually Mean

"AI-powered" has become the most overloaded phrase in the eDiscovery RFP. Three things are now sold under the same banner: TAR 1.0 (one-shot supervised classification trained on a control set), TAR 2.0 (continuous active learning, where the model retrains as the reviewer codes), and modern LLM classifiers (large language models that do zero-shot or few-shot relevance and privilege classification using natural-language instructions). All three are legitimate. None of them are interchangeable, and a platform that conflates them is selling you a marketing slide, not a methodology.

The capability checklist for any of the three is the same. The model has to be explainable at the document level: when it scores a document as relevant, you should be able to see which passages drove the score and why. It has to support reviewer feedback loops so coding decisions update the model. It has to expose validation metrics — precision, recall, F1, and the size and composition of the validation set — in a form a magistrate would accept. And it has to support a documented statistical sampling protocol on the null set so you can defend the recall claim on the record.
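None of that math is exotic. Here is a minimal sketch of the two calculations a magistrate would expect to see, assuming a simple random sample of the null set; all counts below are hypothetical, and a real protocol would report a confidence interval around the point estimate.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    # Standard validation metrics over a coded validation set.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def elusion_recall(produced_relevant: int, null_set_size: int,
                   sample_size: int, relevant_in_sample: int) -> float:
    """Estimate recall from a simple random sample of the null set.
    Elusion = fraction of the discard pile that is actually relevant."""
    elusion = relevant_in_sample / sample_size
    missed = elusion * null_set_size  # estimated relevant docs left behind
    return produced_relevant / (produced_relevant + missed)

# Hypothetical matter: 8,000 docs produced as relevant; 92,000 in the null
# set; a 1,500-doc random sample of the null set found 30 relevant documents.
recall = elusion_recall(8_000, 92_000, 1_500, 30)
print(f"estimated recall: {recall:.1%}")  # ~81.3%
```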

This is the direct line to Victor Stanley I. Judge Grimm did not hold that keyword search was inherently improper; he held that the producing party "bears the burden of proving" the search methodology was reasonable, and that the defendants had not documented their selection, run quality assurance, or sampled the results. Victor Stanley, 250 F.R.D. at 262. The same logic applies to AI classifiers. A platform whose AI cannot produce a validation report, cannot show you its sampling methodology, and cannot explain its individual scoring decisions is a platform that cannot help you carry the burden Judge Grimm placed on the producing party.

The pass/fail test is simple. Ask the vendor: "If opposing counsel challenges our AI workflow at a Rule 26(f) conference next week, can you give me — today — a written description of the model, the training methodology, the validation set, the precision and recall numbers, and the sampling protocol used to estimate them?" If the answer involves a meeting with the data science team next quarter, the platform is not ready for litigation.

Privilege, Redaction, and Production: The Output Stack

Most platforms compete on the review interface and quietly hand off the output stack to a litigation support team. That is exactly where the cost overruns and the defensibility failures show up. A modern platform should ship a complete output stack: privilege log generation with auto-drafted descriptions that the reviewer approves rather than types; defensible redaction with automatic PII and PHI detection plus a full audit trail of who redacted what and when; Bates numbering with custom prefixes and configurable formats; and production load-file output in every format opposing counsel might request (Concordance, Relativity, EDRM, native + image hybrid).

Privilege log generation is the most under-specified capability in most RFPs, and it is the capability that maps most directly to the Mancia rule. Boilerplate privilege log entries that recite "attorney-client communication regarding legal advice" without document-specific facts are functionally equivalent to the boilerplate objections Judge Grimm called "prima facie evidence of a Rule 26(g) violation." Mancia, 253 F.R.D. at 358. A platform that auto-drafts entries from the document content — with the actual sender, recipient, subject, and a fact-specific description — lets you sign the log under Rule 26(g) without lying about whether you conducted a reasonable inquiry into each entry.
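At its simplest, auto-drafting is a template over document-specific facts that the reviewer approves or rewrites rather than types. The sketch below is hypothetical; a production system would draft the description from document content, with a classifier or the reviewer supplying the fact-specific topic.

```python
def draft_privilege_log_entry(doc: dict) -> str:
    # Draft from actual document facts, never from boilerplate. The reviewer
    # must still read the document and approve or rewrite this description.
    # "topic" is the fact-specific element a classifier or reviewer supplies.
    return (
        f"{doc['date']} email from {doc['sender']} to {', '.join(doc['recipients'])} "
        f"re '{doc['subject']}', requesting legal advice concerning {doc['topic']}; "
        f"withheld under attorney-client privilege."
    )

entry = draft_privilege_log_entry({
    "date": "2025-03-14", "sender": "J. Ramos (GC)",
    "recipients": ["M. Chen (CEO)"], "subject": "Vendor indemnity clause",
    "topic": "indemnification exposure in the Acme MSA",
})
print(entry)
```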

Redaction defensibility is a separate but related test. The platform should support burned-in redactions that survive any export (the only kind a court will accept), automatic detection of common PII and PHI patterns, manual override for context-dependent redactions, and a redaction log that records every action by user and timestamp. The single most common defect in legacy platforms is "soft" redactions that are visible only in the review interface and disappear when the document is produced as a native file. Test it. Produce a redacted document, open the native, and look for the underlying text.
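That test is scriptable. Here is a minimal sketch using the open-source pypdf library (one assumption among many possible text extractors): pull the text layer from the produced file and search for strings that were supposedly redacted. If any appear, the redaction was cosmetic.

```python
from pypdf import PdfReader  # pip install pypdf

def redaction_survived(produced_pdf: str, redacted_strings: list[str]) -> bool:
    """Return True only if none of the redacted strings remain extractable."""
    text = " ".join(page.extract_text() or ""
                    for page in PdfReader(produced_pdf).pages)
    leaked = [s for s in redacted_strings if s in text]
    if leaked:
        # "Soft" redaction: the box is drawn but the text layer is intact.
        print(f"FAILED: still extractable: {leaked}")
    return not leaked

# Run against the actual production deliverable, not the review-interface copy.
redaction_survived("PROD001_0042.pdf", ["555-12-3456", "jane.doe@example.com"])
```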

Security, Compliance, and Pricing: The Procurement Trifecta

Security and compliance are the categories where buyers ask the most questions and verify the fewest answers. The minimum 2026 baseline: SOC 2 Type II certification (not Type I, not "in progress"), HIPAA compliance with a signed BAA available on request, encryption at rest and in transit with documented key management, single sign-on with SAML/OIDC support, granular role-based access controls, and a complete audit log of every user action. Ask for the SOC 2 report under NDA and read it. Look for the exception list. A vendor that hands you a one-page badge instead of the report is a vendor that cannot show you its exceptions.

Data residency is the question every in-house team should ask and almost none do. Where, physically, will my data sit? Is it single-tenant or multi-tenant? If multi-tenant, what is the logical isolation model? Can you guarantee data does not cross a jurisdictional boundary? For matters touching EU personal data, UK SARs, or HIPAA-protected health information, the answer to those questions is the answer to whether the platform is usable at all. DecoverAI's security page publishes the residency options, the certification status, and the BAA workflow up front, and we will ship the SOC 2 report under NDA on the same call you book.

Pricing transparency is the third leg of the procurement trifecta and the one most often used as a hidden cost vector. The pass criteria are concrete: published rates, no per-seat fees, no contract minimums, no egress fees, and a single all-in number that includes processing, hosting, AI classification, privilege log, redaction, and production. DecoverAI is $60/GB/month, all-in, no contracts. A vendor whose pricing model requires a custom quote for a hypothetical 100 GB matter is a vendor who builds the bill on the assumption that no one will compare the line items.
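The comparison arithmetic is deliberately trivial to run. In the sketch below, only the $60/GB figure comes from this article; the legacy line items are hypothetical placeholders for whatever the vendor's actual quote says.

```python
GB = 100  # benchmark matter size from the companion pricing article

all_in = GB * 60  # $60/GB/month, everything included
print(f"all-in: ${all_in:,}/month")  # $6,000/month

# Hypothetical legacy quote ($/GB): substitute the vendor's real numbers.
# Real quotes mix one-time and recurring fees, which is exactly why you
# should demand a single all-in figure before comparing.
legacy_line_items = {"hosting": 18, "analytics": 20,
                     "productions": 9, "seats_amortized": 14}
legacy = GB * sum(legacy_line_items.values())
print(f"line-item total: ${legacy:,}")  # $6,100 (illustrative only)
```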

Get the 40-criterion vendor scorecard
A 12-page white paper with the full scorecard, weighted scoring, and citation appendix — built for procurement teams.

How to Use This Checklist in a Real RFP

The eight categories above are the structure of the 40-criterion scorecard in the white paper. The way to use it in a procurement cycle is to weight the categories against the matters you actually run. A regulatory investigation team that handles HIPAA-protected data should weight Security and Compliance higher; a high-volume commercial litigation team should weight Ingestion and AI higher; an internal investigations group should weight Privilege and Redaction higher. Score each candidate vendor 1–5 on each criterion, multiply by the category weight, and look at the spread — not just the totals.
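In code, the exercise your decision memo should preserve looks like this; the category names, weights, and scores below are all placeholders, not recommended values.

```python
# Category weights sum to 1.0; tune them to your matter mix.
weights = {"ingestion": 0.20, "ai_review": 0.20, "privilege_redaction": 0.15,
           "production": 0.10, "security_compliance": 0.20, "pricing": 0.15}

vendors = {  # criterion scores 1-5, averaged per category
    "vendor_a": {"ingestion": 4.2, "ai_review": 4.8, "privilege_redaction": 3.9,
                 "production": 4.0, "security_compliance": 4.5, "pricing": 5.0},
    "vendor_b": {"ingestion": 4.6, "ai_review": 3.1, "privilege_redaction": 4.4,
                 "production": 4.5, "security_compliance": 4.6, "pricing": 2.0},
}

for name, scores in vendors.items():
    total = sum(weights[c] * scores[c] for c in weights)
    spread = max(scores.values()) - min(scores.values())
    # A high total with a wide spread means a weak category is being
    # masked by strong ones -- exactly what the memo should flag.
    print(f"{name}: weighted {total:.2f}, spread {spread:.1f}")
```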

The cooperation duty Judge Grimm articulated in Mancia cuts in the buyer's favor here too. The court there reasoned that Rule 26(g) was designed to make counsel "stop and think" before signing, and that the rule is satisfied only when the inquiry is "reasonable under the circumstances." Mancia, 253 F.R.D. at 357–58. A documented vendor evaluation — criteria, scores, weights, and a final decision memo — is the procurement equivalent of that "stop and think" requirement. It is also the artifact opposing counsel will not be able to challenge if your platform choice is ever raised at a Rule 26(f) conference.

The four DecoverAI alternatives pages walk through how the scorecard plays out against the dominant legacy platforms: Relativity, Everlaw, CS Disco, and Logikcull. Each one is structured around the same eight capability categories so you can compare like to like. The underlying product capabilities each have a dedicated page of their own: relevance detection, privilege log generation, redaction, and production setup.

The companion piece to this article, The Real Cost of Document Review: A 2026 Pricing Benchmark, walks through how the same scorecard categories translate into line-item dollars on a benchmark 100 GB matter. Read together, the two articles give you the capability framework and the pricing math you need to defend a platform decision — to your CFO, to your general counsel, and, if it comes to it, to a magistrate judge applying Victor Stanley and Rule 26(g) to the work your vendor did on your behalf.

Run your next RFP with the scorecard.

Download the 12-page white paper, or see DecoverAI scored against your current platform in a 30-minute demo.

Book a Demo →
Download the buyer's checklist
The full 12-page white paper with the 40-criterion vendor scorecard and citation appendix — no demo required.