Business Law
The eDiscovery Minefield
Artificial Intelligence Data and Electronically Stored Information
While public discourse fixates on the flashy dangers of AI, such as hallucinations, data theft, and bad decisions, electronically stored information (ESI) is a far more pervasive operational risk. As generative AI becomes entrenched in daily business operations, it is rapidly creating highly discoverable, permanent records of corporate decision-making that require a radical shift in corporate risk management and eDiscovery strategies.
This article explores the critical distinction between protected, counsel-directed AI use and discoverable, independent employee prompts; the urgent need for litigators to adapt preservation holds and discovery requests for this new type of data; and the emerging judicial split over whether AI platforms act as neutral tools or third parties that shatter attorney-client privilege and work-product protection. Whatever the end result, this new ESI is poised to play an outsized role in future business litigation.
The Pre-Litigation Trap and Discoverable Evidence
As highlighted in recent guidance from K&L Gates, litigators must understand a critical distinction in how courts view generative AI data: the difference between data created at a counsel’s direction versus data created independently.
Although this is an evolving area of law, it currently appears that if an AI tool is used explicitly at the direction of legal counsel to evaluate a legal claim or assist in litigation strategy, the inputs and outputs may retain protection under the attorney-client privilege or the work-product doctrine. In that scenario, AI inputs and outputs are treated much like a timeline or chronology prepared at counsel's direction.
However, inputs and data created independently for exploratory business or operational purposes are likely not protected.
Consider the types of independent, exploratory prompts that employees and executives alike are inputting into AI chatbots every day:
- A frustrated manager searching: “Can I fire John for tardiness, and because his lunch always stinks?”
- A sales representative asking: “Review this NDA. If I share this confidential client list with our new marketing agency, what are the actual odds we get sued?”
- A marketing team member prompting: “Generate a logo for our new beverage that looks extremely similar to the Coca-Cola ribbon but is just different enough to avoid trademark infringement.”
In a later lawsuit for wrongful termination, contract disputes, or IP infringement, are those prompts discoverable? What about the response from the AI? Does it matter whether the prompt was put into a “public” AI chatbot that trains on user data versus a secure, enterprise-level LLM? Has the business unwittingly manufactured and handed opposing counsel a written record of its internal, unfiltered intent before litigation even commenced?
Preservation and Offensive Discovery Strategy
Unlike traditional emails that sit on corporate servers for years, AI chat histories are often ephemeral. Many AI companies automatically delete user prompts and chat histories after a set number of days (e.g., 30 days) to minimize their own data storage and privacy risks.
Litigators must adapt immediately:
- Preservation Holds: Standard preservation of evidence letters and internal litigation holds must be updated to explicitly demand the preservation of generative AI prompts, outputs, and metadata before vendor auto-delete protocols wipe them out.
- Targeted RFPs: When drafting Requests for Production (RFPs), lawyers can no longer rely on standard ESI boilerplate. Discovery requests must specifically target the verbatim prompts entered by custodians, the unedited AI-generated outputs, the specific AI platform and model version used, and the company’s internal AI acceptable use policies in effect at the time.
The Privilege Problem: An Emerging Circuit Split
When AI records are requested in discovery, courts are just beginning to grapple with the question of whether AI platforms act as neutral “tools” or as third parties that shatter confidentiality.
- Waiver of Privilege (United States v. Bradley Heppner, S.D.N.Y. Feb. 2026): The defendant independently used a consumer-grade AI (Claude) to draft legal memos based on his lawyer’s advice. The court ruled the communications were not privileged and not protected work product. Because Claude’s Terms of Service permitted data harvesting for training purposes, the court found no reasonable expectation of confidentiality. Furthermore, because the tool was used unilaterally by the client rather than at the explicit direction of counsel, protections were waived.
- Protection Maintained (Warner v. Gilbarco, E.D. Mich. Feb. 10, 2026): Reaching the opposite conclusion, a magistrate judge denied a motion to compel discovery of a party’s ChatGPT logs that contained internal legal analysis. The court ruled that AI programs are “tools, not persons.” Therefore, entering data into the AI did not constitute disclosure to a third party, keeping work-product protections intact.
Conclusion
The proliferation of generative AI introduces an inescapable new dimension to eDiscovery and corporate risk management. As illustrated by the emerging—and contradictory—case law, the line between protected legal analysis and discoverable evidence is perilously thin, often hinging on the purpose and platform used. Unfettered internal use of AI for operational queries will likely become the latest mechanism for creating admissions of intent, effectively manufacturing a roadmap for opposing counsel.
The immediate mandate for organizations and litigators is two-fold:
- For Corporations: Implement rigorous, clear AI Acceptable Use Policies that define when, how, and with what level of confidentiality different AI tools can be used. This governance must address the crucial distinction between enterprise-grade, secure LLMs and public, consumer-grade models with data harvesting ToS.
- For Litigators: Adopt a proactive stance by immediately updating preservation notices and litigation holds to explicitly encompass ephemeral AI chat logs and metadata. Discovery strategy must shift to specifically target these digital records of internal intent.
Until clearer standards emerge, companies cannot afford to treat AI prompts as casual inquiries. They must be viewed as potentially permanent, discoverable ESI that may prove or destroy a case. At this time, ensuring AI data is protected in litigation may mean proving its use was intentional, supervised, and conducted under the explicit direction of legal counsel.
Authored by Spencer K. Schneider of Schneider & Branch. Those interested in continuing the conversation may contact Spencer at sks@schneiderbranchlaw.com or 949-393-9323.