We recently reported in our alert on United States v. Heppner that Judge Jed Rakoff of the Southern District of New York ruled on February 10, 2026 that documents generated by a non-lawyer through a consumer version of Anthropic's Claude AI were not protected by the attorney-client privilege or the work-product doctrine, in part because the version of the tool used potentially exposed its inputs and outputs to the third party that operates the tool. That same day, a federal magistrate judge in the Eastern District of Michigan reached the opposite conclusion in the civil case Warner v. Gilbarco, Inc., denying a motion to compel discovery of a pro se plaintiff's use of AI tools such as ChatGPT and holding that such materials are protected under the work-product doctrine even though the third-party operator potentially had access. Although the holdings appear opposite, the differing facts of the two cases reveal a consistent analytical approach. Taken together, these decisions highlight the fact-specific nature of privilege and work-product analysis in the AI context and underscore the importance of: 1) how AI tools are used in connection with legal matters; and 2) pairing the right tool with each task.
Contrasting the Two Decisions
The Heppner and Warner decisions, issued the same day, reached opposite conclusions on the discoverability of AI-generated materials—but the factual differences are instructive. In Heppner, the criminal defendant used a consumer version of Claude AI on his own initiative, without counsel's direction, and input information he had learned from his attorneys. The platform's privacy policy permitted Anthropic to disclose user data to regulators and to use prompts and outputs for model training. Judge Rakoff found that these circumstances defeated any reasonable expectation of confidentiality and that the work-product doctrine did not apply because counsel had not directed the defendant's AI use.
By contrast, Warner involved a pro se civil litigant who was effectively acting as her own counsel. The court found that the plaintiff's use of ChatGPT to assist with her litigation was protected work product because she was preparing materials in anticipation of litigation. Critically, the court rejected the argument that using a generative AI tool constitutes a disclosure to a third party that, by itself, waives work-product protection, reasoning that waiver requires disclosure “to an adversary or in a way likely to get in an adversary's hand.”[i]
Background
The underlying case in Warner involves employment-related claims brought pro se by plaintiff Sohyon Warner against Gilbarco, Inc. and Vontier Corporation. During discovery, the defendants sought extensive information about the plaintiff's use of third-party AI tools in connection with the lawsuit, including detailed questioning at the plaintiff's deposition. Specifically, the defendants moved to compel production of “all documents and information concerning her use of third-party AI tools in connection with this lawsuit.”[ii] The defendants further asked the court to overrule the plaintiff's attorney-client privilege and work-product objections to the AI materials, or alternatively, to require a privilege log covering such items.
The Court's Ruling
Magistrate Judge Anthony P. Patti denied the defendants' motion to compel AI-related materials, finding that such information is not discoverable under the Federal Rules of Civil Procedure. The court's reasoning rested on several grounds:
Work-Product Doctrine Applies to AI-Assisted Materials. The court held that even if information concerning AI use were otherwise discoverable, under these circumstances it is still subject to protection under the work-product doctrine. The court noted that the work-product doctrine expressly protects “documents and tangible things that are prepared in anticipation of litigation or for trial by another party or its representative.”[iii] Because the plaintiff was a pro se litigant, she had the right to assert work-product protection over such material.
No Waiver by Using ChatGPT. The defendants argued that the plaintiff waived work-product protection by using ChatGPT (presumably in this case the free version or a version that did not insulate the inputs and outputs of the tool from OpenAI, the party that provides ChatGPT). The court rejected this argument, explaining that work-product waiver requires disclosure “to an adversary or in a way likely to get in an adversary's hand.”[iv] Significantly, the court reasoned that “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.”[v] This stands in notable contrast to Heppner, where Judge Rakoff treated the AI platform as a third party for privilege purposes based on its terms of service permitting data disclosure.
Mental Impressions Protected. The court agreed with the plaintiff's characterization that the defendants' motion improperly sought the plaintiff's “internal analysis and mental impressions—i.e., her thought process—rather than any existing document or evidence, which is not discoverable as a matter of law.”[vi] The court also agreed that the defendants' theory “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”[vii]
Relevance and Proportionality Concerns. The court characterized defendants' request as a “fishing expedition” that was “untethered from Rule 26 relevance.”[viii] Even if marginally relevant, the court found that information about AI use was not proportional to the needs of the case under Rule 26(b)(1).
No Evidence of Protective Order Violation. The court noted that defendants had “no evidence of Plaintiff having violated the protective order by uploading documents marked confidential onto an AI platform,” further undermining the basis for the discovery request.[ix]
Practical Implications
Despite the differing outcomes, the recommended steps we outlined in our prior alert remain applicable. The fact that the Warner court found work-product protection intact does not diminish the need for careful AI governance—indeed, the court's analysis turned on several factors that organizations can and should control. Accordingly, we continue to recommend the following:
- Be intentional: As we noted in our prior alert, reasonable expectations of privacy continue to be of paramount importance when determining whether a tool is suitable for use with confidential or privileged information. Ensure that your organization is conducting proper due diligence when selecting tools and determining permissible applications. Although the Warner court characterized generative AI programs as “tools, not persons” and did not scrutinize ChatGPT's terms of service, a different court—as Heppner demonstrates—may reach the opposite conclusion by focusing on the platform's privacy policy and data practices. Organizations cannot rely on receiving Warner-like treatment and should assume that platform terms will be examined.
- Audit AI usage policies: This recommendation from our prior alert is reinforced by Warner. Confirm whether your organization permits use of consumer-grade (unsecured) AI tools and make sure that only appropriate applications are allowed—for example, those that do not involve confidential or privileged information. The Warner plaintiff prevailed in part because defendants could point to no evidence that she had uploaded confidential materials in violation of a protective order. Organizations that lack visibility into how employees use AI tools may not be so fortunate.
- Implement guardrails: As we emphasized following Heppner, restrict input of privileged, confidential, or investigation-related information into consumer AI systems absent a vetted enterprise agreement and clear internal protocols. This is required by legal ethics opinions, and the Warner decision does not change that calculus. The plaintiff in Warner prevailed because she was acting pro se, effectively as her own counsel, and was preparing materials in anticipation of her own litigation. Most organizational contexts do not present such facts. In Heppner, the defendant acted on his own initiative without counsel's direction, which the court found fatal to the work-product claim. Privilege-related decisions should continue to be made by those who best appreciate the risks, such as counsel. Clients and third-party consultants who handle privileged information generally should use AI tools for legal assistance within the confines of the attorney-client relationship, for example, under terms of engagement or Kovel agreements.
- Train personnel: Our prior recommendation to train personnel is only strengthened by the Warner decision. Ensure employees understand the various considerations that go into determining whether a specific AI tool is appropriate for a specific use. Both Heppner and Warner underscore that courts will “uphold the protections afforded the thought processes and litigation strategies of both sides”—but only where appropriate safeguards are in place. The divergent outcomes in these two cases, issued the same day, illustrate how fact-specific these determinations are and why training matters.
Conclusion
The Heppner and Warner decisions, issued on the same day by different federal courts, demonstrate that courts are actively grappling with how traditional privilege and work-product doctrines apply to AI-generated materials. While Heppner reinforces the importance of using properly secured AI tools with confidential or privileged information and ensuring that AI use is directed by counsel, Warner suggests that not all AI-assisted litigation work will ultimately be subject to discovery. Indeed, despite the seemingly opposite outcomes, both decisions appear to rely on the same basic analytical framework. The critical factors appear to be the contractual and technical circumstances of the particular AI platform at issue, whether counsel was involved in and/or directed the AI use, whether confidential or privileged information is entered into the tool, and whether the materials reflect litigation strategy prepared in anticipation of litigation.
Organizations should take this opportunity to reassess AI governance frameworks and usage policies. As AI adoption in legal services continues to expand, we expect courts to continue scrutinizing how these tools intersect with privilege, confidentiality, and waiver doctrines.
Finally, it is worth noting that the Warner decision has significant access-to-justice implications, in this case leveling the field for a pro se litigant who would likely not otherwise have access to the type of secured tools that law firms typically (and should) employ.
The Proskauer team stands ready if you would like assistance reviewing AI usage policies, enterprise agreements, or privilege-protection protocols.
[i] Warner v. Gilbarco, Inc., No. 2:24-cv-12333, at 11 (E.D. Mich. Feb. 10, 2026).
[ii] Id. at 10.
[iii] Id.
[iv] Id. at 11.
[v] Id. at 12.
[vi] Id.
[vii] Id. at 12-13.
[viii] Id. at 12.
[ix] Id. at 10 n.3.