SDNY’s Written Opinion on AI Privilege: Further Guidance from Judge Rakoff
Previously we wrote about a decision by Judge Jed S. Rakoff of the Southern District of New York denying defendant Bradley Heppner’s claim that documents he prepared for legal research using the consumer version of the AI model Claude were protected by attorney-client privilege or the work product doctrine.
On February 17, 2026, Judge Rakoff issued a written opinion explaining the reasoning behind his February 10 ruling. The opinion provides important clarification—and practical warnings—for businesses and individuals using AI tools in connection with litigation.
The Court’s Decision
Judge Rakoff accepted several facts in Heppner’s favor:
He was communicating with Claude about factual and legal issues in anticipation of litigation.
He incorporated information conveyed by his counsel into his AI prompts.
He intended to share the AI-generated materials with counsel.
He ultimately did share those materials with counsel.
Despite these facts, the Court rejected both the attorney-client privilege and work product claims.
Attorney-Client Privilege
No Privileged Relationship with an AI Tool
Judge Rakoff reasoned that communications with Claude cannot satisfy the foundational requirement of attorney-client privilege: a confidential communication between a client and a licensed attorney.
The Court emphasized that recognized privileges require “a trusting human relationship” with a licensed professional who owes fiduciary duties and is subject to discipline. An AI platform—even one used for legal research—does not meet that definition.
No Reasonable Expectation of Confidentiality (Consumer AI)
Even if the substance of the communications were otherwise privileged, the Court held that Heppner lacked a reasonable expectation of confidentiality because he used the consumer version of Claude.
The Court relied heavily on the platform’s privacy policy, which:
States that user data may be used for training; and
Reserves the right to disclose information to “third parties,” including governmental regulatory authorities, even in connection with claims, disputes, or litigation.
Because users were put on notice that disclosures could occur, the Court concluded that Heppner could not reasonably expect confidentiality. The Court distinguished this situation from a client drafting private notes to share with counsel: Heppner first shared the equivalent of his notes with a third party—the AI platform.
Open Questions
The opinion leaves unresolved whether privilege can never attach to communications made through consumer AI tools—or whether Heppner simply failed to meet his evidentiary burden.
The Court did not definitively foreclose the possibility that privilege might survive if a user could establish:
Lack of awareness of the relevant privacy policy;
Lack of knowledge that the AI could train on or disclose the data; and
As a factual matter, that it was extraordinarily unlikely any human would access the data.
Practically speaking, however, satisfying that burden may be extremely difficult.
Enterprise AI Tools: A Potentially Different Analysis
The opinion suggests that courts may view enterprise AI tools differently.
Enterprise versions typically:
Do not train on user inputs;
Contractually commit to confidentiality; and
Limit disclosure absent extraordinary circumstances.
Judge Rakoff cited Judge Stein’s decision in In re OpenAI Inc., Copyright Infringement Litigation, which likewise involved consumer—not enterprise—AI tools. The distinction may prove critical in future cases.
For businesses, the takeaway is clear: the type of AI license and governing terms matter.
The Kovel Doctrine and Direction of Counsel
Heppner argued that he was using Claude in order to communicate more effectively with his lawyer. Judge Rakoff found that insufficient because Heppner was acting “of his own volition,” not at counsel’s direction. Claude itself disclaimed providing legal advice.
However, the Court acknowledged that if counsel had directed the use of Claude, and if confidentiality were present, the Kovel doctrine might apply. Under that doctrine, attorney-client privilege can extend to non-lawyer professionals retained to assist counsel—such as accountants or consultants.
Judge Rakoff suggested that, under the right circumstances, an AI tool might arguably function like a professional agent assisting counsel. But those circumstances were not present here.
Work Product Doctrine
Judge Rakoff also rejected the work product claim.
Although the materials may have been prepared in anticipation of litigation, they:
Were not prepared by or at the direction of counsel; and
Did not reflect counsel’s legal strategy.
The Court declined to follow a magistrate judge’s decision in Shih v. Petal Card, Inc., which had extended work product protection to materials prepared by a client without attorney direction. Judge Rakoff reasoned that expanding protection that far would undermine the core purpose of the doctrine—to protect lawyers’ mental processes.
Practical Guidance for Businesses and Executives
This decision underscores that AI governance is now a litigation risk issue—not merely a technology policy issue.
When AI is used in connection with legal matters:
Prefer enterprise AI tools. Use platforms that do not train on user inputs and that contractually protect confidentiality.
Act at the direction of counsel. If AI research is being conducted for litigation, ensure that it is clearly done at counsel’s instruction.
Document the context in prompts. Where appropriate, prompts should reflect that the work is being performed at the direction of counsel in connection with specific litigation.
Maintain careful privilege logs. Logs should accurately state the basis for privilege and reflect that the AI tool was used with an expectation of confidentiality.
Update internal AI policies. Companies should revisit internal AI use policies to ensure they align with litigation risk management and privilege protection.
Conclusion
Judge Rakoff’s written opinion reinforces a central principle: privilege is grounded in human relationships and reasonable expectations of confidentiality.
Consumer AI platforms—particularly those that train on user data and reserve broad disclosure rights—create substantial risk that privilege claims will fail.
For businesses operating in New York and New Jersey, the message is direct: AI use must be integrated into your legal risk strategy. Decisions about which tools to use, how they are licensed, and how they are documented may determine whether critical communications remain protected—or become discoverable.
Disclaimer
This article is provided for general informational purposes only and does not constitute legal advice. Reading this article does not create an attorney-client relationship with Good Pine P.C. The legal analysis of privilege and work product protections depends on specific facts, governing law, and the particular AI platform and contractual terms at issue. Businesses and individuals should consult qualified counsel regarding their own circumstances before relying on any AI tool in connection with legal matters.