SDNY’s Written Opinion on AI Privilege: Further Guidance from Judge Rakoff
Good Pine P.C. | AI · Evidence Law · Litigation Strategy | New York · New Jersey
In February 2026, Judge Jed S. Rakoff of the Southern District of New York issued a written opinion explaining his ruling that documents prepared using the consumer version of the AI model Claude were not protected by attorney-client privilege or the work product doctrine. The opinion — issued on February 17, explaining a February 10 oral ruling — provides the most detailed judicial analysis to date of how these foundational evidentiary protections interact with AI tools used in legal contexts.
For businesses and executives who use AI in connection with litigation or legal matters, the opinion contains important clarifications and direct practical warnings. It is not a narrow technical ruling — it is a statement about the conditions under which privilege can and cannot survive the use of AI tools, and it sets a framework that is likely to influence courts and practitioners well beyond this case.
The Court's Decision
Judge Rakoff accepted several facts in defendant Heppner's favor. Heppner was communicating with Claude about factual and legal issues in anticipation of litigation. He incorporated information conveyed by his counsel into his AI prompts. He intended to share the AI-generated materials with counsel, and he ultimately did so. Despite all of this, the Court rejected both the attorney-client privilege and work product claims. The favorable facts were not enough because the legal framework for privilege requires more than favorable intent: it requires a qualifying relationship and a reasonable expectation of confidentiality, neither of which was present.
Attorney-Client Privilege
No Privileged Relationship with an AI Tool
Judge Rakoff reasoned that communications with Claude cannot satisfy the foundational requirement of attorney-client privilege: a confidential communication between a client and a licensed attorney. The Court stated that all recognized privileges require "a trusting human relationship" with a licensed professional who owes fiduciary duties and is subject to professional discipline. An AI platform — regardless of how it is used or what legal research it facilitates — does not constitute that relationship. The privilege is built on accountability, and AI tools have none of the accountability structures that justify it.
No Reasonable Expectation of Confidentiality with Consumer AI
Even if the substance of the communications were otherwise privileged, the Court held that Heppner could not have had a reasonable expectation of confidentiality because he used the consumer version of Claude. The Court relied on the platform's privacy policy, which stated that user data may be used for training and reserved the right to disclose information to third parties — including governmental regulatory authorities — in connection with claims, disputes, or litigation. Having been placed on notice of those terms, Heppner could not reasonably expect that his communications would remain confidential.
The Court drew a sharp distinction between this situation and a client drafting private notes to share with counsel. The difference is that Heppner first shared the equivalent of his notes with a third party — the AI platform — before they ever reached his attorney. Confidentiality, once compromised, cannot be reconstructed after the fact.
Open Questions
The opinion does not definitively resolve whether privilege can never attach to communications made through consumer AI tools, or whether Heppner simply failed to meet his evidentiary burden. The Court acknowledged that different facts might support a different result — if a user could establish lack of awareness of the relevant privacy policy, lack of knowledge that the AI could train on or disclose the data, and a factual showing that it was extraordinarily unlikely any human would access the data. As a practical matter, however, satisfying that burden will be extremely difficult for most users of consumer AI platforms.
Enterprise AI Tools: A Potentially Different Analysis
The opinion explicitly suggests that courts may analyze enterprise AI tools differently. Enterprise versions typically do not train on user inputs; they contractually commit to confidentiality and limit disclosure absent extraordinary circumstances. Judge Rakoff cited Judge Stein's decision in In re OpenAI Inc., Copyright Infringement Litigation, which also involved consumer rather than enterprise AI. The consumer-versus-enterprise distinction appears likely to be the central dividing line in future privilege disputes involving AI.
The practical implication is direct: the type of AI license and the governing contractual terms are now legally material facts — not merely technology procurement considerations. A business that chooses a consumer AI platform over an enterprise one is making a decision with potential litigation consequences it may not appreciate until those consequences materialize in a privilege dispute.
The Kovel Doctrine and the Requirement of Counsel's Direction
Heppner argued that he used Claude to communicate more effectively with his attorney. The Court found this insufficient because Heppner was acting on his own initiative — "of his own volition" — not at counsel's direction. Claude itself disclaimed providing legal advice, which further undermined any argument that the communications were in furtherance of the attorney-client relationship.
However, the Court acknowledged that a different result might follow if counsel had directed the use of the AI tool and confidentiality had been maintained. Under the Kovel doctrine, attorney-client privilege can extend to non-lawyer professionals engaged by counsel to assist in providing legal advice, such as accountants, financial analysts, or technical consultants. Judge Rakoff suggested that under the right circumstances, an AI tool might arguably function in that capacity. Those circumstances were not present here, but the doctrinal door was not closed. For practitioners, this signals a potentially significant avenue for structuring AI use in litigation so that privilege protection is preserved.
Work Product Doctrine
Judge Rakoff also rejected the work product claim. Although the materials were prepared in anticipation of litigation — which satisfies the threshold requirement of the doctrine — they were not prepared by or at the direction of counsel, and they did not reflect counsel's legal strategy or mental processes. The Court declined to follow a magistrate judge's decision in Shih v. Petal Card, Inc., which had extended work product protection to materials independently prepared by a client. Judge Rakoff reasoned that expanding the doctrine that far would hollow out its core purpose: protecting attorneys' mental processes, strategic thinking, and litigation preparation from discovery.
The implication for businesses is clear. AI-generated materials prepared independently — even in anticipation of litigation — will not receive work product protection unless they are prepared at counsel's direction and in furtherance of counsel's legal strategy. The key is not the timing of preparation or the purpose of the user; it is the role of the attorney in directing the work.
Practical Guidance for Businesses and Executives
This decision establishes that AI governance is a litigation risk management issue — not merely a technology policy question. Five practices are now clearly indicated for businesses that use AI in legal contexts.
1. Prefer enterprise AI tools. Use platforms that do not train on user inputs and that contractually protect confidentiality; the consumer-versus-enterprise distinction is now legally significant.

2. Act at counsel's direction. AI research conducted for litigation purposes should be clearly directed by counsel and documented as such.

3. Document the context in prompts. Where appropriate, prompts should reflect that the work is being performed at counsel's instruction in connection with specific litigation.

4. Maintain careful privilege logs. Logs should accurately record the basis for privilege and reflect the confidentiality expectations applicable to the AI platform used.

5. Update internal AI use policies. Companies should review and revise their AI policies specifically from a litigation risk and privilege protection perspective.
Conclusion
Judge Rakoff's opinion reinforces a foundational principle: privilege is grounded in human relationships and reasonable expectations of confidentiality. Consumer AI platforms that train on user data and reserve broad disclosure rights create substantial risk that privilege claims will fail — regardless of the user's intent or the care with which the materials were ultimately shared with counsel.
For businesses operating in New York and New Jersey, the message is direct. AI use must be integrated into legal risk strategy from the outset. Decisions about which tools to use, how they are licensed, and how AI-assisted work is documented may determine whether critical communications are protected or discoverable. Good Pine P.C. assists businesses in reviewing AI governance policies, structuring AI use in litigation contexts, and advising on privilege and work product issues arising from the use of AI tools.