Enforcement Edge
February 18, 2026

The Attorney-Client-Machine Relationship: When AI Use Jeopardizes Privilege

Artificial intelligence (AI) has moved from novelty to necessity in record time, and professionals increasingly rely on it to enhance efficiency and productivity. But in the context of legal advice and counsel, treating AI like any other productivity tool now carries additional risk. On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York (SDNY) ruled from the bench that neither the attorney-client privilege nor the work product doctrine protects documents generated by an AI tool that a criminal defendant used to explore potential legal defenses. As a question of first impression nationwide (per the court’s written memorandum issued on February 17), this development carries significant implications for how companies, individuals, and their attorneys use AI.

In October 2025, the U.S. Attorney for SDNY indicted former financial executive Bradley Heppner on fraud-related charges arising from his role at a public company. After engaging counsel and learning that he was the target of a federal investigation, Heppner used the publicly available version of Anthropic’s AI tool Claude to ask questions about the government’s investigation and possible defenses. Claude generated 31 documents (the AI Documents), which Heppner subsequently shared with his lawyers. When the government later seized Heppner’s electronic devices, his counsel asserted attorney-client privilege and work product protection over the AI Documents, maintaining that certain documents were created by Heppner “for [the] purpose of obtaining legal advice.”

The government moved to challenge the defense’s assertions of privilege, and Judge Rakoff granted the motion. In its written decision, the court concluded that the attorney-client privilege did not protect the AI Documents for several reasons. First, the court noted that “the AI Documents are not communications between Heppner and his counsel” as Claude is not an attorney. Second, “the communications memorialized in the AI Documents were not confidential,” in the court’s view, given that “Heppner communicated with a third-party AI platform” whose privacy policy permits Anthropic to collect and disclose user data to third parties, including government regulatory agencies. Third, the court concluded that Heppner did not communicate with Claude for the purpose of obtaining legal advice. Although a closer question, the relevant inquiry for the court was whether Heppner sought legal advice from Claude itself, not whether Heppner later intended to share Claude’s outputs with counsel. The court noted that Claude expressly disclaims providing legal advice. The court suggested that the analysis might differ if counsel had directed Heppner to use the AI tool, in which case the platform might function as an agent of counsel.

Judge Rakoff reached the same conclusion with respect to the work product doctrine. In doing so, the court stressed that “Heppner acted on his own when he created the AI Documents,” meaning that “the AI Documents were not prepared at the behest of counsel and did not disclose counsel’s strategy.” The court also observed that while the AI Documents may have “affected” counsel’s strategy going forward, they did not “reflect” counsel’s strategy at the time they were created.

Although grounded in settled privilege law, the ruling underscores a growing challenge for companies and individuals who rely on AI tools.

Know Your Data

The court’s decision highlights the importance of understanding the terms of use for each AI tool. Consumer-grade, off-the-shelf AI may retain queries, use that data for a variety of purposes (including training the model), and disclose it to third parties. Enterprise tools, by comparison, may allow for additional protections through data segregation, confidentiality protections, privacy terms, and restrictions on data use. Commercial workflows may integrate multiple third-party AI models and tools, each of which can place data at risk. Careful assessment and auditing of your AI tech stack are therefore crucial.

Know How AI Is Being Used

Although the Heppner decision arose in the criminal context, its rationale applies more broadly, including in civil litigation, regulatory inquiries, and internal investigations. Individuals may turn to an AI tool for a host of sensitive issues, including quick legal advice sought in lieu of counsel. In the corporate context, any employee who uses a non-enterprise, consumer version of a tool may be unwittingly generating discoverable records. That includes instances in which an employee inputs advice provided by a lawyer. Companies and their counsel should therefore consider establishing internal guidance on AI use, particularly for legal research, fact development, or legal strategy. Organizations should train their employees about such AI risks and consider restricting the use of consumer AI tools on workplace systems.

Stay Tuned

Although this is the first federal court decision to address AI and attorney privileges directly, it will certainly not be the last. As AI continues to grow, courts will continue to refine the contours of privilege and work product protections for each new fact pattern. But for now, remember that privilege protects communications with lawyers, not conversations with AI.

Arnold & Porter will continue to monitor legal and technological developments around AI. For help navigating the uncertainties and risks this powerful and rapidly evolving technology presents, contact any of our authors or the interdisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2026 All Rights Reserved. This Blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.