Federal Court Holds AI Chats Are Not Privileged
Key Points
- Communications between a criminal defendant and a public AI tool were ruled not protected by attorney‑client privilege because the AI is not a lawyer, the platform's privacy policy made clear communications were not confidential, and the AI itself disclaimed giving legal advice.
- The same AI communications were also ruled not protected by work product doctrine because they were created by the defendant on his own (not by or at the direction of counsel) and did not reflect counsel's strategy or legal thought processes.
- The decision underscores that using public AI tools to draft strategy or discuss legal issues can expose that content to adversaries, and that sharing AI outputs with your lawyer does not retroactively make them privileged.
First-of-Its-Kind Ruling
A federal judge in the Southern District of New York has issued what appears to be the first written decision addressing whether communications with a publicly available generative AI platform are protected by attorney‑client privilege or work product doctrine. The court held they are not.
Clients should assume that anything entered into a public AI tool may be discoverable.
Background
In a criminal securities and wire fraud case, the government seized approximately 31 documents reflecting written exchanges between the defendant and generative AI platform "Claude." The AI interactions took place in 2025 after the defendant received a grand jury subpoena and was informed he was a target. On his own initiative (without instruction from his lawyers), the defendant used Claude to outline potential defense strategy and describe anticipated government charges.
Defense counsel argued the defendant had typed information learned from lawyers into Claude, created the materials in anticipation of indictment, and subsequently shared the AI exchanges with his attorneys. The government moved for a determination that the AI Documents were not privileged. The court granted the motion.
Attorney‑Client Privilege: Why AI Chats Were Not Protected
Attorney‑client privilege protects communications that are (1) between client and attorney, (2) intended to be and kept confidential, and (3) made for the purpose of obtaining or providing legal advice. The AI communications failed on all three elements.
No communication with an attorney:
The defendant could not plausibly maintain that Claude is an attorney. Even analogizing AI to software does not support privilege, because recognized privileges require a trusting human relationship with a professional who owes fiduciary duties.
No reasonable expectation of confidentiality:
Claude's privacy policy states that Anthropic collects user inputs and outputs, may use data for training, and may disclose data to governmental authorities and in litigation. By entering content into a third-party platform, the defendant destroyed any confidentiality. The court also held that privilege could not be resurrected merely because the AI interactions included information originally learned from counsel; by sharing that information with the AI, the defendant waived any privilege over it.
Not for obtaining legal advice:
Counsel did not instruct the defendant to use the AI, and the AI itself disclaimed giving legal advice. Non-privileged communications do not become privileged simply because they are later shared with an attorney.
Work Product Doctrine: No Protection for Self‑Directed AI Use
Work product protects materials prepared by or at the direction of counsel, in anticipation of litigation, to preserve a zone of privacy for an attorney's mental impressions and strategies. The court held the AI Documents failed to qualify because they were created solely by the defendant, on his own initiative and without counsel's direction or supervision, and because they did not reflect counsel's mental processes at the time of creation.
Practical Takeaways for Clients
- Assume that public AI platforms are not confidential. If a platform's privacy policy allows the provider to retain and share your data with third parties, courts are likely to find no reasonable expectation of confidentiality. Treat public AI systems as you would an untrusted third party.
- AI is not your lawyer. Interacting with AI does not create an attorney-client relationship, and even discussing legal strategy with AI does not invoke privilege.
- Privileged content can be waived by entering it into AI. If you type information learned from your lawyer into a public AI tool, you risk waiving privilege over that information.
- Work product centers on lawyers' thought processes. To qualify, materials must be prepared by or at counsel's direction and reflect counsel's strategy. Client-generated AI materials created independently may fall outside the doctrine.
- Update policies and training. Organizations should review AI platform terms before allowing use in legal or sensitive matters, implement policies restricting input of confidential or privileged information into public AI tools, and train employees on these risks.
- Consider safer alternatives. For legitimate AI use in legal-adjacent work, explore enterprise solutions that offer contractual confidentiality and disable training on customer data. Even then, involve counsel to avoid inadvertent waiver.
Conclusion
While generative AI may transform how we process information, its novelty does not displace longstanding doctrines of privilege and work product. Interactions with public AI tools should be treated as non‑privileged communications with a third party.
This information is intended to inform firm clients and friends about legal developments, including the decisions of courts and administrative bodies. Nothing in this alert should be construed as legal advice or a legal opinion. Readers should not act upon the information contained in this alert without seeking the advice of legal counsel. Views expressed are those of the authors and not necessarily this law firm or its clients.