In brief

  • A federal court ruled AI chats lack legal privilege because Claude holds no law license.
  • Big law firms are now adapting their strategies, even as court rulings on the question conflict.
  • Enterprise AI tools, used under attorney direction, may still qualify for protection.

Two months ago, a federal judge in New York ruled that a fraud defendant's private conversations with Anthropic's Claude were fair game for prosecutors. The legal industry is still working out what that means, and it is moving fast.

More than a dozen major U.S. law firms have since issued client advisories warning that conversations with AI chatbots like Claude and ChatGPT carry no legal protection when they touch legal matters. Some have gone further: firms are now embedding that warning directly into the contracts they sign with clients before representation even begins.

According to Reuters, New York firm Sher Tremonte—which regularly represents white-collar criminal defendants—added language to a March engagement agreement stating that "disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." It is believed to be among the first firms to translate a court ruling into a formal contractual obligation for clients.

"We are telling our clients: You should proceed with caution here," Alexandria Gutiérrez Swette, a lawyer at New York-based Kobre & Kim, told Reuters.

Other firms are now racing to set guardrails. Reuters reports that O'Melveny & Myers and others have told clients to use only "closed," enterprise-grade AI systems, acknowledging that even enterprise AI remains largely untested in court on this question.

Debevoise & Plimpton went a step further with tactical advice: If a lawyer specifically directs a client to use an AI tool, the client should say so inside the chatbot prompt itself. The firm suggested writing "I am doing this research at the direction of counsel for X litigation." The idea appears to be laying the groundwork to invoke the Kovel doctrine, which can extend attorney-client privilege to non-lawyers working as an attorney's agent.

The ruling that shook the practice

The urgency traces back to United States v. Heppner, decided in February by Judge Jed Rakoff of the Southern District of New York. Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings, had been indicted on five federal counts, including securities fraud and wire fraud. After receiving a grand jury subpoena, he used Anthropic's Claude on his own to map out his defense—generating 31 documents the FBI later seized from his home.

Judge Rakoff ruled those documents could not be shielded for three reasons: Claude is not an attorney, Anthropic's own privacy policy reserves the right to share user data with third parties including government regulators, and Heppner acted independently rather than at his lawyers' direction. No attorney-client relationship "could exist," the judge wrote, "between an AI user and a platform such as Claude."

The ruling landed as a first-of-its-kind written opinion on AI and attorney-client privilege in the United States. It also landed as a wake-up call for a profession that had been quietly watching clients turn to chatbots for legal guidance without considering what happens when those conversations end up in a courtroom.

Rakoff himself left that door open. He noted during the Heppner hearing that had counsel directed the defendant to use Claude, the AI "might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege." That line is now something of a lifeline for firms designing new AI protocols.

The court landscape is not entirely settled. For example, in Warner v. Gilbarco, a court ruled that a self-represented plaintiff's ChatGPT conversations were protected as work product, because AI tools are "tools, not persons" and sharing information with software is not the same as disclosing it to an adversary.

A Colorado court reinforced that logic on March 30 in Morgan v. V2X, also protecting a pro se litigant's AI work product, though it went further by ordering the plaintiff to disclose which AI tool he used and barring confidential discovery materials from being fed into platforms that allow data training.

The pattern is taking shape: If you're a represented party who decided on your own to use a consumer AI chatbot, you're exposed. If you're representing yourself in a civil case, you may have more cover. The difference between those two scenarios is now one of the sharper fault lines in U.S. evidence law.

Justin Ellis of MoloLamken told Reuters that more rulings will eventually clarify when AI chats can be used as evidence. Until then, the legal profession's version of that clarity is showing up in engagement letters and client emails, and in advice that would have seemed strange two years ago: think carefully about what you type into a chatbot, because someone else may read it.

The Los Angeles Superior Court is separately piloting AI tools for judges to handle case summaries and draft rulings—the same technology entering legal workflows from the bench while lawyers scramble to manage it from the client side. Decrypt has also previously covered privacy-focused AI alternatives that avoid centralizing conversation data, a product category whose pitch just got a significant real-world test case.
