In brief

  • Anthropic added passport and selfie verification for Claude—no other major AI chatbot currently requires the same.
  • The move comes weeks after millions joined Claude specifically over OpenAI's surveillance deal.
  • Verification data goes to Persona's servers, not Anthropic's, and won't be used to train models.

Anthropic quietly published identity verification requirements for Claude this week, asking certain users to hand over a government-issued photo ID and a live selfie, a step none of its major competitors currently requires.

“We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures,” Anthropic said. “We only use your verification data to confirm who you are and not for any other purposes.”

Millions of users fled OpenAI for Anthropic in February after OpenAI signed a deal to deploy AI on Pentagon classified networks—a contract Anthropic turned down over concerns about mass surveillance and autonomous weapons. Daily signups broke records, and free users were up 60% since January, Anthropic said at the time. The privacy-conscious crowd had found its home.

That crowd, it seems, may now have documents to prepare if it wants to keep using Claude. Early reactions have been sharply negative, with users noting that the verification is a deliberate choice by Anthropic rather than a regulation or government mandate imposed on it as a service provider.

According to the help center page, which went live on April 14, Anthropic selected Persona Identities as its verification partner—the same KYC infrastructure used across financial services—and requires a physical, undamaged passport, driver's license, or national identity card. Photocopies, mobile IDs, and student credentials don't count. A live selfie may also be required.

The policy isn't universal yet. Verification will trigger when accessing "certain capabilities," during "routine platform integrity checks," or as part of safety and compliance measures. Anthropic hasn't said publicly which features are gated, or what user behavior might prompt a check. The company did not immediately respond to Decrypt's request for additional details.

On data handling, Anthropic draws a careful line: your ID and selfie go to Persona's servers, not Anthropic's own systems. The company says it remains the data controller setting the terms, and that Persona can use the information to verify identity and improve fraud detection. The data is encrypted in transit and at rest, excluded from model training, and won't be shared with third parties for marketing, a promise Anthropic has made since its earliest commercial policies.

Careful promises, though, have a history of meeting careless infrastructure. An October 2025 breach at Discord exposed roughly 70,000 government IDs that users had submitted for age verification. Persona is a serious player in this space, but repeated breaches of ID custodians have shown that no third party holding government documents is immune.

Tighter identity controls also fit a pattern Anthropic has been building toward. In December, the company announced classifiers to detect users who self-identify as minors. Multiple adult users had their accounts suspended anyway, reporting that entire project histories were wiped while they tried to appeal incorrect flags.

Accounts registered from regions Anthropic doesn't formally serve are also subject to bans—a detail that lands hardest on Chinese users accessing Claude through intermediaries, since a live selfie matched against a physical government document is hard to fake your way through.
