In brief
- Altman believes ChatGPT is already more powerful than any human who has ever lived.
- Altman referred to this moment as an “event horizon” as AI approaches superintelligence.
- ChatGPT now has 800 million weekly users, who Altman said rely on the technology.
Humanity may already be entering the early stages of the singularity, the point at which AI surpasses human intelligence, according to OpenAI CEO Sam Altman. In a blog post published Tuesday, Altman said humanity has crossed a critical inflection point—an “event horizon”—marking the beginning of a new era of digital superintelligence.
“We are past the event horizon; the takeoff has started,” he wrote. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
Altman’s analysis comes at a time when leading AI developers warn that artificial general intelligence could soon displace workers and disrupt global economies, outpacing the ability of governments and institutions to respond.
The singularity is a theoretical point at which artificial intelligence surpasses human intelligence, leading to rapid, unpredictable technological growth and potentially profound changes to society. An event horizon is a point of no return: once crossed, the trajectory of an object, in this case AI, can no longer be altered.
Altman argued that we’re already entering a “gentle singularity”: a gradual, manageable transition toward powerful digital superintelligence rather than a sudden, wrenching change. In his telling, the takeoff has begun but remains comprehensible and beneficial.
As evidence of that, Altman pointed to the surge in ChatGPT’s popularity since its public launch in 2022: “Hundreds of millions of people rely on it every day and for increasingly important tasks,” he said.
The numbers back him up. In May 2025, ChatGPT reportedly had 800 million weekly active users. Despite ongoing legal battles with authors and media outlets, as well as calls for pauses on AI development, OpenAI shows no signs of slowing down.
Altman emphasized that even slight improvements in the technology could deliver substantial benefits. But a small misalignment, scaled across hundreds of millions of users, could have serious consequences.
To guard against such misalignments, he proposed several measures, including:
- Ensure AI systems act in line with humanity’s long-term goals, not just short-term impulses.
- Avoid concentrated control by any one person, company, or country.
- Start global discussions now on what values and limits should guide the development of powerful AI.
Altman said the next five years are critical for AI development.
“2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same,” he said. “2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.”
By 2030, Altman predicted, both intelligence and the capacity to generate and act on ideas will be widely available.
“Already, we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it,” he said, pointing out how quickly people shift from being impressed by AI to expecting it.
As the world anticipates the rise of artificial general intelligence and the singularity, Altman believes the most astonishing breakthroughs won’t feel like revolutions. They’ll feel ordinary, the bare minimum AI companies must offer just to enter the market.
“This is how the singularity goes: wonders become routine, and then table stakes,” he said.
Edited by Josh Quittner