OpenAI CEO Sam Altman said that the company's current and most advanced artificial intelligence model—widely regarded as the best generative AI tool to date—"kind of sucks," and promised the world will agree when GPT-4 is superseded by GPT-5, which is set to be released in a few months.
In a recent interview with AI researcher Lex Fridman, Altman unpacked the current state of artificial intelligence technology and the red-hot AI industry. In the two-hour-long, wide-ranging conversation, Altman and Fridman discussed the prospects for achieving Artificial General Intelligence (AGI), the OpenAI board coup drama, and his feelings about colleague Ilya Sutskever and competitor Elon Musk.
Altman also addressed the company’s approach to open-source technology and the importance of democratizing AI.
Here are five key takeaways from the interview that you may have missed:
1. “We will release an amazing model this year”
The highly anticipated GPT-5 update is now visible on the horizon, with Altman finally confirming that it will be released later this year—although the name of the new version is still not set. OpenAI's current GPT-4 Turbo is arguably the best large language model (LLM) available. However, Altman believes that GPT-5 will significantly outperform its predecessor.
"We will release an amazing new model this year—I don't know what we'll call it," he said. "We'll release over in the coming months many different things I think that'd be very cool."
Altman's expectations for the upgrade are high. He cited OpenAI's commitment to continuous improvement and its drive to push the boundaries of what AI can do. He said OpenAI's approach to deploying models has been a key factor in its success.
"Part of the reason that we deploy the way we do... we call it iterative deployment,” Altman told Fridman. “Rather than build in secret until we got all the way to GPT-5, we decided to talk about GPT 1, 2, 3, and 4," Altman said.
This strategy, he argued, allows the world to adapt to AI advancements gradually and encourages thoughtful consideration of their implications.
2. “I think it kind of sucks”
Sam Altman's assessment of GPT-4 might come as a surprise, considering that the model is currently considered the best in the field. However, Altman has the perspective of someone who knows what's coming next: GPT-4 "kind of sucks" compared to how much better he thinks the new LLM will be.
“I think it [GPT-4] is an amazing thing, but it's relative to where we need to get to and where I believe we will get to," he said.
GPT-4 has undoubtedly made impressive strides in various applications, from natural language processing to image generation to coding. But Altman's expectations for GPT-5 are even higher—even though he wasn't too specific about what that will look like.
“I expect that the delta between 5 and 4 will be the same as between 4 and 3," he said. "I don't want to downplay the accomplishment of GPT-4, but I don't want to overstate it either. I think this point that we are on an exponential curve—we will look back relatively soon at GPT-4 like we look back at GPT-3 now.”
When GPT-3 came out, the entire AI space—and the tech industry in general—reacted with shock. Many said it was revolutionary, and some immediately declared that it meant AGI was imminent. The hype has barely subsided, but now that GPT-4 has been around for a year, GPT-3's answers and capabilities look comparatively poor.
A new model would have to be pretty powerful to make GPT-4 look like a poor performer—and that is exactly what Altman is promising in 2024.
Those who want to know about the mysterious Q* project will have to keep waiting and speculating, however.
"We are not ready to talk about that," Altman said, although he hinted that OpenAI is heavily interested in making its model reason better.
3. “We'll return to work on developing robots”
Altman also reflected on the mad scramble to achieve AGI, saying that “the road to AGI should be a giant power struggle—not ‘should,’ I expect that to be the case.”
At OpenAI, he said, the journey will involve hardware as well as software.
“We'll return to work on developing robots,” he said. "I think it's sort of depressing if we have AGI and the only way to get things done in the physical world is to make a human go do it—I really hope that as part of this transition, we also get humanoid robots of some sort."
The discussion suggests OpenAI sees the potential of combining AI with physical systems to create more versatile and capable machines.
Rumors have been circulating that Altman has been in conversations to launch a hardware startup focused on building custom chips for AI applications. This potential venture could complement OpenAI's renewed focus on robotics, providing the necessary hardware infrastructure to support the development of advanced humanoid robots.
Altman's vision hopefully skews more toward WALL-E and less toward the Terminator.
4. “I love Ilya—I have tremendous respect for Ilya”
Fridman prompted Altman to reflect on the dramatic board coup last year, which he described as "definitely the most painful professional experience of my life, and chaotic, and shameful, and upsetting, and a bunch of other negative things."
“There were great things about it, too,” he added.
Altman dispelled rumors of tension between him and OpenAI researcher and former board member Ilya Sutskever, who was characterized as instrumental in the board's dramatic action in November. After almost 90% of the company threatened to resign, Altman was ultimately reinstated as CEO, and Sutskever later apologized for his actions. He hasn't been heard from publicly since.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
"I love Ilya, I have tremendous respect for Ilya," he said. “I don't know if I can say anything about his plans right now—that's a question for him— but I really hope we work together for certain the rest of my career.”
Altman's praise for Sutskever included his focus on safe AI development.
“One of the many things that I really love about Ilya is he takes AGI and the safety concerns—broadly speaking, including things like the impact this is going to have on society—very seriously," he said. "So Ilya has not seen AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.”
5. “I would definitely pick a different name”
Altman addressed criticisms of OpenAI, its decision not to release its models as open-source software, and its transition from a non-profit to a for-profit company.
Altman admitted that the company's name might not have been the best choice given how it has evolved.
“I would definitely pick a different name," he said. “I think we should open source some stuff and not other stuff—it does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.”
The rift came to a head when Tesla CEO Elon Musk sued OpenAI for being “Closed AI,” and Fridman pressed Altman to provide context around the dispute.
"He thought OpenAI was going to fail, he wanted total control to turn it around, we wanted to keep going in the direction that now has become OpenAI,” Altman explained. “He also wanted Tesla to be able to build an AGI effort; at various times, he wanted to make OpenAI into a for profit company that he could have control over or have it merged with Tesla.
“We didn't want to do that, and he decided to leave, which is fine,” Altman continued. He pointed out that Musk only announced that his own AI model, Grok, would be open source after his attack on Altman's company was deemed hypocritical by the community.
"I don't think open source versus not is what this is really about for him,” Altman said. “I think this whole thing is unbecoming of a builder, and I respect Elon as one of the great builders of our time... it makes me extra sad he's doing it.”
Altman emphasized the importance of democratizing AI, which he suggests justifies the “open” in his company’s name.
“One of the things that I think OpenAI is doing is putting powerful technology in the hands of people for free,” he said. “As a public good, we don't run ads on our free version, we don't monetize it in other ways.”
Altman also shared advice for other founders.
"I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for profit arm later—I'd happily discourage them from doing that,” he said. “I don't think we'll set a precedent here.”
Edited by Ryan Ozawa.