Dr. Geoffrey Hinton, widely considered the “Godfather of AI,” says he is particularly proud of former student Ilya Sutskever for firing OpenAI CEO Sam Altman in 2023.
Sutskever was one of several OpenAI board members who led a coup against Altman in 2023, ousting him from the company. Pressure, from both inside and outside the company, ultimately led to Altman’s return, with Sutskever eventually leaving himself.
At the time of Altman’s ouster, reports indicated that Sutskever and the other board members were concerned that Altman was straying too far from OpenAI’s primary goal of safe AI development. The board felt Altman was pursuing profit at the expense of safety, a narrative that has been repeated by other executives who have left the company in recent months.
Hinton is the latest to lend weight to those concerns. In a video post following his Nobel Prize win, Hinton praised the students he mentored over the years, particularly calling out Sutskever.
“I’d also like to acknowledge my students,” Hinton says in the video. “I was particularly fortunate to have many very clever students, much cleverer than me, who actually made things work. They’ve gone on to do great things.
“I’m particularly proud of the fact that one of my students fired Sam Altman, and I think I better leave it there and leave it for questions.”
Hinton then goes on to describe why Sutskever was involved in firing Altman.
“So OpenAI was set up with a big emphasis on safety,” he continues. “Its primary objective was to develop artificial general intelligence and ensure that it was safe.
“One of my former students, Ilya Sutskever, was the chief scientist. And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”
Hinton has long been a vocal advocate for the need to develop AI with safety concerns front and center. He previously worked on AI at Google before leaving the company and sounding the alarm over its rushed efforts to catch up with OpenAI and Microsoft.
Since leaving Google, Hinton has warned of the dangers AI poses, saying steps must be taken to ensure it doesn’t gain the upper hand.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Dr. Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
“I don’t think they should scale this up more until they have understood whether they can control it,” he added.