‘Godfather of AI’ Revises His Odds of AI Destroying Humanity

Written by Matt Milano

Professor Geoffrey Hinton, considered the “godfather of AI,” has revised his odds for the risk AI poses to humanity—and it’s not good news for humans.

According to The Guardian, Hinton made his comments on BBC Radio 4’s Today program. Hinton has previously said that he believed there was a 10% chance of AI wiping out humanity in the next 30 years. The Today host asked if his estimate had changed.

“Not really, 10 to 20 [per cent],” Hinton replied.

The host pointed out that his estimate had in fact changed, with Hinton now citing as high as a 20% chance of AI destroying humanity.

“If anything,” Hinton acknowledged. “You see, we’ve never had to deal with things more intelligent than ourselves before.

“And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he added.

Hinton’s Vocal Criticism of AI Development

Hinton has been a vocal critic of AI development, resigning from his position at Google to sound the alarm about the technology’s risks.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said at the time. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

“I don’t think they should scale this up more until they have understood whether they can control it,” he added.

The OpenAI Affair

Hinton was also proud of the fact that his former student, Ilya Sutskever, was one of the individuals who led the boardroom coup against OpenAI CEO Sam Altman, ousting him over concerns about safe AI development.

“I’d also like to acknowledge my students,” Hinton said in the video in October 2024. “I was particularly fortunate to have many very clever students, much cleverer than me, who actually made things work. They’ve gone on to do great things.

“I’m particularly proud of the fact that one of my students fired Sam Altman, and I think I better leave it there and leave it for questions.”

Hinton then went on to discuss the reasons behind Sutskever’s actions, specifically in the context of AI safety.

“So OpenAI was set up with a big emphasis on safety,” he said. “Its primary objective was to develop artificial general intelligence and ensure that it was safe.

“One of my former students, Ilya Sutskever, was the chief scientist. And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”

Given his history and credentials, when Hinton revises his odds on the risk AI poses, tech leaders and lawmakers would do well to take notice.
