‘Godfather of AI’ Backs Lawsuit to Prevent OpenAI’s For-Profit Transition

Written by Matt Milano

The “Godfather of AI,” Geoffrey Hinton, has thrown his weight behind Elon Musk’s legal challenge to OpenAI’s plans to transition to a for-profit.

OpenAI announced plans to transition to a for-profit, even securing billions in additional funding on the condition that it complete the transition within two years. Unfortunately for the company, Musk has filed a lawsuit challenging the decision and has sought an injunction to prevent the company from moving forward until the court can make a final decision.

Musk co-founded OpenAI, making sizable cash contributions to help the company get off the ground. OpenAI was originally founded with the goal of developing AI safely, a goal the company has increasingly been accused of abandoning.

In a statement released via the youth-led organization Encode, Hinton said the following:

“OpenAI was founded as an explicitly safety-focused non-profit and made a variety of safety related promises in its charter. It received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem.”

Encode is also supporting Musk’s lawsuit, saying OpenAI should have to honor the terms under which it was founded.

“The public has a profound interest in ensuring that transformative artificial intelligence is controlled by an organization that is legally bound to prioritize safety over profits,” said Nathan Calvin, Encode’s Vice President of State Affairs and General Counsel. “OpenAI was founded as a non-profit in order to protect that commitment, and the public interest requires they keep their word.”

Concerns Over OpenAI Leadership Emerged Early On

While OpenAI and CEO Sam Altman have recently been accused of abandoning their original goal of safe AI development, documents released by OpenAI itself demonstrate that some co-founders had those concerns early on—especially regarding Altman.

Emails between the co-founders in late 2017—shortly before Musk left the company—show that Ilya Sutskever and Greg Brockman were concerned about Altman’s judgment and where his judgment might lead the company.

Sam:

When Greg and I are stuck, you’ve always had an answer that turned out to be deep and correct. You’ve been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don’t know how structure decisions will play out over the next month, year, or five years.

But we haven’t been able to fully trust your judgments throughout this process, because we don’t understand your cost function.

  • We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.
  • Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?

Interestingly, Sutskever was one of the leaders of the boardroom coup that saw Altman ousted from the company in late 2023. Hinton later expressed that he was proud of Sutskever, who was a former student of his, for firing Altman.

“I’d also like to acknowledge my students,” Hinton says in a video. “I was particularly fortunate to have many very clever students, much cleverer than me, who actually made things work. They’ve gone on to do great things.

“I’m particularly proud of the fact that one of my students fired Sam Altman, and I think I better leave it there and leave it for questions.”

Hinton went on to talk about how OpenAI had strayed from its original goal, thanks largely to Altman’s leadership.

“So OpenAI was set up with a big emphasis on safety,” he continues. “Its primary objective was to develop artificial general intelligence and ensure that it was safe.

“One of my former students, Ilya Sutskever, was the chief scientist. And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”

OpenAI Has Lost Top Talent Over Safety Concerns

OpenAI’s growing reputation for prioritizing profits over its original goal of safety has cost the company some of its top talent.

Multiple top executives, engineers, and researchers, including Sutskever, Jan Leike, Mira Murati, Jeffrey Wu, and Gretchen Krueger, have left the company. Several have written scathing denunciations of OpenAI’s approach, while others have warned that no one—including OpenAI—is ready for what is coming in terms of AI development.

Ultimately, OpenAI is increasingly isolating itself and generating concern both inside and outside the company. Only time will tell if Musk’s lawsuit to force the company to stay true to its original goals will be successful, but in the meantime it is certainly gaining major backing.
