OpenAI AGI Head Resigns, Warns No One Is Ready for What’s Coming

Written by Matt Milano
    Miles Brundage, OpenAI’s Senior Advisor for AGI Readiness, has resigned, warning that no one is ready for what’s coming.

    Brundage took to Substack to explain his decision to leave what he described as his “dream job.” Overall, Brundage was complimentary of his time at OpenAI, saying he is leaving in order to have more independence to pursue the research he considers important.

    The opportunity costs have become very high: I don’t have time to work on various research topics that I think are important, and in some cases I think they’d be more impactful if I worked on them outside of industry. OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it’s hard for me to publish on all the topics that are important to me. To be clear, while I wouldn’t say I’ve always agreed with OpenAI’s stance on publication review, I do think it’s reasonable for there to be some publishing constraints in industry (and I have helped write several iterations of OpenAI’s policies), but for me the constraints have become too much.

    Brundage also says he wants to be less biased in his research, something that is difficult to do when working for the leading AI firm.

    I want to be less biased: It is difficult to be impartial about an organization when you are a part of it and work closely with people there everyday, and people are right to question policy ideas coming from industry given financial conflicts of interest. I have tried to be as impartial as I can in my analysis, but I’m sure there has been some bias, and certainly working at OpenAI affects how people perceive my statements as well as those from others in industry. I think it’s critical to have more industry-independent voices in the policy conversation than there are today, and I plan to be one of them.

    The most interesting part of Brundage’s post involves his take on AGI (artificial general intelligence) and whether OpenAI and the industry are prepared for it.

    In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.

    To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).

    Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.

    AGI Readiness Team Is Disbanding

    Based on Brundage’s post, it appears the AGI Readiness team is largely disbanding.

    The Economic Research team, which until recently was a sub-team of AGI Readiness led by Pamela Mishkin, will be moving under Ronnie Chatterji, OpenAI’s new Chief Economist. The remainder of the AGI Readiness team will be distributed among other teams, and I’m working closely with Josh Achiam on transfer of some projects to the Mission Alignment team he is building.

    Given his assessment that no one is prepared for AGI’s emergence, the decision to disband the team is especially disturbing.

    OpenAI’s Ongoing Issues

    Brundage’s departure adds to a growing list of executives and researchers who have left OpenAI. Since a boardroom coup briefly ousted Sam Altman last year, some of the company’s highest-profile personnel have departed in the wake of his return.

    Interestingly, some of the biggest departures—including former CTO Mira Murati and co-founder Ilya Sutskever—have come amid allegations over Altman’s leadership and concerns that OpenAI has lost its focus on safe AI development. Jan Leike, one of the co-leads of the team responsible for researching existential threats from AI, left the company when his team was disbanded. Unlike Brundage, he wrote a scathing rebuke of OpenAI’s safety culture.

    However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

    I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

    These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.

    Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

    Building smarter-than-human machines is an inherently dangerous endeavor.

    OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

    But over the past years, safety culture and processes have taken a backseat to shiny products.

    If Brundage is right, and AI firms and the world are truly not ready for AGI, it is alarming to see some of OpenAI’s greatest minds leaving the firm when they are needed most.
