OpenAI in Turmoil: Leadership Exodus and the Shift Toward Profit at Any Cost

Written by Rich Ord
    The ongoing shifts at OpenAI are raising eyebrows, not just because of the company’s growing dominance in the AI space but because of the internal chaos accompanying it. Since its founding in 2015, OpenAI has evolved dramatically, from a nonprofit research lab focused on advancing artificial general intelligence (AGI) for the public good to a profit-driven tech giant. Today, it’s mired in leadership turbulence, existential questions about its mission, and a strategic pivot toward monetization that’s shaking the foundation of its original purpose.

    At the center of this evolution is Sam Altman, the CEO who survived a coup in late 2023. The move to oust Altman came from concerns within the organization’s board that he was drifting too far from OpenAI’s core values. While the coup failed within days, Altman has since consolidated power, restructuring the leadership team and driving the company toward commercial goals. The recent exit of former Chief Technology Officer (CTO) Mira Murati, alongside the departures of key researchers like Ilya Sutskever, illustrates the broader unrest within the organization.


    Leadership Shifts Post-Coup

    The failed attempt to remove Sam Altman from OpenAI’s helm was a turning point in the company’s recent history. The board’s initial move to oust Altman was seen as an internal revolt driven by concerns over transparency and decision-making. Former CTO Mira Murati and co-founder Ilya Sutskever reportedly raised concerns over Altman’s leadership style, describing him as pitting executives against one another. “Altman’s leadership had become divisive,” an insider revealed, adding that “he had lost the trust of those most committed to the mission of safe AI.”

    However, the decision to oust Altman backfired almost immediately. Within days, as investors and employees rallied behind Altman, Murati and Sutskever reversed their positions, calling for Altman’s reinstatement. While the immediate coup ended with Altman back in control, the tension didn’t dissipate. In the months following, the company saw a string of high-profile departures, including Sutskever and safety researcher Jan Leike, both critical of the company’s evolving priorities.

    Murati, known for her technical prowess and operational leadership, departed suddenly in late September 2024, another shock to the system. Her exit signaled a deeper shift in OpenAI’s corporate culture. Altman acknowledged the abrupt nature of her resignation but framed it as part of a natural transition for a company in rapid growth mode: “I won’t pretend it’s natural for this to be so abrupt,” he said in a company-wide message, “but we are not a normal company.”

    The Shift from Research to Profit

    At its core, OpenAI was founded on principles of AI safety, research, and transparency. When Elon Musk, Sam Altman, and others launched OpenAI in 2015, it was heralded as a nonprofit organization with a clear mission: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” But as OpenAI’s ambitions and the scale of its research grew, so did its need for funding.

    In 2019, OpenAI transitioned to a capped-profit model, creating a for-profit subsidiary to attract the billions of dollars required for advanced AI development. With investors like Microsoft pumping in billions, the stakes—and the expectations—skyrocketed. Yet the hybrid structure of a nonprofit overseeing a profit-driven arm created tension. According to former researcher Jeffrey Wu, who worked on models like GPT-2 and GPT-3, “Restructuring around a core for-profit entity formalizes what outsiders have known for some time: OpenAI is seeking to profit in an industry that has received an enormous influx of investment in the last few years.”

    This shift culminated in recent reports that OpenAI would restructure into a full-fledged for-profit company, allowing investors to reap unlimited returns. “This is a complete break from the original ethos of the organization,” commented Sarah Kreps, director of Cornell’s Tech Policy Institute. She added that the move signaled a departure from OpenAI’s “founding emphasis on safety, transparency, and an aim of not concentrating power.”

    Rushed Product Launches and Safety Concerns

    One of the most contentious points within OpenAI has been the balance between rapid commercialization and AI safety. The company has developed a reputation for rushing product launches to outpace competitors like Google and Anthropic. One former employee described the internal culture as increasingly “product-first,” noting that safety protocols are sometimes bypassed in the rush to deploy new AI models. A key example is the launch of GPT-4o, an AI model released earlier this year.

    Safety staffers working on GPT-4o were reportedly given just nine days to complete safety checks before launch—a deadline some found impossible to meet. “We were pulling 20-hour days,” said one safety researcher, “but there was no way we could properly assess the risks in such a short time frame.” After the launch, concerns were raised about the model’s ability to create persuasive content, which could potentially lead users toward dangerous behaviors. Yet, the company pressed forward, citing competitive pressures.

    This focus on rapid product cycles has worried many within the AI safety community. Jan Leike, who left OpenAI to join competitor Anthropic, remarked in a statement: “Over the past years, safety culture and processes have taken a back seat to shiny products.” These concerns are echoed by others, who fear that OpenAI’s focus on commercializing AI tools, like its widely used ChatGPT, may come at the expense of longer-term safety initiatives.

    The Financial Pressure Behind OpenAI’s Transformation

    OpenAI’s rapid shift toward commercialization is driven in part by the enormous financial pressure the company faces. With billions of dollars invested by Microsoft, Thrive, Apple, and other entities, OpenAI has been burning through capital as it scales up its models. Current estimates suggest OpenAI is losing billions annually despite projected revenues of around $4 billion. “We can’t sustain this level of growth without significant investment,” Altman reportedly told staff in an internal meeting.

    The latest funding round, expected to close at $6.5 billion, values the company at a staggering $150 billion. Yet even with that influx of cash, OpenAI is expected to shift toward a more traditional for-profit model, potentially going public within the next few years. “There’s simply no other way to attract the level of capital we need to compete in this space,” a senior executive told Fortune.

    But with this shift comes a significant risk. OpenAI’s original nonprofit foundation will likely be reduced to a minority stakeholder, and with it, the company’s mission of developing AI in the public interest could fade. “If you remove the profit cap, you’re fundamentally changing the nature of the organization,” said Jacob Hilton, a former OpenAI employee. “This isn’t just a legal issue—it’s an ethical one.”

    A Leadership Crisis

    As OpenAI transitions into its next phase, one thing remains clear: the leadership crisis has only deepened. In addition to the departures of key figures like Murati and Sutskever, the company is grappling with internal discontent. President Greg Brockman, a long-time Altman ally, has taken a sabbatical, and other senior researchers have defected to competitors like Anthropic.

    “When I think about OpenAI, I think about Greg, and I think about Ilya,” said one former employee. “With no Ilya, it’s a different company. With no Greg, it’s a very different company.” Even Altman has acknowledged the challenges of retaining top talent, but he remains optimistic about the company’s future: “I hope OpenAI will be stronger for it, as we are for all of our transitions,” he said during a recent appearance in Italy.

    OpenAI’s transformation may be inevitable given the scale of its ambitions, but the costs—both ethical and operational—are mounting. For investors, this shift may bring financial returns, but for those who joined the company with the goal of advancing AGI for the benefit of humanity, it feels like a betrayal. As one former researcher put it, “We were supposed to be building the future—now it just feels like another tech company chasing profits.”

    OpenAI’s journey from a nonprofit AI research lab to a profit-driven tech giant reflects broader tensions in the tech industry as companies seek to balance innovation, safety, and financial returns. Sam Altman’s leadership has brought the company to the forefront of AI development, but at what cost? With key figures departing, safety concerns growing, and the company’s mission shifting, the question remains: What is the future of OpenAI? Will it continue to lead the AI revolution, or has it lost sight of its original purpose?

    This article includes quotes from social media posts, corporate blogs, and various news sources, including The Financial Times, The Verge, Vox, Fortune, and The Wall Street Journal.
