Former OpenAI Scientist’s New AI Startup Raises $1 Billion, Aiming to Build Safe Superintelligence

Written by Rich Ord
In a bold move that signals the continued appetite for artificial intelligence (AI) innovation, Safe Superintelligence Inc. (SSI), a startup co-founded by former OpenAI Chief Scientist Ilya Sutskever, has raised a staggering $1 billion in its seed funding round. This remarkable milestone, achieved just months after SSI’s formation in June 2024, highlights the growing emphasis on both AI capabilities and safety as the industry grapples with the potential risks posed by advanced machine learning models.

“We’ve started the world’s first straight-shot SSI lab with one goal and one product: safe superintelligence,” Sutskever said, explaining the singular focus of his new venture. The ambitious company aims to develop artificial intelligence that surpasses human intelligence while ensuring these advancements remain aligned with human values and safety concerns. The valuation of the three-month-old startup at $5 billion is a testament to the high expectations investors have for this emerging player.

A Shift in Focus: Safe AI at the Forefront

Sutskever’s departure from OpenAI earlier this year came after a period of internal strife, which included the controversial ousting of CEO Sam Altman. While Sutskever expressed regret over his role in the decision, his move to form SSI marks a definitive pivot. He left behind a company increasingly focused on monetizing AI technology in favor of building a research-centric startup solely committed to AI safety.

“Our singular focus means no distraction by management overhead or product cycles,” SSI wrote in its mission statement. “Our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

The emphasis on safety sets SSI apart from other AI startups, many of which prioritize commercial applications and consumer-facing products. The AI community has long debated how to balance the rapid development of AI capabilities with the imperative to mitigate risks. SSI aims to strike that balance by advancing AI technology while ensuring that safety protocols remain a step ahead.

Investors Flock to SSI’s Vision

SSI’s funding round attracted some of the biggest names in venture capital, including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. Nat Friedman, who co-leads the NFDG partnership with SSI CEO Daniel Gross, was also a key investor. This massive infusion of capital comes amid a broader trend of venture capitalists betting heavily on AI, particularly on startups with high-profile founders and technical expertise.

According to industry insiders, the $1 billion funding round reflects confidence not just in the startup’s potential, but in Sutskever’s pedigree. As a co-founder of OpenAI and one of the key minds behind GPT-4, Sutskever is seen as a leading authority in AI research. His departure from OpenAI, coupled with his commitment to safety, has galvanized investor interest.

“It’s not about the product—it’s about the person,” said a venture capitalist familiar with the deal. “Investors are backing the talent and the vision. Ilya Sutskever has already proven he can take AI to new heights, and with SSI, he’s positioned to push the boundaries even further, while keeping a focus on safety.”

The Challenge Ahead: Safety vs. Speed

SSI’s mission to build “safe superintelligence” is both ambitious and fraught with challenges. Sutskever’s team, which currently consists of just 10 employees, is split between Silicon Valley and Tel Aviv. Much of the $1 billion funding will go toward acquiring computing power and hiring top-tier talent, a necessity given the computational demands of training large-scale AI models.

However, SSI’s focus on safety may place it in direct competition with other AI firms that are pushing the boundaries of AI capabilities without the same level of oversight. OpenAI, for instance, has continued to forge ahead with its commercial ventures, including partnerships with Microsoft, while maintaining its long-term goal of achieving artificial general intelligence (AGI). Meanwhile, competitors like Anthropic, founded by former OpenAI employees, have taken a similar approach to safety-focused AI development.

Critics of SSI’s approach argue that prioritizing safety could slow down innovation. As Brandon Purcell, an analyst at Forrester Research, put it, “The race to develop AI is intense, and safety measures, while crucial, can sometimes get in the way of progress. SSI’s challenge will be to balance these competing priorities—ensuring safety without losing its edge in the innovation race.”

A $1 Billion Bet on the Future of AI

Despite these concerns, the $1 billion funding round is a clear signal that investors believe in the long-term potential of safe AI. For venture capitalists, the decision to back SSI represents a bet on both the future of AI and the importance of maintaining ethical and safety standards in its development.

“We see this as the next frontier of AI development,” said one investor involved in the funding round. “The question isn’t whether AI will surpass human intelligence—it’s how we ensure that when it does, it remains aligned with human interests. SSI is leading that charge.”

SSI’s singular focus on safety comes at a time when governments and regulators are increasingly scrutinizing the AI industry. In California, for instance, lawmakers are considering a bill that would impose stringent safety regulations on AI companies, a move that has divided the tech community. While companies like OpenAI and Google have expressed concerns about the potential for overregulation, others, including SSI, have embraced the idea of greater oversight.

This Isn’t About Scaling Quickly

In the coming months, the startup plans to scale its operations and recruit top-tier researchers and engineers dedicated to advancing AI safety. “We’re building a small, trusted team of the world’s best talent,” Gross said in a statement. “This isn’t about scaling quickly—it’s about scaling safely.”

For Sutskever, the journey from OpenAI co-founder to leader of a billion-dollar startup has been a whirlwind. But with SSI, he is determined to chart a new path, one that prioritizes safety and long-term progress over short-term gains. As he put it on social media following the announcement of the funding round: “Mountain: identified. Time to climb.”

The challenge ahead is immense, and the outcome could change the world. Safe superintelligence, if achievable, could revolutionize the way humans interact with machines, unlocking new possibilities for AI to solve complex global problems. But for Sutskever and his team, the journey to get there will require not just technical expertise but a relentless focus on ensuring that AI remains a force for good.
