OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’

Ilya Sutskever, renowned for his groundbreaking work at OpenAI, has embarked on a new venture aimed at shaping the future of artificial intelligence. After departing OpenAI in May 2024, Sutskever joined forces with Daniel Levy, a fellow OpenAI alumnus, and Daniel Gross, who previously led AI efforts at Apple, to establish Safe Superintelligence Inc. (SSI). The startup is dedicated to creating advanced AI systems that prioritize safety and ethical considerations.
SSI's formation comes in the wake of significant events at OpenAI, including the controversial ousting of CEO Sam Altman in November 2023, a situation in which Sutskever played a pivotal role. Sutskever later expressed regret over the episode, an experience that likely influenced his decision to focus exclusively on the safe development of AI.
The Mission of Safe Superintelligence Inc.
SSI's mission is succinctly articulated on their website: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”
This mission underscores a dual commitment: advancing AI capabilities while ensuring those advances never outpace safety measures. The founders believe that a singular focus on this goal frees them from distraction by management overhead and product cycles, and insulates their progress from short-term commercial pressures.
A Singular Focus on Safe AI
One of the defining features of SSI is its unwavering focus on a single goal: developing safe superintelligence. This approach sets SSI apart from other major AI labs such as OpenAI, DeepMind, and Anthropic, which have diversified their research agendas over the years. SSI’s founders are confident that this concentrated effort will enable them to achieve their objectives more efficiently and effectively.
Sutskever’s work at SSI is a natural progression from his previous role at OpenAI, where he co-led the superalignment team. That team was responsible for developing methods to steer and control AI systems far more capable than the people overseeing them, a crucial task given the potential risks such systems pose. The group was disbanded shortly after Sutskever’s departure, highlighting how difficult it can be to sustain such efforts within larger organizations.
The Road Ahead
The journey towards safe superintelligence is fraught with challenges, both technical and philosophical. Critics argue that achieving safe AI is as much about addressing fundamental ethical questions as it is about engineering solutions. However, the impressive pedigree of SSI’s founding team lends significant credibility to their mission.
As SSI embarks on this ambitious endeavor, the broader AI community and industry observers will be watching closely. The stakes are high, and the potential impact of their work cannot be overstated. The founders' commitment to advancing AI capabilities while ensuring safety could pave the way for a new era of responsible AI development.
Conclusion
Safe Superintelligence Inc. represents a bold and necessary step forward in the quest to develop AI systems that are not only advanced but also safe and ethical. With Ilya Sutskever at the helm, supported by a team of seasoned experts, SSI is poised to make significant strides in this critical area. The success of SSI could redefine the landscape of artificial intelligence, ensuring that the future of AI is both innovative and secure.