OpenAI Co-Founder Involved in Boardroom Shakeup, Launches New Venture in “Safe Superintelligence”

(Photo: Sanket Mishra / Pexels)

Ilya Sutskever, a co-founder of OpenAI who participated in an unsuccessful attempt to remove CEO Sam Altman, has announced plans to launch a company focused on developing artificial intelligence with an emphasis on safety.

In the fast-moving field of artificial intelligence (AI) research, Sutskever's name is now tied to both OpenAI and his new venture, Safe Superintelligence Inc. As a co-founder and former chief scientist of OpenAI, he was a pivotal figure in the field's recent advances, particularly the large language models exemplified by ChatGPT.

Safe Superintelligence Inc.

Sutskever, a renowned AI researcher who departed OpenAI last month, announced Wednesday on social media that he and two co-founders have established Safe Superintelligence Inc., a company dedicated exclusively to the safe development of superintelligence.

In a prepared statement, Sutskever and his co-founders, Daniel Gross and Daniel Levy, emphasized the company's commitment to avoiding "management overhead or product cycles" and outlined a business model in which safety and security work is insulated from short-term commercial pressures. The trio also noted that Safe Superintelligence is an American company with offices in Palo Alto, California, and Tel Aviv, Israel, locations chosen to help it recruit top technical talent.

As outlined by Sutskever, Safe Superintelligence (SSI) is dedicated to developing AI that is both safe and powerful. Safety and capabilities are treated as concurrent technical challenges to be solved through breakthrough engineering and scientific advances, and SSI plans to push the boundaries of AI capabilities as quickly as possible while ensuring that safety always stays ahead.

Safe Superintelligence Inc. resembles OpenAI's original concept: advancing artificial general intelligence (AGI) beyond human capabilities. Unlike OpenAI, however, which went on to work closely with Microsoft and build revenue-generating products, Safe Superintelligence will concentrate solely on research and development, with no immediate plans for commercialization.

Investing in the startup carries considerable risk, as its success hinges on Sutskever and his team achieving a breakthrough before better-resourced competitors do. The "superintelligence" in the company's name refers to an AI system that would surpass human intelligence, a speculative and exceptionally ambitious goal.


Involvement in Sam Altman's Ouster

Sutskever took part in last year's failed boardroom attempt to remove Altman, an episode of internal turmoil that raised questions about whether OpenAI's leadership was prioritizing business opportunities over AI safety.

At OpenAI, Sutskever co-led a team dedicated to ensuring the safe development of artificial general intelligence (AGI), AI systems that would surpass human capabilities. Upon his departure, he said he had plans for a project of significant personal meaning but did not disclose specifics, adding that leaving OpenAI was his own decision.

Shortly after Sutskever's departure, Jan Leike, who co-led the team with him, also resigned, criticizing OpenAI for prioritizing flashy products over safety. OpenAI later responded by forming a safety and security committee staffed predominantly with company insiders.

