Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Venture

June 20, 2024

One month after departing OpenAI, co-founder Ilya Sutskever announced on X the launch of a new company, Safe Superintelligence Inc. (SSI).

In the post, he said the startup would have “one goal and one product”: creating a safe and powerful AI system.

Sutskever added in another post on X, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.”

Headquartered in the U.S., with offices in Palo Alto and Tel Aviv, SSI brings Sutskever together with co-founders Daniel Gross, a former Y Combinator partner, and Daniel Levy, a former OpenAI engineer.

SSI says it has started the world’s first straight-shot superintelligence lab and is actively searching for talented professionals.

In a post, the startup said, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

SSI’s announcement describes the startup as pursuing safety and capability in tandem, enabling rapid advancement of its AI systems. According to The Verge, the announcement also highlighted the external pressures typically faced by AI teams at major companies such as Google, OpenAI, and Microsoft.

During his time at OpenAI, Sutskever played a crucial role in the company’s efforts to keep the eventual emergence of “superintelligent” AI systems safe. He worked closely on this effort with Jan Leike, co-leader of OpenAI’s Superalignment team. Both Sutskever and Leike departed OpenAI in May 2024 after a major disagreement with leadership over AI safety strategy. Since leaving, Leike has led a team at rival AI firm Anthropic.

In a blog post published last year, Sutskever and Leike predicted that AI more intelligent than humans could emerge within the next decade. They emphasized the need for research into ways to control and restrict such AI, warning that these technologies would not inherently prioritize safety or goodwill.

The announcement about this new venture stated, “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”

It continued, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”