
Ilya Sutskever’s AI Startup Safe Superintelligence Receives $1B in Funding
September 4, 2024
Safe Superintelligence (SSI), the AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion in funding to pursue its mission: building safe artificial intelligence systems capable of thinking faster and better than humans.
SSI will use the money to acquire more computing power and hire exceptional AI researchers and engineers. Investors in the $1 billion round include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, along with NFDG, the investment partnership of former GitHub CEO Nat Friedman and SSI executive Daniel Gross.
“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” said Gross, as reported by Reuters.
According to Gross, job candidates are extensively vetted during the hiring process to determine whether they have “good character.” The company, which currently has just 10 employees, is less interested in substantial credentials or deep industry experience than in candidates with “extraordinary capabilities.”
SSI will partner with as-yet-unnamed cloud providers and chip manufacturers to meet its computing power requirements. Sutskever believes that increased computing power will drive advances in AI performance.
Sutskever started SSI alongside Gross in June, with facilities in Palo Alto, California, and Tel Aviv, Israel. Prior to SSI, Sutskever was chief scientist at OpenAI, and Gross led AI development at Apple. SSI’s chief aim is to develop safeguards against rogue AI causing unforeseen harm to humanity, up to and including human extinction.
While SSI takes on the potential dangers of AI, lawmakers are also taking action. California legislators recently passed a bill, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requiring AI companies to rigorously test their technology for disaster scenarios before releasing it to the public. Companies that fail to comply risk being shut down by the state’s attorney general.
