Ilya Sutskever’s AI Startup, Safe Superintelligence, Raises $1 Billion




Safe Superintelligence (SSI), the AI startup founded by former OpenAI chief scientist Ilya Sutskever, announced that it has raised $1 billion from NFDG, a16z, Sequoia, DST Global, and SV Angel.

SSI, with a current team of 10 employees, plans to use the funds to acquire computing power and hire top talent. The company aims to build a small, highly trusted team of researchers and engineers, with operations in both Palo Alto, California, and Tel Aviv, Israel, according to a Reuters report.

While the company declined to disclose its valuation, sources close to the matter put it at $5 billion. The funding shows that some investors are still willing to make outsized bets on exceptional talent pursuing foundational AI research, despite a broader decline in appetite for such companies, which can remain unprofitable for years; that trend has led several startup founders to leave for tech giants, the report added.

“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” SSI co-founder Daniel Gross said in an interview.

SSI plans to partner with cloud providers and chip companies to meet its computing power needs, though it has not yet decided which firms it will collaborate with. AI startups often rely on companies like Microsoft and Nvidia to support their infrastructure requirements.

Sutskever, an early proponent of the scaling hypothesis—which suggests that AI models improve with increased computing power—played a key role in sparking a surge of AI investments in chips, data centers, and energy. This foundation has enabled advances in generative AI, such as ChatGPT.

Sutskever said he will approach scaling differently than his previous employer did, but declined to provide further details.

“Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?” he said.

“Some people can work really long hours and they’ll just go down the same path faster. It’s not so much our style. But if you do something different, then it becomes possible for you to do something special.”

Sutskever founded Safe Superintelligence in June. The company, headquartered in Palo Alto with offices in Tel Aviv, is led by Sutskever, entrepreneur and investor Daniel Gross, and former OpenAI employee Daniel Levy. Gross previously co-founded the AI startup Cue, which Apple acquired in 2013 for $40-60 million.

SSI bills itself as the world’s first lab dedicated solely to one mission: building a safe superintelligence.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team,” said Sutskever.

The company emphasised that safety and capabilities will be addressed in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. SSI aims to advance capabilities as fast as possible while ensuring that safety always remains ahead.

Sutskever left OpenAI in May and was succeeded as chief scientist by Jakub Pachocki. Last year, reports surfaced that Sutskever was concerned about AGI safety and the rapid pace at which OpenAI was advancing, leading to tensions with OpenAI chief Sam Altman.

On November 17, 2023, Sutskever and other board members fired Altman. By November 21, 2023, the decision had been reversed and Altman was reinstated as CEO. Sutskever publicly expressed regret for his role in the ouster, stating that he never intended to harm OpenAI and deeply regretted his participation in the board’s actions.

The post Ilya Sutskever’s AI Startup, Safe Superintelligence, Raises $1 Billion appeared first on AIM.
