Ilya Sutskever launches Safe Superintelligence Inc. to develop safe artificial superintelligence
From Futurism: 2024-06-20 13:47:03
Ilya Sutskever, OpenAI cofounder and former chief scientist, is launching Safe Superintelligence Inc. to develop safe artificial superintelligence, aiming to achieve breakthroughs with a small team while preventing large-scale harm to humanity. According to Sutskever, the intention is to create a force for good grounded in values such as liberty and democracy.
There is speculation about a connection between Sutskever's new venture and past disagreements at OpenAI, particularly over safety concerns surrounding the Q* project. Sutskever remains vague about his reasons for founding the firm and about its revenue model, but cofounder Daniel Gross assures that capital won't be an issue. SSI's focus on higher-level AI research, combined with the founders' backgrounds, positions it as a notable OpenAI competitor with lofty ambitions.
Read more at Futurism: OpenAI Scientist Ousted After Failed Coup Against Sam Altman Is Starting a New AI Company