OpenAI Whistle-Blowers Describe Reckless and Secretive Culture
From The New York Times: 2024-06-05 23:33:45
A group of nine current and former OpenAI employees is speaking out against what they describe as a dangerous and secretive work culture at the company. They allege that OpenAI is prioritizing profits over safety as it races to build powerful artificial general intelligence (A.G.I.) systems.
The insiders claim that OpenAI has used restrictive nondisparagement agreements to prevent workers from voicing concerns about its technology. They are calling for greater transparency and stronger whistle-blower protections at leading A.I. companies, including OpenAI.
The campaign follows the departures of two senior A.I. researchers, Ilya Sutskever and Jan Leike, who left OpenAI over concerns about the risks of powerful A.I. Former employees also say the company's safety culture has been sidelined in favor of product development.
Daniel Kokotajlo, a former researcher at OpenAI, puts the chance of A.G.I. arriving by 2027 at 50 percent and the chance that advanced A.I. could harm humanity at 70 percent. He has raised alarms about the lack of risk-mitigation protocols at OpenAI, citing examples of safety concerns being overlooked.
OpenAI has also faced legal battles with creators over the use of copyrighted works, as well as a public dispute with Scarlett Johansson over voice imitation. The company is forming a new safety and security committee to evaluate the risks associated with its A.I. models.
Read more at The New York Times: OpenAI Whistle-Blowers Describe Reckless and Secretive Culture