OpenAI head of alignment resigns citing safety concerns, calls for shift towards safety-first approach
From India Today: 2024-05-20 00:50:03
Jan Leike, head of alignment at OpenAI, resigned on May 17, 2024, ending a significant era of AI safety research at the company. He announced his departure in a series of tweets, highlighting his team's accomplishments, such as the launch of RLHF-trained language models and advances in interpretability. Leike expressed his love for the team but said he was leaving over concerns about the company's focus on AI system control, safety, and societal impact.
Leike's departure reveals deep concerns about OpenAI's focus on AI system control and safety. He worried that future generations of AI models would not receive the attention and resources needed for security, monitoring, safety, alignment, and societal impact. He called for OpenAI to become a safety-first AGI company and urged critical cultural changes to prioritize safety over "shiny products."
Leike's resignation underscores the importance of carefully considering the safety and ethical implications of AI development. He urged OpenAI employees to embrace the cultural changes necessary for building AGI, warned of the dangers of developing machines smarter than humans, and stressed the company's responsibility to humanity, calling for a far more serious approach to building AGI.
Published by Ankita Chakravarti on May 20, 2024.
Read more at India Today: OpenAI top executive resigns, says safety has taken a backseat at the company