OpenAI safety leader shifts focus to new research project amid rising concerns about AI risks.

From Verdict Media Limited: 2024-07-24 06:31:25

OpenAI’s safety leader Aleksander Madry is shifting his focus to a new research project, with his safety duties temporarily passing to Joaquin Quinonero Candela and Lilian Weng. The move comes as OpenAI strengthens its safety team amid rising concerns about the risks associated with large language models. Countries worldwide are rushing to regulate AI as its use becomes more prevalent online and in business.

Backed by Microsoft, OpenAI recently established a Safety and Security Committee to guide ethical decisions as it moves towards releasing its latest AI model. The committee will provide oversight on safety and ethics as OpenAI sets its sights on achieving artificial general intelligence (AGI), a state in which AI surpasses human intelligence. GlobalData projects that the global AI market will surpass $1 trillion by 2030, growing at an estimated 39% compound annual growth rate from 2023. Tech sentiment polls suggest that more than 20% of businesses had already integrated AI into their operations by 2024.

Read more at Verdict Media Limited: OpenAI safety leader switches duties to work on ‘important project’