Scarlett Johansson's objection to an AI voice that sounds like hers highlights identity risks and broader concerns about AI manipulation

From The Conversation: 2024-05-23 01:44:37

OpenAI has responded to Scarlett Johansson's objection that one of its ChatGPT voices sounds strikingly similar to her own. Johansson had declined to provide her voice for the system, and after the voice was released her lawyers demanded it be withdrawn. The episode highlights the identity harms AI can cause, and AI development more broadly raises concerns about assistants forming relationships with users.

Legislating AI is challenging, prompting calls for a right to identity that would protect against harms such as deepfakes. AI can manipulate content to damage a person's reputation or place them in false scenarios, and existing legal protections are insufficient to address these identity harms, leaving individuals vulnerable to malicious uses of the technology.

US laws offer more protection against identity misappropriation than Australian laws do. Sound-alike cases such as Bette Midler's suit against Ford provide legal precedent for protecting likeness rights. Australian law lacks explicit protections for public figures against identity misappropriation, highlighting the need for a rights-based approach similar to the EU's dignity-focused model.

Identity or personality rights are crucial in the digital age for protecting against identity theft and image manipulation. Scarlett Johansson's earlier legal actions in France illustrate the importance of these rights, which enable individuals to seek damages and injunctions to preserve their privacy, dignity, and self-determination as AI technology advances.



Read more at The Conversation: Scarlett Johansson’s row with OpenAI reminds us identity is a slippery yet important subject. AI leaves everyone’s at risk