AI Titans Meta, Microsoft, Nvidia Collaborate With White House On Safety Standards

From Nasdaq Inc.:

The Biden Administration is enlisting tech companies and banks to address the risks of artificial intelligence. The initiative includes the U.S. AI Safety Institute Consortium (AISIC), which unites AI developers with academics, researchers, and civil society organizations. The goal is to draft guidelines for AI development and deployment, along with approaches to risk management and other safety issues.

The guidelines include the use of watermarks to identify AI-generated audio and visual content. Meta’s Facebook and Instagram, along with Microsoft’s partner OpenAI, are among the companies stepping up AI regulatory efforts. The European Union is compiling guidelines for big tech companies to safeguard the democratic process from AI-generated misinformation and deepfake imagery.

Europe’s draft document highlights potential compliance costs for companies and raises concerns over possible implications for innovation. Companies worry that regulations may lag behind AI advancements.

The evolving artificial intelligence industry is outpacing regulators’ ability to legislate. Nigel Portman and Mike Williams, intellectual property lawyers at Marks & Clerk, express concern that regulations may be outdated before they come into force, given the industry’s exponential evolution. Meanwhile, the U.S. Cybersecurity and Infrastructure Security Agency and the U.K. National Cyber Security Centre have signed an agreement focused on protecting AI from hackers and rogue actors, built on a principle of “secure by design” that considers safety at the development stage of new AI deployments.

Read more: AI Titans Meta, Microsoft, Nvidia Collaborate With White House On Safety Standards