Every U.S. citizen must have the capacity to know if government videos are authentic or faked with AI, says White House tech policy director



As the public panics about deepfakes and wholly convincing scams enabled by generative artificial intelligence, the White House is trying to serve as a role model and watchdog for content authentication.

“When the government puts out an image or video, every citizen should have the capacity to know that it is the authentic material provided by their government,” said Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, at the Fortune Brainstorm AI conference on Monday.

Prabhakar touched on measures outlined in President Joe Biden’s Executive Order on AI. As part of the October order, Biden directed the Department of Commerce to develop guidance for content authentication and watermarking to demarcate AI-generated materials, with federal agencies then using those tools to set “an example for the private sector and governments around the world.” The Executive Order also requires major LLM providers to share the results of their safety tests with the federal government, among other measures to protect consumers from the threats of AI.

“Watermarking, so you know whether the media you’re looking at is authentic or not, is one piece of a much broader set of actions” that the federal government believes will help prevent AI-powered scams, Prabhakar said in an onstage interview with Fortune CEO Alan Murray. 
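
Neither the order nor Prabhakar’s remarks spell out how such authentication would work technically. One common building block, though, is a digital signature: a publisher signs each file with a private key, and anyone holding the published public key can check that the file has not been altered. The sketch below, written with the open-source Python cryptography package, is purely illustrative of that general idea, with hypothetical keys and file contents; it does not describe any actual government system.

```python
# Illustrative sketch only: one way a publisher could sign media so viewers can
# verify authenticity. This is NOT a description of any government scheme; the
# keys and file contents here are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Publisher side: sign the SHA-256 digest of the media file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(public_key: Ed25519PublicKey,
                 media_bytes: bytes,
                 signature: bytes) -> bool:
    """Viewer side: check the signature against the published public key."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # The publisher generates a key pair and releases the public key
    # through a trusted channel (e.g., an official website).
    publisher_key = Ed25519PrivateKey.generate()
    public_key = publisher_key.public_key()

    video = b"...original video bytes..."
    signature = sign_media(publisher_key, video)

    print(verify_media(public_key, video, signature))             # True: untampered
    print(verify_media(public_key, video + b"edit", signature))   # False: altered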

Though neither the order nor Biden provided significant additional detail on how watermarking would be implemented or how far it would extend, Prabhakar said the U.S. was an international role model for AI policy. “This executive order that the President signed at the end of October represents the first broad cohesive action taken anywhere in the world on artificial intelligence,” she said. “It really reflects our capacity to deal with this fast-moving technology.”

That said, the European Union recently released its Artificial Intelligence Act, which lays out a broad set of policies around AI in the private and government sectors.

The EU regulators’ actions address deeper concerns about the abuse, misuse, and malicious aspects of profit-driven large language model technology. When Fortune’s Murray asked Prabhakar about her greatest concerns regarding the abuse of large language model technology, the White House director pointed to training data. “The applications are raw, that means the implications and risks are very broad,” she said, adding that they can “play out sometimes over a lifetime.”

With her foreign counterparts hammering out the details of the European AI Act over the next couple of weeks, Prabhakar said the Biden executive order was about “laying the groundwork” for “future wins” in mitigating the risks of AI. She did not offer concrete details about what Americans can expect from future federal AI legislation.

But she noted that the federal government is developing various technologies to protect Americans’ privacy. This includes cryptographic tools advanced through a federally funded Research Coordination Network to protect consumers’ privacy, as well as the evaluation of consumer privacy techniques deployed by AI-centric corporations.




