Meta on Wednesday said that advertisers will soon have to disclose when artificial intelligence (AI) or other software is used to create or alter imagery or audio in political ads.
The requirement will take effect globally on Facebook and Instagram at the start of next year, the platforms' parent company said.
“Advertisers who run ads about social issues, elections and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said,” Meta global affairs president Nick Clegg said in a Threads post.
Advertisers will also have to reveal when AI is used to create completely fake yet realistic people or events, according to Meta.
Meta will add notices to ads to let viewers know that what they are seeing or hearing is the product of software tools, the company said.
In addition, Meta’s fact-checking partners, which include a unit of AFP, can tag content as “altered” if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools, the company said.
Among the fears surrounding increasingly powerful AI tools is that they could be used to deceive voters during elections.
Microsoft this week announced new measures intended to help protect elections from “technology-based threats” such as AI.
“The world in 2024 may see multiple authoritarian nation states seek to interfere in electoral processes,” Microsoft chief legal officer Brad Smith and corporate vice president Teresa Hutson said in a blog post.
“And they may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems.”
Tools Microsoft plans to release early next year include one that enables candidates or campaigns to embed “credentials” in images or video they produce.
“These watermarking credentials empower an individual or organization to assert that an image or video came from them while protecting against tampering by showing if content was altered after its credentials were created,” Smith and Hutson said in the post.
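The article does not describe how Microsoft's credentials work internally, but the general idea behind such provenance schemes can be illustrated with a minimal sketch: the producer signs a digest of the media with a private key, and any later edit to the file causes verification against that signature to fail. The sketch below is an illustrative assumption using the Python `cryptography` package, not Microsoft's actual tool.

```python
# Illustrative sketch of content credentials (not Microsoft's implementation):
# sign a digest of the media at creation time, verify it later to detect tampering.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_credential(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign a SHA-256 digest of the media so the producer can later assert origin."""
    digest = hashlib.sha256(media).digest()
    return private_key.sign(digest)


def verify_credential(media: bytes, credential: bytes, public_key) -> bool:
    """Return True only if the media still matches the signed digest (untampered)."""
    digest = hashlib.sha256(media).digest()
    try:
        public_key.verify(credential, digest)
        return True
    except InvalidSignature:
        return False


# Usage: a campaign signs its video; any subsequent edit breaks verification.
key = Ed25519PrivateKey.generate()
pub = key.public_key()

original = b"...original video bytes..."
credential = issue_credential(original, key)

print(verify_credential(original, credential, pub))            # True
print(verify_credential(original + b"edited", credential, pub))  # False
```

In practice, production systems embed such signed metadata directly in the media file and bind it to the publisher's identity; the example above only shows the tamper-detection property the quote describes.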
Microsoft said it will also deploy a team to help campaigns combat AI threats such as cyber influence campaigns and fake imagery.