More than two dozen international civil society organizations will call on major tech firms to bolster their AI policies to combat “sexist and misogynistic” disinformation plaguing social media platforms, according to the draft of an open letter seen by AFP on Thursday.
The letter to the chief executives of six giants — Meta, X, YouTube, TikTok, Snapchat, and Reddit — follows an online boom in non-consensual deepfake porn as well as harassment and scams enabled by cheap, widely available artificial intelligence tools.
“It’s evident that these harms are not felt equally,” said the letter, signed by 27 digital and human rights organizations including UltraViolet, GLAAD, the National Organization for Women, and MyOwn Image.
“Specifically, women, trans people, and nonbinary people are uniquely at risk of experiencing adverse impacts of AI-based content on social media.”
The letter, which the groups said would be made public on Friday, makes a dozen recommendations to strengthen AI policies.
Those include clearly defining the consequences for posting non-consensual explicit material — including suspension of repeat offenders — implementing third-party tools to detect AI-generated visuals, and clearly labelling such content.
The groups also demanded a coherent procedure for users to flag and report harmful content, and that platforms carry out comprehensive annual audits of their AI policies.
The letter comes barely a month before what is widely billed as America’s first AI election on November 5. The tight race to the White House has seen a firehose of disinformation.
Democratic Party nominee Kamala Harris has been a particular target of gendered disinformation, including a flood of misogynistic and sexist narratives attacking the first woman, first Black, and first South Asian vice president in US history.
“These harms silence us online, violate our right to control our own image, and distort our elections,” said Jenna Sherman, the campaign director at UltraViolet.
“But worse, they normalize and even algorithmically codify sexual exploitation and reinforce harmful stereotypes about gender, sexuality, and consent.”
The proliferation of non-consensual deepfakes is outpacing efforts to regulate the technology globally, experts say, with several photo apps digitally undressing women and manipulated images fueling “sextortion” rackets.
While celebrities such as singer Taylor Swift and actress Emma Watson have been victims of deepfake porn, experts say women not in the public eye are equally vulnerable.
“AI technologies have further facilitated the creation and spread of gender-based harassment and abuse online,” said Ellen Jacobs, senior US digital policy manager at the Institute for Strategic Dialogue, which was among the organizations that signed the letter.
“We need effective policies that specifically address the heightened risks to women, girls, and LGBTQ+ people.”
The platforms did not immediately respond to a request for comment ahead of the release of the letter.
“The world’s largest platforms have shown they are not equipped to handle the rise of AI-facilitated hate, harassment, and disinformation campaigns, including deepfakes and bots that can spew hate-based imagery at massive scale,” said Leanna Garfield, social media safety program manager at GLAAD.
The platforms “need to take concrete action now, so that everyone can feel safe online.”