Is AI-Facilitated Gender-Based Violence the Next Pandemic?

The rise of deepfakes and other AI-generated misinformation presents a direct threat to women’s freedom.

The rise of gender-based online violence amounts to a direct attack on women’s freedom of expression around the world, especially that of women journalists and human rights defenders. The consequences include de-platforming women’s voices, undermining equal access to the digital public space, and creating a chilling effect on democratic deliberation that falls hardest on women journalists and women human rights defenders.

This year, with some 64 elections taking place around the world, it has never been easier to produce false video, audio, and text with deep-learning AI. During the last Slovak parliamentary election campaign, for example, deepfakes appeared for the first time: a fabricated video featuring journalist Monika Tódová and party chairman Michal Šimečka reached thousands of social media users just two days before the election. This use of artificial intelligence to discredit a journalist and undermine election integrity, the first such instance in the European Union, offered a worrying preview of how so-called deepfakes may be used in the future.

Deepfakes pose a direct threat not only to information integrity but also to women’s voices. The virality of deepfake images of Taylor Swift, viewed over 45 million times on social media last January, revealed the potentially enormous impact of the new technology on women’s online safety and integrity. One study found that 98 percent of deepfake videos online are pornographic and that 99 percent of those targeted are women or girls.

In addition, 73 percent of women journalists have faced online harassment, according to a 2020 UNESCO report. Among those targeted, 20 percent reported being attacked or abused offline in connection with the online violence. Gender-based disinformation, often designed to go viral, is a key component of these attacks and aims to discredit the journalists it targets. MIT researchers concluded that “falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude.” Specifically, they found that falsehoods travel six times faster than truths.

Online gender-based violence is the flip side of digital authoritarianism. Indeed, one of the authors of this essay, an expert on the treaty body to the Convention on the Elimination of Discrimination against Women (CEDAW), has argued elsewhere that digitized violence is the “newest category of gender-based violence” and has called on governments to address coded gender violence, especially as it bears on the safety of women human rights defenders and women journalists, including online journalists. Authoritarian states can employ anti-feminist narratives and policy measures to justify the oppression of marginalized groups.

In the Philippines, for example, Nobel Peace Prize winner Maria Ressa has decried online harassment as “death by a thousand cuts,” stating that nothing had prepared her for the dehumanizing storm of gendered online violence directed at her over half a decade. At one point, Ressa received more than 90 hate messages per hour on Facebook. In recent years, she has focused on the responsibility of social media platforms, which monetize hate speech and misogyny, as documented by She Persisted.

In the face of an explosion of online violence against women, especially in the context of litigation, justice departments, including the U.S. Department of Justice, need to address the panoply of digital attacks, including doxing, deepfakes, and other forms of misogynistic abuse used to intimidate participants in legal proceedings and undermine the fairness of trials.

In the last 10 years, there has been a shift toward better protections for victims of non-consensual pornography. When the problem first arose, those targeted had no legal recourse.

So far, legislation has focused on either AI or gender, but it should also address the interconnections between the two. The EU AI Act, for example, is the world’s first comprehensive AI regulation, while the EU has separately developed new rules to combat gender-based violence, criminalizing physical, psychological, economic, and sexual violence against women across the EU, both offline and online. These rules are an important part of a global gender equality strategy, but they should be combined with legislation aimed specifically at AI-enabled abuses.

Tech companies must comply with international human rights standards. At a recent virtual summit on deepfake abuse, civil society organizations strategized responses, starting with the need to agree on a definition of deepfakes. One solution, for instance, would be to treat deepfakes as a violation of consent and to require developers to remove deepfake content from their training data. Search engines and AI developers could also devote resources to limiting users’ ability to access and distribute such content.
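
To make the idea of removing such material from training data concrete, the minimal Python sketch below filters an image folder against a registry of flagged file hashes. The registry contents and file names here are hypothetical, and production systems would rely on perceptual hashing (such as Microsoft’s PhotoDNA or Meta’s PDQ) to catch near-duplicates rather than the simple exact-match hashing shown.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of SHA-256 digests of files flagged as
# non-consensual deepfake material. In practice, platforms use
# perceptual hashes (PhotoDNA, PDQ) to also catch near-duplicates.
FLAGGED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_training_images(image_dir: Path) -> list[Path]:
    """Keep only images whose hash is absent from the flagged registry."""
    kept = []
    for path in sorted(image_dir.glob("*.jpg")):
        if sha256_of_file(path) in FLAGGED_HASHES:
            print(f"Excluding flagged file: {path.name}")
            continue
        kept.append(path)
    return kept

if __name__ == "__main__":
    clean_set = filter_training_images(Path("training_images"))
    print(f"{len(clean_set)} images retained for training.")
```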

The Global Network Initiative, a multi-stakeholder group convening civil society organizations and private tech companies, including Meta and Microsoft, has called for companies to respect and promote freedom of expression and comply with internationally recognized human rights, including the rights set out in the International Covenant on Civil and Political Rights (ICCPR). Furthermore, the Initiative has stated that the scope of Article 19(3) of the ICCPR must be read in light of further interpretations issued by international human rights bodies, including the Human Rights Committee and the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.

But the ICCPR alone is not enough to challenge gendered forms of online violence. The women’s rights convention, the Convention on the Elimination of Discrimination against Women, needs to be enforced alongside the ICCPR. Protecting tech whistleblowers is another necessary step toward addressing these digital attacks and holding big tech accountable, as the Signals Network has argued.

Online platforms also need to build AI resilience. Mitigating gender-based violence from the outset and implementing safety by design are necessary steps toward digital resilience. PEN America has developed concrete recommendations for social media platforms to mitigate the impact of online abuse without undermining freedom of expression.

Civil society organizations also recommend labeling deepfakes and red-teaming products before launch. Reporters Without Borders calls on platforms to hire more information professionals to supervise the training phase of large language models: content generated during training should be verified by media and information professionals rather than evaluated simply on the basis of its plausibility. Reinforcement learning through human reviewers, who rate a language model’s output for accuracy, toxicity, or other attributes, is another important mitigation tool, as sketched below.
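
The human-review loop just described can be illustrated with a small Python sketch. This is an illustration rather than any platform’s actual pipeline: reviewer names, score fields, and thresholds are assumptions. It aggregates hypothetical reviewer scores for accuracy and toxicity and accepts an output for further fine-tuning only when both clear a threshold.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    """One human reviewer's scores for a single model output."""
    reviewer_id: str
    accuracy: float   # 0.0 (wrong) to 1.0 (fully accurate)
    toxicity: float   # 0.0 (benign) to 1.0 (highly toxic)

def passes_review(ratings: list[Rating],
                  min_accuracy: float = 0.8,
                  max_toxicity: float = 0.2) -> bool:
    """Accept an output only if mean reviewer scores clear both thresholds."""
    if not ratings:
        return False
    return (mean(r.accuracy for r in ratings) >= min_accuracy
            and mean(r.toxicity for r in ratings) <= max_toxicity)

if __name__ == "__main__":
    sample = [
        Rating("reviewer_a", accuracy=0.90, toxicity=0.10),
        Rating("reviewer_b", accuracy=0.85, toxicity=0.05),
    ]
    print("Use for fine-tuning:", passes_review(sample))  # True
```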

Another solution is to implement crisis mechanisms at scale for journalists and human rights defenders. Today, when journalists and human rights defenders face severe online abuse, they often try to escalate their cases with social media platforms, their employers, or civil society organizations. But these escalation channels rely on personal connections, and recent staff turnover and reorganization at tech platforms make them unpredictable. Civil society organizations that support women’s rights need more reliable, efficient, timely, and structured escalation channels; indeed, many of them have petitioned the UN for such reforms.

For women journalists, building digital awareness is a priority. Online content from creators such as the Digital Dada Podcast, which raises awareness of digital literacy and gender-based violence in Kenya and across East Africa, should be scaled up among the community of journalists worldwide. Journalists could also receive training in open-source intelligence methods and in specific measures to detect deepfakes or make their photos harder to exploit for deepfake creation. For example, adding noise to images or applying filters that make slight edits can hinder deepfake tools, as illustrated below.
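
As an illustration of the noise-adding idea, the Python sketch below perturbs an image with low-amplitude Gaussian noise. File names and the noise level are assumptions; dedicated protection tools such as Fawkes or PhotoGuard compute targeted adversarial perturbations, which the plain random noise shown here only loosely approximates.

```python
import numpy as np
from PIL import Image

def add_protective_noise(in_path: str, out_path: str, sigma: float = 8.0) -> None:
    """Add low-amplitude Gaussian noise to an image.

    Small, barely visible perturbations can degrade the output of
    face-swapping models trained on the photo. This is a simplified
    stand-in for the adversarial perturbations that dedicated tools
    compute.
    """
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    noise = np.random.normal(loc=0.0, scale=sigma, size=img.shape)
    noisy = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(noisy).save(out_path)

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    add_protective_noise("profile_photo.jpg", "profile_photo_protected.jpg")
```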

At the end of the day, policymakers need to raise awareness about the linkages between anti-feminism, democratic backsliding, and digital authoritarianism. New developments in domestic and international norms must take these intertwined threats into account.

Elodie Vialle

Elodie Vialle is a Senior Advisor to PEN America and a journalist working at the intersection of journalism, technology, and human rights.