Does the First Amendment Protect AI-Generated Speech?

Regulating artificial intelligence disinformation could test the First Amendment’s limits.

This election year, as Americans on different sides of political issues shout past one another, the “voices” of artificial intelligence programs could have a great say in who gets elected to federal office.

Advances in generative artificial intelligence, experts warn, could unleash a wave of misinformation and propaganda this coming U.S. presidential election cycle, threatening American democracy. It may seem self-evident that the U.S. Congress has the mandate to address the problem and, in fact, several national lawmakers have proposed bills to combat false or misleading AI-generated “speech.” But the First Amendment might stop these lawmakers in their tracks.

The First Amendment prohibits the government from “abridging the freedom of speech.” “Speech,” as the U.S. Supreme Court has interpreted the term, refers not just to the written or spoken word, but also to art, films, parades, and other forms of expression. Until now, courts have applied the free speech clause to forbid government restrictions on human expression.

But given that the First Amendment protects speech rather than speakers, there is no textual basis for applying different rules depending on whether the source of that speech is natural or artificial.

Indeed, much of the Supreme Court’s First Amendment doctrine treats speech abstractly. The First Amendment, on this view, protects the exchange of ideas in public discourse—or listeners’ access to information—independently of anyone’s right to speak.

In Red Lion Broadcasting Co. v. FCC, for instance, the Supreme Court affirmed the public’s right of “suitable access” to a variety of social, political, aesthetic, and other information. To this end, the Court explained, the First Amendment’s purpose is to “preserve an uninhibited marketplace of ideas,” where different viewpoints compete for acceptance and the “truth will ultimately prevail.”

These principles have led lower courts to strike down laws regulating the information that enters public discourse. For instance, in 281 Care Committee v. Arneson, a 2014 federal appellate court decision, the court invalidated a Minnesota law that restricted false or misleading speech by politicians in their campaign materials. The court reasoned that citizens, not the government, are best positioned to judge the veracity of the content and source of political speech, especially in light of “counterspeech” expressing opposing viewpoints.

And based on these same principles, some scholars argue that the public has a right to access AI-generated content that listeners deem relevant to their political or moral decisions. This right, according to these scholars, extends to completely autonomous AI outputs that humans have not influenced in any meaningful way.

Insofar as AI “speakers” provide information to human listeners, these scholars argue, the contributions AI makes to public discourse should receive the same First Amendment protection as human speech. It is not for the government to decide what ideas, or what sources, the public hears.

It follows, these scholars contend, that laws banning AI-generated speech based on its content, especially in the political arena, are presumptively unconstitutional. For instance, they assert that the government could not enact a law restricting AI-generated articles or social media posts that deny climate change or advocate an uprising against the government.

Some commentators, however, argue that unrestrained AI-generated speech could be catastrophic to democracy. They emphasize the many ways generative AI is poised to transform the information ecosystem, challenging the assumption that citizens can distinguish quality information from junk and truth from falsity.

This assumption fails, some experts say, when public discourse is saturated with conspiracy theories, junk science, or highly realistic “deepfake” videos, all masquerading as quality information from reputable sources. Although lies and propaganda are hardly new to political discourse, these experts argue, generative AI programs such as ChatGPT and Midjourney can make misinformation campaigns much more efficient and effective, as well as harder to detect.

Generative AI’s potential for deception is clear from events within the past year. High-quality fake images and videos of President Joseph R. Biden and former President Donald J. Trump, for example, have circulated in apparent smear campaigns. In one incident with real-world impact, a reportedly AI-generated image of an explosion near the Pentagon went viral, briefly sending the stock market tumbling. Such incidents could become far more common, some researchers suggest.

More subtly, news sites and social media accounts populated entirely by AI-generated content could sway political discourse with deceptively well-reasoned narratives supported by fake photo and video evidence, researchers report.

In addition, AI bots can increasingly target specific listeners. By learning the patterns in an individual’s or group’s speech and behavior, a bot can effectively imitate anyone and calibrate its message to the audience’s unique susceptibilities.

Given this ability to customize speech for a given audience, generative AI systems trained to spout certain ideological views could reinforce political echo chambers and worsen partisan biases, researchers suggest. They warn that generative AI could erode any remaining common ground among citizens of different ideological persuasions.

Fact-checking efforts and “counterspeech” meant to neutralize misinformation and propaganda may be overwhelmed by the speed and scale at which AI-generated content could propagate, experts argue. In the long term, a diffusion of deceptive content and an inability to identify it as such could undermine people’s sense of reality and confidence in the political system.

It is difficult to predict what effect generative AI will have on the elections this year and beyond. In any event, legislators seeking to protect the “marketplace of ideas” may need to contend with precedents, such as Care Committee, that assume an alert citizenry and some amount of accurate, honest information in public discourse can counteract false political speech.

There may, however, be a middle ground between prohibiting generative AI from contributing to public discourse and giving it free rein: labeling requirements.

The AI Labeling Act, which U.S. Senators Brian Schatz (D-Hawaii) and John Kennedy (R-La.) introduced last October, would require AI-created images, video, and other media to carry a disclosure of their AI source. According to Senator Schatz, even if such a labeling requirement cannot guarantee a marketplace of ideas in which the truth will prevail, it may prevent a total marketplace failure, while preserving the public’s right to information.