
Restrictions on AI use may violate the First Amendment because they regulate speech based on its content.
Throughout the United States, state and federal lawmakers have proposed more than a thousand bills constraining the use of artificial intelligence (AI) this year. These restrictions include limitations on AI use at state universities, labeling requirements, and bans on various uses by federal agencies.
But are these restrictions legal?
In Ward v. Rock Against Racism, the U.S. Supreme Court established that to avoid violating the First Amendment of the U.S. Constitution, government speech restrictions must be “content neutral, serve a significant government interest, be narrowly tailored to serve that interest, and leave open ample alternative channels of communication.” Such lawful speech regulations are commonly referred to as “time, place, and manner” restrictions, which stand in contrast to proscribed viewpoint and subject-matter restrictions.
As more policymakers seek to restrict, limit, or otherwise regulate AI use at the state and federal levels, the question of whether a regulation of AI-generated content constitutes a time, place, or manner restriction or instead a viewpoint or subject-matter restriction is paramount. If AI restrictions are the former, they may be lawful, provided they also satisfy Ward's remaining requirements; if they are the latter, they run afoul of Supreme Court precedent and should be, and are likely to be, struck down by the courts.
Perhaps the best argument for categorizing AI-generation regulations as time, place, and manner restrictions is that they merely alter the presentation of the AI user's idea; they do not prevent the expression of the idea itself. The ability to express one's views without government impediment or intrusion is undoubtedly a core aim of the First Amendment. Viewed solely through the lens of protecting that aim, rather than the full sweep of protections the First Amendment provides, government regulation of AI use may seem reasonable.
This argument is not without its problems. For one, proscribing AI use would leave individuals who could not otherwise express their views effectively without an alternative avenue of expression. AI can also refine an individual's views and their presentation, making them more understandable and more likely to be accepted and disseminated by a publication. These, however, are not the most pronounced First Amendment problems with AI use restrictions.
AI content can rely on strong Supreme Court precedent for protection. In Police Department of Chicago v. Mosley, the Court held that "the First Amendment means that government has no power to restrict expression because of its message, its ideas, its subject matter, or its content." What is notable about this holding is its separation of speech into four components: message, ideas, subject matter, and content. Although the terms are similar, content is broader than subject matter, encompassing material that subject matter might not reach. Even if one argues that requiring the removal of AI content alters only the message, leaving ideas and subject matter untouched, there is no way around the final category: content.
Content-based speech regulations are presumptively unconstitutional, subject to narrow exceptions. For example, a so-called true threat, a "statement that frightens or intimidates" listeners into believing the speaker is about to injure them, can be proscribed. Obscene speech can likewise be proscribed based on its content. Although some AI-generated speech could fall under one of these narrow exceptions, any argument for broadly classifying it this way is absurd.
One last area that bears special consideration is the regulation of AI in primary and secondary public schools, at which AI use is becoming increasingly prevalent. In Tinker v. Des Moines Independent Community School District, the Supreme Court ruled that students do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate,” limiting restrictions on First Amendment rights to preventing significant disruptions or the impairment of others’ rights. In Hazelwood School District v. Kuhlmeier, however, the Court opened the door for allowing speech restrictions of “school-sponsored expressive activities” that “are reasonably related to legitimate pedagogical concerns.”
Notably, these cases concerned high schools, where students are minors with diminished rights and attendance is compulsory rather than voluntary, which distinguishes them from settings where participation is voluntary and participants are of legal age. Even so, it appears that some restrictions on AI use within educational environments may be allowable, but they must be directly related to legitimate pedagogical concerns. Policies that extend beyond this limited exception are likely to be unconstitutional.
As a result, even though students may not be able to use AI to answer the questions on their math test, many other governmental regulations of AI use may be on shaky legal footing.
