
Scholar argues for an approach to regulating artificial intelligence centered on mental health.
A study by the American Psychological Association (APA) found that 41 percent of teenagers who report excessive social media use suffer from poor mental health, compared with 23 percent of teens who use social media less frequently. These data underscore growing concern among scholars and policymakers about the mental health effects of social media technology driven by artificial intelligence (AI).
In a recent book chapter, Przemysław Pałka, an assistant professor of law at the Jagiellonian University, analyzes the European Union’s Artificial Intelligence Act (AIA), which provides the EU’s regulatory framework for AI, and its potential impact on mental health. Citing empirical research that links AI-powered products, such as social media, chatbots, and video games, to adverse effects on users’ mental health, Pałka underscores the urgency of considering psychological harm in AI legislation.
Pałka cites two key motivations for focusing on mental health in the AIA. First, he observes that the AIA references “psychological harm” when condemning certain uses of AI, signaling that the drafters intended to focus on, or at least consider, mental health in the Act. Second, Pałka notes that the AIA—the first regulation of AI to span multiple sectors and industries—may serve as a template for other governments seeking to regulate the technology. He urges lawmakers and scholars to scrutinize its provisions closely to ensure that the framework is ironclad.
To provide context, Pałka first explains that the AIA is a component of the EU’s broader product safety legislation. According to Pałka, the Act adopts a tiered regulatory framework that categorizes AI uses into four levels of risk: unacceptable, high, limited, and minimal.
Most regulatory attention, he explains, is devoted to high-risk systems such as biometric identification and law enforcement tools. Pałka notes, however, that many AI systems with significant consumer impact—such as those used in content moderation, advertising, and price discrimination—are excluded from the high-risk classification and left unregulated. And even when an AI system does fall into the high-risk category or above, Pałka warns, the AIA intervenes only under narrow circumstances.
Prohibited uses of AI are outlined under Article 5, Pałka explains. The Act, he notes, restricts AI programs that use subliminal techniques to distort behavior or that exploit user vulnerabilities related to age or disability. Pałka emphasizes, however, that a service provider is liable only if it actually caused demonstrable psychological harm to users. Penalties for violations, such as fines of up to €30 million or 6 percent of annual turnover, are all but useless, Pałka argues, because the Act critically fails to define what “psychological harm” actually is.
Making matters worse, Pałka claims that there is little clinical consensus on the definition of psychological harm for the AIA drafters to reference. Some scholars, according to Pałka, equate psychological harm with severe emotional distress or trauma. Others claim that psychological harm includes emotions or conditions such as fear, sadness, or addiction. Pałka warns that without clear guidance from the AIA, the burden of defining psychological harm may shift to private entities with financial incentives to downplay its significance.
Rather than struggle to define the scope or meaning of psychological harm after the fact, Pałka suggests, policymakers should focus on preventative mental health protection. That is, policymakers should adopt a standard of “good mental health” and regulate technology that could undermine it or contribute to the development of a psychological problem.
Pałka adds that mainstream psychiatry offers a clear clinical definition of “good mental health.” He cites the World Health Organization’s widely accepted definition of good mental health as the “ability to cope with stresses, work fruitfully, and contribute to one’s community.” Risks to mental health, according to Pałka, include any interaction that increases the likelihood of developing a disorder. A reduced ability to cope with stress, decreased work productivity, or interference with community contribution, Pałka explains, all signal declining mental health or the potential development of a disorder.
If policymakers adopt this alternative standard, Pałka argues, companies producing algorithms proven to reduce cognitive function—such as applications with addictive designs meant to keep users endlessly engaged—could be penalized or barred from consumer distribution before those applications have the chance to cause real harm. In short, Pałka contends that the “mental health” standard would give the AIA sharper teeth to protect users earlier.
Pałka acknowledges, however, that transforming the current AIA framework would be tremendously difficult. He therefore offers alternative—although admittedly imperfect—solutions that policymakers could adopt while remaining within the Act’s current standard.
First, Pałka explains that the AIA could simply broaden the categories of high-risk AI systems to include applications such as content moderation, advertising, and price discrimination.
Alternatively, Pałka suggests that policymakers could tailor AI restrictions to the severity of the potential psychological harm an AI system causes—if they can define it. For example, Pałka argues that AI systems that contribute to eating disorders or self-harm should be subject to stricter regulation than those that cause internet addiction, which could be mitigated through age restrictions or mandatory warnings similar to those used with cigarettes and alcohol.
Finally, Pałka questions whether AI-specific regulation is the best method of policymaking, arguing that general tort or consumer law might prove more effective. He warns, however, that legal precedent has its limits. Existing case law, according to Pałka, offers abundant analysis of physical harm, but mental harm—until recently—was treated less seriously and plagued by societal taboo.
Although the AIA represents an important step toward regulating AI, Pałka argues that its current provisions fall short in addressing critical risks to consumer mental health. By failing to define psychological harm, excluding key AI applications from high-risk classifications, and relying on ambiguous language in key provisions, he contends, the AIA leaves significant gaps in its regulatory framework. Pałka calls on policymakers to seize the opportunity—in a moment when public demand for AI regulation is loud and clear—to craft more effective and comprehensive solutions.