Experts call for policies to govern self-harm monitoring technology employed by schools.
Suicide is the second leading cause of death among 10- to 24-year-olds in the United States. During the COVID-19 pandemic, educators were concerned about their inability to identify students at risk of self-harm without in-person interactions at school. To address this concern, thousands of school districts across the United States purchased monitoring software to flag digital activity containing self-harm content.
But this software is far from perfect. It may not be able to differentiate a student’s academic search about the late poet Sylvia Plath, who died by suicide, from the personal search activity of a student with suicidal ideation.
Self-harm monitoring software programs have not proven their ability to protect students, explain Sara Collins of Public Knowledge and her coauthors in a recent report released by the Future of Privacy Forum. Collins and her coauthors argue that absent careful regulation, this software may have unintended negative consequences for student privacy, equity, and mental health.
“Absent other support, simply identifying students who may be at risk of self-harm—if the system does so correctly—will, at best, lead to no results,” and, at worst, could trigger a harmful response, warn Collins and her coauthors.
Collins and her coauthors analogize self-harm monitoring software to the reviewing, flagging, and alerting done by credit card companies for suspicious transactions. Self-harm monitoring software used by schools works by scanning student activity on school-issued devices, including email, social media, documents, and other online communications. The software then either keeps a record of all activity for school administrators to search through or collects only flagged activities. When the software identifies a self-harm risk, it reports the activity to school employees, and, in some cases, alerts third parties such as law enforcement.
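The false-positive problem described above can be made concrete with a toy sketch of the flagging step. The snippet below is a hypothetical keyword scan, not any vendor’s actual algorithm (those are proprietary); it shows why a context-blind scan cannot distinguish an academic search about Sylvia Plath from a personal expression of suicidal ideation.

```python
# Toy illustration of keyword-based self-harm flagging.
# Assumption: real products use more elaborate (but still imperfect)
# matching; this sketch is only meant to show the context problem.

RISK_KEYWORDS = {"suicide", "self-harm", "kill myself"}

def flag_activity(text: str) -> bool:
    """Flag text containing any risk keyword, regardless of context."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in RISK_KEYWORDS)

# An academic search and a personal crisis message are flagged identically:
academic = "Sylvia Plath biography: the poet died by suicide in 1963"
personal = "I have been thinking about suicide lately"

print(flag_activity(academic))  # True, despite the academic context
print(flag_activity(personal))  # True
```

Because both searches trigger the same alert, any downstream response (a record, a counselor referral, a law-enforcement notification) falls on both students alike, which is the core of the report’s concern.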
This practice of surveillance and disclosure of student information has various legal implications, explain Collins and her coauthors. At the federal level, the Children’s Internet Protection Act (CIPA) requires schools that receive certain federal funding to monitor the activity of minors on any school-issued devices to protect them from accessing obscene material.
Collins and her coauthors explain that school districts have unevenly interpreted the extent of the monitoring required by CIPA and how the surveillance requirements might interact with other federal laws, such as the Family Educational Rights and Privacy Act (FERPA), which requires schools to prevent the disclosure of student records without parental consent. The Federal Communications Commission has not released any guidance on how CIPA or FERPA applies to the use of monitoring software.
This gap in guidance has serious privacy implications, according to Collins and her coauthors. “Without clarity on CIPA’s requirements, schools may unintentionally over-surveil and over-collect sensitive, personal information about students or their families in an attempt to comply with the law,” argue Collins and her coauthors. Some state cyberbullying laws may also require content filtering, further complicating the regulatory landscape.
This type of surveillance and disclosure also presents equity concerns related to disability discrimination, Collins and her coauthors explain. All schools must comply with the Americans with Disabilities Act (ADA). Furthermore, schools that receive federal funding—which is virtually all public schools—must also comply with Section 504 of the Rehabilitation Act. Both laws protect individuals with disabilities and perceived disabilities. The ADA defines a disability as “a physical or mental impairment that substantially limits one or more major life activities of such individual.”
Collins and her coauthors argue that flagging a student for self-harm would meet the definition of a perceived disability. Accordingly, when a school flags a student as at risk for a mental health issue that implicates their safety, the school might be legally obligated to provide privacy and non-discrimination protections for that student.
Collins and her coauthors also express concerns that this practice may violate Title IX protections against discrimination based on gender identity and sexual orientation. Title IX prohibits discrimination against individuals based on their sex or gender. Some monitoring software flags content such as “gay,” “lesbian,” and “queer” as a risk factor. LGBTQ+ students may be harmed by this data collection, Collins and her coauthors warn.
In one study, experts found that less than half of LGBTQ+ youth had shared their sexual orientation with school staff. Collins and her coauthors also explain that LGBTQ+ youth are more likely than their non-LGBTQ+ peers to seek information and resources about their identity on the internet. Collins and her coauthors recommend that school districts carefully craft policies related to LGBTQ+ identities. Specifically, they recommend that these policies limit how long schools retain records and limit disclosures, to prevent LGBTQ+ students from being marginalized and their private information from being shared.
Overall, Collins and her coauthors bring attention to the potential dangers of software meant to save lives. They recommend a balanced approach to self-harm monitoring, advocating that stakeholders from across school communities contribute to the development of guidance interpreting the various legal requirements implicated by this new software.
Without robust mental health resources, careful implementation, and compliance with anti-discrimination laws, self-harm monitoring software may do more harm than good, conclude Collins and her coauthors.