
Scholar argues that laws should hold tech platforms liable for harms from children’s screen time.
Technology permeates the lives of children today. A report finds that 96 percent of teenagers use the internet daily, and approximately half are online “almost constantly.” But study after study reveals the negative impacts of excessive screen time on children, including mental health issues, social isolation, and cognitive development problems.
Given these harms, scholars and policymakers urge legislative action to regulate addictive technologies—in particular social media platforms—and reduce children’s screen time.
In a recent article, Gaia Bernstein, a professor at Seton Hall University School of Law, identifies two legislative models used to regulate addictive technologies: the Tech Liability Model and the Parent Gatekeeper Model. Bernstein urges legislatures to adopt the Tech Liability Model to reduce screen time for minors.
Bernstein explains that laws under the Tech Liability Model hold technology companies responsible for keeping children safe online. For example, the Kids Online Safety and Privacy Act imposes a duty on technology companies to “exercise reasonable care” when creating their products to protect minors from harm. Legislators implementing the Tech Liability Model may also enact provisions requiring technology platforms to carry out risk assessments, ban design features likely to prolong screen time for minors, or restrict children under a certain age from creating accounts, Bernstein notes.
On the other hand, Bernstein observes that laws following the Parent Gatekeeper Model hold parents responsible for their children’s online activity. Policies such as parental consent requirements and parental controls on screen time for social media platforms fall under this model, she explains. The state of Louisiana, for example, requires social media companies to allow parents to set time limits, schedule breaks, and monitor their child’s privacy settings.
In addition, Bernstein explains that certain “hybrid” laws combine aspects of both the Tech Liability and Parent Gatekeeper models. One example is a Texas state law that bans addictive design features while also requiring online platforms to incorporate parental controls.
Bernstein argues that legislatures should adopt the Tech Liability Model. She suggests that the Parent Gatekeeper Model is flawed because it permits technology companies to evade responsibility for harms children face as a result of excessive screen time—harms the industry caused to begin with, according to Bernstein.
She argues that social media platforms implement addictive design features such as algorithms that expose children to hateful content and features based on the intermittent reward model—a psychological phenomenon by which the brain releases dopamine when people receive rewards on an inconsistent schedule. In the technology context, Bernstein explains, this model entices social media users to constantly “pull to refresh,” so they may be rewarded with a notification. These features prolong kids’ screen time, which allows technology companies to collect more user data and, in turn, generate revenue through more effective targeted advertising, Bernstein observes.
Faced with lawsuits over the impacts of these addictive designs on children, some platforms have introduced parental controls, along with “digital well-being tools,” to help users self-regulate their screen time, Bernstein notes. She maintains that technology companies have “warded off regulation” by showing that users possess sufficient tools to resolve their screen time-induced problems on their own. According to Bernstein, the Parent Gatekeeper Model similarly enables technology companies to evade responsibility by providing parents with tools to lower their children’s screen time.
Bernstein also argues that parents are ineffective at protecting their children from online harms. Parents are unlikely to read the details of complex parental consent agreements, keep up with updates to parental controls, or constantly monitor their children’s online activity, she observes. Furthermore, Bernstein claims that parents who fear isolating their children from peers on social networks will be reluctant to restrict their children’s time spent online.
Bernstein contends that the Parent Gatekeeper Model suffers from another vulnerability: It raises privacy concerns for children.
Bernstein rejects the common argument put forth by technology companies that laws imposing age verification mechanisms are the greatest threat to children’s privacy. Age verification systems have improved since their inception and can be further refined to eliminate privacy concerns, she observes. Moreover, she finds that age verification systems have proliferated in state law outside the context of regulating kids’ screen time.
Bernstein insists that the real threats to children’s autonomy and privacy stem from laws that grant parents access to their children’s online communications, especially about sensitive issues such as gender identity and political ideology. Bernstein maintains that such laws do not support the privacy rights that children—particularly older teenagers—possess against their parents.
In comparison, Bernstein argues that the Tech Liability Model better protects minors’ privacy rights. She explains that the Tech Liability Model restricts online platforms from implementing features that deliberately prolong screen time, limiting the amount of data these platforms can collect on children. Because the Tech Liability Model both protects privacy rights and effectively regulates addictive technologies, Bernstein suggests that the perceived conflict between these two goals is “illusory.”
Bernstein, however, acknowledges that a hybrid model that balances robust legislation imposing direct liability on technology companies with options for parental controls may also be successful.
But Bernstein emphasizes that timing is crucial: If legislatures adopt the Parent Gatekeeper Model first, technology companies could successfully argue that additional legislation under the Tech Liability Model is unnecessary.
Consequently, Bernstein urges legislatures to prioritize passing legislation incorporating the Tech Liability Model to reduce children’s screen time and eliminate the need for subsequent parental gatekeeping legislation. Bernstein also urges legislatures that select a hybrid model to ensure that the parental gatekeeping laws they enact minimize privacy risks and maximize the efficacy of complementary laws under the Tech Liability Model.


