Mitigating Algorithmic Harms

Scholars discuss how regulation could shape the impact of data privacy and technology on marginalized communities.

Technology continues to permeate societal structures at a rapid pace. Even as exciting advancements prompt questions about future frontiers, a dark side of tech has emerged as a threat to equity, privacy, and civil rights.

Machine learning, artificial intelligence (AI), and websites and apps that collect consumer data present many potential avenues for discriminatory outcomes. The unprecedented growth of this technology poses challenges for regulators in their attempts to keep the law current. The absence of federal privacy laws disproportionately impacts marginalized communities, who suffer the greatest harms from overly invasive technology.

The lengthy history of surveillance of communities of color lends context to the discussion of data privacy and civil rights. What began as government efforts to track civil rights leaders now takes the form of digital surveillance. U.S. Immigration and Customs Enforcement (ICE), for example, employed a data analytics company to build profiles of and track undocumented immigrants. Police precincts are using AI algorithms to develop “predictive policing” systems likely to increase over-policing in majority-minority neighborhoods.

Beyond surveillance concerns, discrimination stemming from emerging technologies has spread into the housing and employment spheres. Automated decision-making systems that assess tenants or job applicants often deny applicants of color on the basis of biased algorithms. Because programmers build technologies such as AI and machine learning, their biases can easily become embedded in what should be a neutral decision-making process.

With so much potential to perpetuate discrimination, what can be done to curb technology’s hidden harms, and how can regulators respond proactively? The Federal Trade Commission is laying the groundwork for rulemaking on commercial surveillance and data security, which scholars hope will bolster data collection protections for marginalized communities. The proposed American Data Privacy and Protection Act could be another powerful step toward solidifying privacy as a civil right for all, as the bill would set requirements for AI bias testing and limit the amount of user data companies can collect.

In this week’s Saturday Seminar, scholars discuss the pressing need for federal privacy protections for marginalized communities.

  • In a report released by the Brookings Institution, Samantha Lai and Brooke Tanner explain that, in the modern technological landscape, interested parties can access and monetize personal data at an “unprecedented scale” due to limited regulatory safeguards. Privacy violations can lead to discrimination, exclusion, and physical danger for marginalized groups, argue Lai and Tanner. They support the implementation of comprehensive federal privacy legislation to protect individuals seeking reproductive care, whose online activity may otherwise be used to justify abortion convictions. In addition, Lai and Tanner contend that the U.S. Congress should address other data collection policies, including those that enable discriminatory online advertising and racially biased police surveillance.
  • In a report by the United Nations High Commissioner for Human Rights, experts discuss how artificial intelligence can infringe upon the right to privacy and exert “catastrophic” effects on human rights if proper safeguards are not implemented. Because artificial intelligence relies on large data sets, businesses are motivated to engage in widespread data collection and monetization, explain the experts. The experts note that businesses typically carry out these monetized data transactions without public scrutiny and with insufficient oversight from existing legal frameworks. The complex decision-making processes of artificial intelligence systems allow them to identify patterns that are difficult, if not impossible, for humans to explain, thus thwarting typical means of ensuring effective accountability when those systems cause harm, argue the experts. The experts suggest that one key element of addressing the global data environment is the establishment of independent data privacy oversight bodies with effective enforcement powers.
  • LGBTQ+ communities face increasing data privacy risks that warrant heightened attention, argue Chris Wood of LGBT Tech and several coauthors in a report cosponsored by The Future of Privacy Forum and LGBT Tech. In the absence of a comprehensive privacy law, sector-specific federal laws, such as the Health Insurance Portability and Accountability Act and the Family Educational Rights and Privacy Act, create protections for LGBTQ+ communities. Although supplemented by self-regulatory frameworks, U.S. state frameworks, international regulations, and anti-discrimination laws, these safeguards prove insufficient to regulate the collection and use of information about sexual orientation and gender identity, according to Wood and his coauthors. The Wood team urges organizations to treat such data with heightened sensitivity, standardize processes to inventory and categorize data, consider de-identification mechanisms, and provide support for novel anti-discrimination efforts.
  • In an article published in the Health and Human Rights Journal, Sharifah Sekalala of the University of Warwick and several coauthors explore the human rights dimensions of new surveillance technologies that have emerged during the COVID-19 pandemic. Although pandemic surveillance has served the important purpose of controlling the spread of the virus, Sekalala and her coauthors argue that such technology employs personally identifiable and sensitive public health data and raises rights-based concerns. Sekalala and her coauthors suggest that states conduct risk assessments to ensure evidence-based decision-making and add a sunset clause—a provision causing a law to expire automatically after a fixed period—to any laws that allow digital public health surveillance.
  • In a report issued by New America, Christine Bannan and Margerite Blase, formerly of New America’s Open Technology Institute, examine the increasing use and associated harms of algorithmic tools in criminal justice, education, and employment decisions. Even when programmers do not intend to incorporate bias into their programs, argue Bannan and Blase, the sensitive data related to race and gender that AI systems store often results in discriminatory harms. To address these harms and create a series of standards, Bannan and Blase call for legislation mandating greater transparency in building algorithmic systems, as well as periodic impact assessments and audits to identify anticipated harms or flaws in the software.
  • Mass surveillance is on the rise, argues Jay Stanley in a report released by the American Civil Liberties Union. Stanley focuses on Flock, a private security camera company that allows consumers to create their own lists of suspicious vehicles that pass by their cameras. Flock compiles the data into a centralized database, which Stanley warns could mirror the surveillance efforts that have led to the detention and deportation of undocumented immigrants. Although it is a private company, Flock works in tandem with law enforcement agencies without community approval, raising concerns among marginalized communities about over-policing. Stanley argues that rulemakers should enact strict parameters for private companies working with law enforcement.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.