Algorithmic Decisions and Their Human Consequences

A Federal Trade Commissioner urges the agency to take action to prevent bias in computer algorithms.

Bias is so ubiquitous that it has crept into artificial intelligence.

When computer models make biased inferences about people, what governmental entity should step in to correct the damage these biases cause?

These algorithmic harms are the Federal Trade Commission’s (FTC) problem, according to FTC Commissioner Rebecca Slaughter. In a recent article, Slaughter argues that the FTC should use a combination of old and new tools to protect consumers from algorithmic harms.

Although algorithmic decision-making promises increased accuracy, Slaughter contends that it consistently produces negative outcomes for marginalized groups and that the FTC should intervene to protect consumers in high-stakes interactions, such as hiring, health care, and education.

Slaughter claims that the resulting harms implicate both consumer protection and competition. Although bias itself is not new, Slaughter writes that artificial intelligence and algorithms pose new challenges and new dangers. She argues that artificial intelligence platforms and algorithms have operated as “black boxes,” giving outsiders little insight into how they use inputs and reach decisions. The resulting technology both obscures and amplifies bias, she argues.

Slaughter identifies multiple algorithmic harms.

Some algorithmic harms, according to Slaughter, stem from design flaws in unsophisticated algorithms. For example, Slaughter explains how Amazon trained a hiring algorithm using data from its largely male applicant pool. Because the algorithm’s data set skewed in favor of men, its hiring recommendations resulted in discrimination against women applicants. Slaughter refers to this as a “faulty input” problem, or, more colloquially, “garbage in, garbage out.”
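
As a rough illustration of this “garbage in, garbage out” dynamic, consider the following Python sketch. It is hypothetical: the numbers, the toy “model,” and the scenario are invented for illustration and do not reflect Amazon’s actual system. The point is only that a learner fit on skewed hiring labels will reproduce the skew.

```python
import random

random.seed(0)

# Invented historical data: 90% of past applicants were men, and women
# were systematically under-hired relative to their actual skill.
def make_record():
    gender = "M" if random.random() < 0.9 else "F"
    skill = random.random()          # true qualification, uniform in [0, 1)
    hired = skill > 0.5              # past decisions tracked skill...
    if gender == "F" and random.random() < 0.5:
        hired = False                # ...but women were under-hired anyway
    return gender, skill, hired

train = [make_record() for _ in range(10_000)]

# Toy "model": the empirical hire rate conditioned on gender -- the kind
# of pattern a real learner would extract from these labels.
def hire_rate(g):
    outcomes = [hired for gender, _, hired in train if gender == g]
    return sum(outcomes) / len(outcomes)

print(f"learned P(hire | M) = {hire_rate('M'):.2f}")
print(f"learned P(hire | F) = {hire_rate('F'):.2f}")
# Equally skilled applicant pools receive very different scores: the model
# has encoded the bias in its training data, not merit.
```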

Even when design flaws are absent, sophisticated algorithms may cause other algorithmic harms, such as “proxy discrimination,” which occurs when an algorithm substitutes a facially neutral characteristic for a protected class. Although a company may not notice proxy discrimination, Slaughter argues that companies should be legally liable even without evidence of intent, particularly since companies that do intend to discriminate may use proxy discrimination to shield themselves.
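
A minimal sketch of how proxy discrimination can arise, again with invented data: the protected attribute is deleted from the model’s inputs, but a correlated, facially neutral feature (a made-up ZIP code standing in for residential patterns) reproduces the disparity anyway.

```python
import random

random.seed(1)

def make_applicant():
    group = "A" if random.random() < 0.5 else "B"
    # Residential segregation (invented here) makes ZIP code a strong
    # proxy for group membership.
    if group == "A":
        zip_code = "10001" if random.random() < 0.9 else "10002"
    else:
        zip_code = "10002" if random.random() < 0.9 else "10001"
    approved = group == "A"          # biased historical outcome labels
    return group, zip_code, approved

applicants = [make_applicant() for _ in range(10_000)]

# The model is "blind" to group: it conditions only on ZIP code.
def approval_rate_by_zip(z):
    outcomes = [ok for _, zip_code, ok in applicants if zip_code == z]
    return sum(outcomes) / len(outcomes)

# The neutral proxy reproduces the group disparity even though the
# protected attribute never appears as an input.
for z in ("10001", "10002"):
    print(f"approval rate, ZIP {z}: {approval_rate_by_zip(z):.2f}")
```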

Slaughter also identifies surveillance capitalism as a risk of increased algorithm and AI use. Surveillance capitalism, the commodification of consumer attention and data, drives companies to use intrusive means of collecting consumer data. Companies use algorithms to target consumer vulnerabilities and capture attention. Even worse, Slaughter claims, children are particularly at risk of having their data illegally collected and of being exposed to disturbing content disguised as children’s media.

The harms caused by algorithmic decision-making reach beyond harm to individual consumers because these harms reinforce systemic biases throughout society and affect the entire economy, Slaughter claims.

All of the algorithmic harms Slaughter identifies either disproportionately impact or exclusively target disadvantaged groups, such as women and people of color. For example, a recent study revealed that an algorithm used in health care settings to identify high-risk patients favored white patients, cutting by more than half the number of Black patients identified for extra care and preventing those patients from getting the care they needed.

Slaughter argues that, in addition to consumer protection concerns, the FTC has an interest in algorithmic harms because of antitrust implications. When companies use algorithms to make pricing decisions, a multitude of competition issues arise. If companies rely on their accumulated data to advertise and compete, new firms lacking data may be prevented from entering the market, Slaughter says. She also claims that algorithmic decision-making may facilitate collusion among firms.

Slaughter contends that four of the FTC’s existing tools can help protect consumers from algorithmic harms. Under its general authority from Section 5 of the FTC Act, the FTC could take action against algorithmic harms and use a novel remedy known as algorithmic disgorgement.

Algorithmic disgorgement, which the FTC used for the first time earlier this year, requires a company to dispose of algorithms trained using “ill-gotten data.”

Slaughter says that the FTC could also target proxy discrimination by enforcing the Equal Credit Opportunity Act, which prohibits credit discrimination against protected classes. In addition, as the FTC is currently seeking comment on changes to the Children’s Online Privacy Protection Rule, Slaughter proposes changes to better protect children. The FTC could also use its power under Section 6(b) of the FTC Act to study algorithmic harms in depth.

Slaughter worries that the FTC’s existing powers could still leave this area underregulated and consumers unprotected from unfair bias. She proposes that Congress pass both legislation targeted at algorithmic decision-making and a federal privacy law. Furthermore, she says that the FTC should use its rulemaking powers under Section 18 of the FTC Act, which would allow the FTC to define what conduct violates existing law.

According to Slaughter, any solution to algorithmic harm must be based in transparency, fairness, and accountability. Transparency would target the black box surrounding the exact makeup of algorithms, allowing regulators to understand the exact causes of algorithmic harm and allowing consumers to understand better how companies use their information. Slaughter points to the success of the General Data Protection Regulation, a European regulation that creates transparency obligations for companies using certain kinds of algorithms. Fairness, according to Slaughter, would require bans on discriminatory uses of algorithms. Accountability includes both requiring companies that use algorithmic decision-making to audit their systems and ensuring redress for consumers who are harmed.
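
One concrete form such an audit could take, sketched below with invented numbers: computing a disparate-impact ratio of the kind used in U.S. employment-selection guidance (the “four-fifths rule”). The helper functions here are hypothetical illustrations, not any agency’s prescribed test.

```python
def selection_rate(outcomes):
    """Fraction of a group's applicants who received a favorable decision."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group, reference):
    """Ratio of a group's selection rate to the reference group's rate."""
    return selection_rate(group) / selection_rate(reference)

# Invented audit data: 1 = selected, 0 = rejected.
men = [1] * 60 + [0] * 40      # 60% selection rate
women = [1] * 30 + [0] * 70    # 30% selection rate

ratio = disparate_impact_ratio(women, men)
print(f"disparate impact ratio: {ratio:.2f}")
# Under the four-fifths guideline, a ratio below 0.8 warrants scrutiny.
print("flag for review" if ratio < 0.8 else "within the four-fifths guideline")
```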

Ultimately, Slaughter recognizes the promise and the peril of algorithmic decision-making. Although she acknowledges that the FTC currently has authority to protect consumers from biased algorithmic decisions, she also urges Congress and the FTC to expand federal law to protect consumers.