How the FTC Could Regulate Algorithmic Discrimination

Scholars argue that the FTC can apply its existing authority to combat discriminatory AI.

New regulatory problems often require fresh laws and agency mandates. But one regulator may already have all the authority it needs to confront one of the greatest problems of the artificial intelligence (AI) age.

In a recent article, Andrew Selbst of UCLA Law and Solon Barocas of Microsoft argue that the Federal Trade Commission (FTC) has unique strengths that it could wield to make AI less discriminatory.

Crucially, the FTC could take bold action against AI-based discrimination under its existing legal authority, without any additional congressional action, Selbst and Barocas contend.

The problem of AI-based discrimination has many possible solutions, according to Selbst and Barocas. Sometimes, businesses in domains regulated by civil rights laws, such as credit and housing, discriminate, and individuals can sue them under those laws.

But some cases of AI-based discrimination fall outside the purview of anti-discrimination law, Selbst and Barocas explain. They offer the example of a mobile phone whose facial recognition software systematically performs worse on individuals from certain demographic groups. For these more diverse and widespread forms of AI-based discrimination, Selbst and Barocas suggest that individuals require legal remedies beyond anti-discrimination law.

Specifically, Selbst and Barocas argue that the FTC’s existing authority under Section 5 of the FTC Act to curb unfair trade practices is well-suited to the task of preventing businesses from using discriminatory AI. They write that Section 5 can both “replicate the successes of civil rights laws” and have an even broader reach than those laws.

Section 5 makes it illegal for businesses to engage in unfair acts or practices in commerce, so long as those acts are likely to cause substantial injury to consumers, among other factors. Under this provision, the FTC can find that a discriminatory AI tool significantly harms consumers, Selbst and Barocas note.

They endorse a broader definition of “substantial injury” that includes harms to the “social standing of a group of consumers overall.” If an AI tool is biased against a demographic group, the FTC can label its use an unfair practice because the tool offends the dignity of an entire social group, a harm that Selbst and Barocas claim is “substantial” under Section 5.

But a business using a discriminatory AI tool could defend itself by arguing that the tool provides consumer benefits that outweigh its harms. Indeed, Section 5 requires this type of cost-benefit analysis.

Selbst and Barocas counter that the FTC has historically engaged in relatively simple cost-benefit analyses in Section 5 enforcement actions. They note, though, that when the FTC has cracked down on discrimination in particular, it has required that a business’s costs be “strictly necessary to achieve the benefits.”

If the FTC can prove that a product’s overall benefits do not require the discriminatory harms posed by its particular features, then the agency could likely satisfy that test, Selbst and Barocas suggest.

Not only does the FTC have existing authority to tackle discriminatory AI, but it also has comparative advantages over both other forms of legal action and other regulatory agencies, Selbst and Barocas contend.

They note that anti-discrimination laws limit plaintiffs to suing specific categories of decision-makers. For example, federal employment law applies to employers, employment agencies, and labor organizations, but not to vendors of the assessments used to determine eligibility for certain jobs. Selbst and Barocas claim that these distinctions close off certain avenues of legal redress to consumers.

In contrast, the FTC could take action against any actor that uses AI toward discriminatory ends, Selbst and Barocas argue. These actors include vendors and their clients, as well as entities that anti-discrimination laws already cover, such as employers, credit institutions, and landlords.

An FTC action also dodges other common barriers in anti-discrimination suits, such as the need to form a class action, Selbst and Barocas note. And under Section 5, the FTC need only prove a likelihood of injury, whereas plaintiffs in federal court must allege “concrete” and “particularized” harm.

Selbst and Barocas further argue that the FTC is better positioned than other regulatory bodies to handle claims against discriminatory AI. For example, the FTC can investigate, take action against businesses in its own tribunals, sue in federal court, and issue rules.

In fact, the FTC has taken a recent interest in tackling discriminatory AI, Selbst and Barocas observe. In the span of a few months last year, the FTC announced a proposed rule intended to protect data security and address algorithmic discrimination, and it argued for the first time in an enforcement action that the FTC Act outlaws some forms of discrimination.

By comparison, the Equal Employment Opportunity Commission can neither issue rules nor bring internal enforcement actions for private sector claims, Selbst and Barocas explain. They also emphasize that the U.S. Department of Housing and Urban Development often has to initiate costly litigation to crack down on housing discrimination.

But for the FTC to succeed in impeding AI-based discrimination, the agency will have to build on its recent rulemaking and enforcement efforts, Selbst and Barocas argue.

The FTC will also have to overcome some potential obstacles, Selbst and Barocas caution. For example, they highlight the dissent of two FTC commissioners in an enforcement action in which the agency held that a car dealership group’s racially discriminatory fees and borrowing costs violated Section 5. Selbst and Barocas predict that pro-business organizations, seizing on that dissent, will challenge FTC enforcement against discriminatory AI in court.

If the FTC can overcome such obstacles, it could fill some gaps in civil rights laws and take a more flexible and comprehensive approach to combatting AI-based discrimination, Selbst and Barocas conclude.