Governing Algorithmic Discrimination

Scholar argues that anti-discrimination law alone cannot address bias hidden in algorithms.

A job applicant’s first interviewer today may be a machine. Employers now rely on artificial intelligence (AI) to screen candidates, monitor employees, and decide who earns promotions.

In a recent article, Pauline Kim, a professor at Washington University in St. Louis School of Law, explores how AI tools test the limits of current anti-discrimination law. She argues that ensuring fairness in algorithmic decision-making will require combining traditional anti-discrimination protections with broader regulatory systems for overseeing algorithms.

Kim notes that employers now deploy AI across numerous stages of the application process. Algorithms target job advertisements, screen resumes, and even evaluate candidates’ facial expressions or tone of voice during interviews. Kim refers to these tools as predictive algorithms, explaining that they mine large datasets to predict a worker’s likely performance or fit.

Although marketed as neutral and objective, AI tools may worsen the very inequalities they are employed to eliminate, Kim argues. Because predictive algorithms are designed to recognize patterns, they may reproduce past patterns of discrimination even when employers do not intend such results, Kim explains. For example, if a company’s historical data reflect existing gender or racial disparities, an algorithm trained on that data may favor applicants who resemble the firm’s current workforce. Kim notes that Amazon encountered this problem when its screening model rated applicants from women’s colleges lower because its training data reflected a largely male workforce.
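To make that mechanism concrete, consider a minimal sketch, not drawn from Kim’s article, of how a screening model trained on skewed historical hires can learn to penalize a feature that stands in for gender. The data, feature names, and scikit-learn setup below are all illustrative assumptions.

```python
# Minimal, hypothetical sketch: a model trained on historically skewed
# hiring data learns to penalize a gender-correlated feature. All data
# and feature names are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicant pool: a skill score plus a binary feature that
# correlates with gender (e.g., attended a women's college).
skill = rng.normal(size=n)
womens_college = rng.random(n) < 0.15

# Biased historical labels: past recruiters hired on skill but also
# systematically disfavored this group, mirroring a male-dominated workforce.
hired = (skill - 1.5 * womens_college + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical penalty: the learned weight on the
# proxy feature is strongly negative even though no one "intended" it.
print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))
```

Nothing in this sketch “intends” to discriminate; the negative weight on the proxy feature simply reflects the pattern in the historical labels, which is the dynamic Kim describes.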

Kim explains that discriminatory algorithms are difficult to regulate for several reasons. Because automated systems evaluate people through large-scale pattern recognition, the resulting harms fall on entire groups rather than on identifiable individuals, Kim notes. She argues that this group-level impact poses a challenge for laws that were designed to remedy personal acts of bias rather than structural inequities.

Algorithms also analyze and combine numerous data points in a way that can be difficult to untangle, making it challenging to determine whether a protected trait, such as race or gender, plays a role in a particular outcome, Kim argues. She explains that responsibility is further blurred when third-party vendors design and operate AI tools on behalf of employers.

Whether a predictive algorithm is discriminatory depends on how it is used, Kim argues. She contends that the data used to train algorithms may be biased in one context but neutral in another, meaning that the same model can produce different effects depending on where and how it is applied. As a result, no predictive model can guarantee unbiased results, Kim argues.

Kim identifies two main approaches to addressing algorithmic discrimination. One focuses on how established anti-discrimination law applies when employers rely on AI tools to make workplace decisions. The other takes a broader governance perspective, treating discriminatory algorithms as part of a wider problem of biased or unaccountable technologies that influence opportunities and access in everyday life, not just in employment. Kim considers both frameworks but notes that existing anti-discrimination laws have proven inadequate for addressing how algorithms create or reinforce bias.

Anti-discrimination laws form the foundation for promoting fairness in the workplace, Kim explains. These laws prohibit employers from making decisions based on protected traits such as race, sex, or disability. Kim argues that, in principle, these same laws should apply whether a hiring decision is made by a person or an algorithm. She contends, however, that anti-discrimination law was designed for human decision-makers and can be difficult to apply to automated systems.

Kim distinguishes between “direct” and “indirect” discrimination. She explains that direct discrimination occurs when an employer takes an adverse action because of a protected characteristic, while indirect discrimination occurs when facially neutral policies have unequal effects on certain groups.

Neither anti-discrimination approach accounts for the structural nature of algorithmic bias, Kim argues. She explains that laws against direct discrimination, also known as disparate treatment, work best when decision-makers act with intent, but algorithms make decisions without conscious intent. Because algorithms act through data and design rather than deliberate choice, proving that an algorithm discriminated “because of” a protected trait is almost impossible, Kim contends.

Kim argues that although laws targeting indirect discrimination, or disparate impact, address unequal outcomes rather than intent, this approach still struggles to capture how algorithms create bias. She explains that workers must show that an algorithm disproportionately harms a protected group, while employers can defend the algorithm by claiming that it is a business necessity. Kim notes that this framework was designed for traditional hiring practices and can be difficult to apply to complex algorithms that rely on massive datasets that most employees cannot access.

Kim also notes that removing protected traits from training data does not necessarily eliminate indirect discrimination and may make bias harder to detect, further complicating the application of anti-discrimination law. She explains that algorithms can still rely on proxies for protected traits, such as geography or education, which reproduce the same disparities while concealing their origins.
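A brief sketch can illustrate the proxy problem Kim describes. In the hypothetical setup below, the protected trait is withheld from training entirely, yet a correlated geographic feature carries the same signal; the data, feature names, and thresholds are invented for illustration.

```python
# Hypothetical sketch: dropping the protected trait from training data
# does not remove disparate impact when a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.random(n) < 0.3  # protected trait (never shown to the model)
zip_signal = group.astype(float) + rng.normal(scale=0.7, size=n)  # geographic proxy
skill = rng.normal(size=n)

# Historical outcomes disadvantage the protected group directly.
label = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train "blind": the protected trait is excluded, but the proxy stays in.
X = np.column_stack([skill, zip_signal])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Selection rates still diverge across groups, via the proxy alone.
print("selection rate, group A:", pred[~group].mean().round(3))
print("selection rate, group B:", pred[group].mean().round(3))
```

The model never sees the protected trait, yet selection rates diverge because the proxy carries the same information, which is why Kim warns that blinding the data can conceal bias rather than remove it.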

The limitations of anti-discrimination law reveal the need for an approach grounded in governance rather than individual liability, Kim argues. She notes that governance shifts the focus from individual wrongdoing to institutional responsibility, addressing how algorithms are designed, monitored, and audited.

Kim argues that governance approaches operate proactively rather than reactively. She points, for example, to regulatory models that require companies to conduct bias audits or disclose how their algorithms make employment decisions. These forms of governance, Kim argues, can catch problems before they harm workers, rather than waiting for individuals to file discrimination claims that are costly, time-consuming, and difficult to prove without access to an employer’s data.
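As a rough illustration of what such an audit might compute, the sketch below applies the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines, which compares each group’s selection rate to the highest group’s rate. The function and audit numbers are hypothetical, and the rule is only a screening heuristic, not a complete legal test.

```python
# Sketch of a check a bias audit might run: the "four-fifths" rule of
# thumb flags a group whose selection rate falls below 80% of the
# highest group's rate. Data and function are illustrative.
def four_fifths_check(selected_by_group: dict[str, int],
                      applicants_by_group: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest rate.

    A ratio below 0.8 flags potential adverse impact under the heuristic.
    """
    rates = {g: selected_by_group[g] / applicants_by_group[g]
             for g in applicants_by_group}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

# Hypothetical audit numbers for an automated resume screen.
ratios = four_fifths_check(
    selected_by_group={"group_a": 480, "group_b": 270},
    applicants_by_group={"group_a": 1000, "group_b": 900},
)
print(ratios)  # {'group_a': 1.0, 'group_b': 0.625} -> group_b falls below 0.8
```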

Kim cautions, however, that governance alone cannot replace anti-discrimination law. Systemic oversight may improve institutional practices, but it does not offer individual workers a path to relief when they are harmed, Kim argues. She explains that governance regimes focus on prevention and compliance, not the compensation or redress necessary to remedy individual harms.

Both anti-discrimination and governance frameworks are essential to addressing discriminatory algorithms, Kim argues. Governance can help mitigate structural harms before they take root, while anti-discrimination law ensures accountability when bias occurs, Kim contends. She argues that using these frameworks together provides the layered protection needed to promote fairness in automated workplaces.