The Rise of AI and Technology in Immigration Enforcement

Scholars explore how technological advancements impact immigrants’ privacy rights.

As the use and capabilities of artificial intelligence (AI) and related technologies grow, so do the potential risks they pose in immigration enforcement.

In an Executive Order issued late last year, President Joseph R. Biden established new guidelines for the safe use of AI. The Order outlines how these guidelines can better protect Americans from the privacy risks posed by AI. The only mention of immigrants in the Order, however, appears in the section titled “Promoting Innovation and Competition,” which focuses on helping high-skilled immigrants study and work in AI fields in the United States.

In recent years, law enforcement officials have relied increasingly on AI tools for border and immigration management. In 2021, the U.S. Department of Homeland Security received over $780 million for technology and surveillance at the border.

As the U.S. government grapples with complex immigration challenges, the role of AI in immigration enforcement has taken various forms, including facial recognition systems at border crossings and algorithms designed to predict the potential outcomes of asylum claims.

Proponents of AI use in immigration enforcement argue that these technologies facilitate expedited processing and vetting of cases, with the potential to shrink the backlog of cases facing immigration courts and agencies. They contend that AI systems enable authorities to allocate resources more effectively to ensure border safety.

Despite the potential benefits, critics question the operational efficiency of AI tools and the risks AI poses for immigrants’ privacy rights and civil liberties.

Privacy and civil liberties advocates argue that using AI in immigration enforcement may erode privacy rights and “infringe on the human rights of both foreign and U.S. nationals.” They also raise concerns about the accuracy of AI systems, pointing to biases embedded in algorithms that disproportionately affect minority groups.

Amid the increasing use of AI by various sectors, governments worldwide are seeking to establish regulatory frameworks that harness the potential of AI while mitigating its risks. For example, in 2021, the European Union proposed the Artificial Intelligence Act (AI Act). The drafters of the AI Act sought “to ensure better conditions for the development and use” of AI technologies. Similar to the EU’s General Data Protection Regulation, proponents of the AI Act believe that the law has the potential to be “the global standard” for AI regulation, use, and privacy protections.

Despite the EU’s “landmark” AI regulation, immigrants’ rights advocates criticize the legislation for failing to protect the most vulnerable—immigrants.

In this week’s Saturday Seminar, scholars and advocates for immigrants’ rights examine global AI policies that impact migrants and suggest reforms to protect against privacy and rights violations.

  • As the use of digital border control technologies in immigration enforcement increases, a nuanced ethical framework is needed to protect migrants from privacy and liberty violations, argues Natasha Saunders of the University of St. Andrews in an article for the European Journal of Political Theory. Saunders notes that although states have the right to enforce immigration laws, doing so with digital technologies—such as data profiling, biometrics, and data sharing—poses ethical challenges. She explains that these technologies not only risk infringing on individuals’ liberties and privacy, but may also perpetuate discrimination by profiling based on biased or incomplete data. To address these challenges, Saunders calls for data protection legislation and other reforms in digital immigration enforcement practices.
  • In an article for Justice, Power and Resistance, Hanna Maria Malik and Nea Lepinkäinen of the University of Turku argue that although automated decision-making offers a potential solution to Finland’s asylum application backlog, its benefits should be weighed against its potential harms. Malik and Lepinkäinen note that Finland’s strong artificial intelligence accountability mechanisms make it a helpful case study for exploring the impact of AI algorithms. The authors argue that despite a broad interest in protecting human rights in Finland, economic efficiency motivates the government’s use of AI in public administration. They warn that the economic concerns driving AI policies may undermine the values that should drive immigration policy.
  • The Canadian Government’s use of predictive analysis and automated decision-making systems in immigration decisions may lead to privacy breaches and undermine immigrants’ rights to be free from discrimination, contends Mayowa Oluwasanmi, a graduate student at Queen Mary University of London, in an article for Federalism-e. Oluwasanmi warns that automated immigration decisions can reinforce existing biases and incorrectly categorize “people from a certain group as being ‘higher risk’” or eligible for further vetting. In addition, the author notes that automated decision systems may infringe on immigrants’ privacy rights because these systems require mass amounts of data accumulated through surveillance practices that disproportionately target marginalized communities. Oluwasanmi argues that such practices may violate Canadian and international human rights laws.
  • In an article for Data & Policy, Karolina La Fors and Fran Meissner of the University of Twente question whether the use of AI in border enforcement can ever be ethical. La Fors and Meissner apply a “guidance-ethics approach”—which considers the feasibility of dialogue between stakeholders in the development of technology—to evaluate the ethics of border AI from the perspective of migrants. La Fors and Meissner conclude that the ethics of such technology appear “bleak” under this framework. They explain that power differentials between governments and migrants make meaningful dialogue unlikely. To make border AI more ethical, La Fors and Meissner suggest that policymakers should develop alternative approaches in collaboration with migrants who are impacted by these AI tools.
  • Although the use of databases, surveillance technology, and biometric data offers some benefits, these collection methods also raise significant ethical and legal issues when used in immigration enforcement, argues practitioner Inma Sumaita in an article for the University of Cincinnati Intellectual Property and Computer Law Journal. Sumaita notes that state legislation, such as Illinois’s Biometric Information Privacy Act, which requires opt-in consent before an agency can collect a person’s biometric information, could inspire similar protections nationally. She also suggests that the United States seek guidance from European rights frameworks in developing these protections. To ensure that technological advancements in immigration enforcement practices do not harm immigrants’ privacy rights, Sumaita urges the U.S. Congress to perform its “constitutional duty to protect the substantive rights of all individuals.”
  • Governments should be more transparent about their use of automated decision-making in immigration, practitioner Alexandra B. Harrington contends in an article for the New York State Bar Association. Harrington warns that automation bias—a tendency of people to believe an algorithmic output “even when it contradicts their instincts or training”—could lead to the deprivation of migrants’ rights. Because governments withhold information about how algorithms are used at the border, she argues, experts cannot determine whether or how rights are being violated. Harrington suggests that international lawmakers address this problem by creating uniform frameworks for the use of automated decision-making in immigration assessments. To increase transparency, she contends that such frameworks should include human review of some automated decisions.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.