Article by Rudi-Ann Miller
Several countries have turned to emerging technologies in an attempt to solve the intractable security and efficiency problems associated with the influx of migrants. These governments have integrated advanced digital technologies—many of which were previously used in military capacities—into their border security efforts. This technological toolbox often includes instruments such as drones, satellite surveillance, biometric readers and scanners, and artificial intelligence (AI). AI, in particular, has been touted as a fast and effective way to reduce the backlog of permanent residency and asylum requests.
But while the implications of using these technologies in this context are far-reaching, their proliferation is occurring virtually unchecked. These technologies have the potential to threaten the fundamental freedoms of extremely vulnerable populations, yet they remain unconstrained by rigorous scrutiny, ethical safeguards, or compliance mechanisms. Should AI be used to make decisions that have life-or-death consequences? Can we trust that the creators of these technologies will have all human interests at heart? Can AI truly improve human decision-making and remove bias from the immigration process?
This article, which draws mostly from the research presented in Petra Molnar and Lex Gill’s 2018 report, Bots at the Gate, illustrates how AI has been introduced into the immigration decision-making processes of the United States (US), Canada, and the European Union (EU). It examines how AI infringes on human rights, including the rights to life, non-discrimination, security, and due process. It proposes that, beyond efficiency, AI is a tool used to validate the protectionist immigration policies of the developed world.
I. AI, by virtue of how it learns to process information, often produces biased output
AI refers to a group of technologies that use methods such as machine learning and predictive analytics to aid or replace human judgment. AI systems operate by processing large amounts of data with algorithms to generate an output, learning from patterns or features in the data as they do so.
To begin this learning process, a human programmer feeds the AI system a basic set of “training data,” which could include, for instance, past decisions on immigration applications, photographs, or statistics. The AI system then learns to distinguish correct from incorrect outputs based on how the human programmer has classified and interpreted this data. As its accuracy improves, the AI system is exposed to greater amounts of data.
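This basic loop can be made concrete with a short sketch. The following Python example is purely illustrative: the features, labels, and model are invented for this article and do not reflect any agency’s actual inputs or system.

```python
# Minimal sketch of the supervised-learning process described above.
# Features, labels, and model choice are illustrative assumptions,
# not a description of any real immigration system.
from sklearn.linear_model import LogisticRegression

# "Training data": past applications encoded as numeric features
# (here, hypothetically: [years_of_residence, prior_refusals]),
# labeled with how a human officer decided each case.
X_train = [[5, 0], [1, 2], [8, 1], [0, 3]]
y_train = [1, 0, 1, 0]  # 1 = approved, 0 = denied (the human labels)

model = LogisticRegression()
model.fit(X_train, y_train)  # the system learns patterns in the labels

# A new application is scored against patterns learned from past decisions.
print(model.predict([[3, 1]]))  # e.g., [1] -> "approve"
```

Whatever the human labels encode, the model internalizes; there is no step at which the system independently assesses whether those past decisions were fair.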
But AI is only as good as the training data it receives. AI systems learn by adopting the programmer’s interpretations of the data; if the training data is itself biased, AI propagates those biases. For example, most AI systems used in facial recognition software were fed training data that reflected the identities of their programmers—white and male. As a result, much facial recognition software is incapable of accurately identifying people of color. The human “values, assumptions, biases, shortcomings, and blind spots involved in the selection or substantive content of training data—as well as the types of input data deemed ‘relevant’ to an automated system’s decision-making process—impact outputs.” But unlike a human, whose biases may not surface in every decision, an AI system trained on biased data is substantially certain to reproduce the biased output every time it is used. The assumption that technology is inherently unbiased has therefore led to unchecked use and abuse when, in fact, the very processes by which these machines learn to make decisions embed bias.
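A toy example shows how deterministic this propagation is. Assume, hypothetically, that historical decisions were biased against one nationality group and that this attribute is visible to the model; the data below is fabricated solely to illustrate the mechanism.

```python
# Sketch of bias propagation, using the same illustrative setup as above.
# The data is fabricated: group B applicants were historically denied
# despite qualification scores identical to group A's.
from sklearn.tree import DecisionTreeClassifier

# Feature vector: [qualification_score, group]  (group: 0 = A, 1 = B)
X_train = [[9, 0], [7, 0], [8, 0],   # group A -> approved
           [9, 1], [7, 1], [8, 1]]   # equally qualified group B -> denied
y_train = [1, 1, 1, 0, 0, 0]         # biased historical labels

model = DecisionTreeClassifier().fit(X_train, y_train)

# Two applicants identical in every respect except group membership:
print(model.predict([[8, 0]]))  # [1] -> approved
print(model.predict([[8, 1]]))  # [0] -> denied, on every single run
```

A human officer might overcome such a prejudice on a given day; the trained model, by construction, cannot.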
II. The United States, Canada, and the European Union use AI in immigration decision-making processes
Despite this significant probability of bias, AI has increasingly been used to augment or replace human decision-making in immigration. At the US-Mexico border, Immigration and Customs Enforcement (ICE) officials use AI to determine whether a migrant should be placed in immigration detention. Since 2013, ICE’s Risk Classification Assessment (RCA) tool has used statistics to determine whether a migrant poses a flight risk or a danger to society. Previously, the RCA would review a migrant’s record and interview transcripts and recommend that ICE either “detain” or “release” the individual. In 2017, however, in response to US President Donald Trump’s executive order on border security, ICE removed the “release” option. Now, the RCA recommends detention for every person subjected to its analysis. As a result of this change, the number of migrants placed in detention tripled from the year prior, to more than 43,000 in 2017.
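Expressed as code, the reported change reduces to hard-coding one branch of the decision. The sketch below is a hypothetical reconstruction; public reporting describes only the RCA’s inputs and outcomes, not its internals, so the scoring logic and thresholds here are invented.

```python
# Hypothetical reconstruction of the RCA change described above.
# The risk thresholds are invented; only the removal of the
# "release" outcome is drawn from public reporting.
def rca_recommendation(flight_risk: float, danger_risk: float,
                       release_enabled: bool) -> str:
    if not release_enabled:
        # Post-2017 configuration: with "release" removed, every
        # assessed person receives a detention recommendation,
        # regardless of the risk scores.
        return "detain"
    if flight_risk > 0.5 or danger_risk > 0.5:
        return "detain"
    return "release"

# The same low-risk individual, before and after the 2017 change:
print(rca_recommendation(0.1, 0.1, release_enabled=True))   # "release"
print(rca_recommendation(0.1, 0.1, release_enabled=False))  # "detain"
```

Once the branch is removed, the system’s risk analysis is decorative: the recommendation no longer depends on its inputs.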
Similarly, Canada has experimented with AI technologies in immigration since at least 2014. As Petra Molnar and Lex Gill explain, the Canadian government has been developing an AI system to automate the work of immigration officials in reviewing applications. These systems make determinations about the truthfulness or accuracy of a migrant’s application, such as the authenticity of their marriage to a Canadian citizen or whether the applicant should be flagged as a risk. Canada’s use of AI is especially concerning because the system has been applied to the review of Humanitarian and Compassionate applications and Pre-Removal Risk Assessments, which are used by migrants fleeing war and violence. Using a flawed technology in such a context could have life-or-death ramifications because the principle of non-refoulement may not be upheld: rejected applications could send migrants back into the dangerous environments from which they fled.
The EU has also begun testing AI technology. In 2018, iBorderCtrl, an AI lie detector, began trials in Hungary, Latvia, and Greece at those countries’ borders with non-EU nations. iBorderCtrl functions as a virtual border guard that asks people questions about their previous movements and attempts to discern the truth of their statements. Applicants answer these questions while facing a webcam, and the system analyzes and rates dozens of facial micro-gestures. If iBorderCtrl determines that the person is telling the truth, they are granted entry. If the system suspects a person of lying, they are transferred to a human border agent, who reviews the information and makes an assessment. Systems such as iBorderCtrl have high error rates and often make faulty assumptions about what constitutes credible information. The system may not be cognizant of differences in cross-cultural communication, such as how head-nodding may mean “yes” in some cultures but “no” in others, and it may not account for how trauma affects memory.
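The triage flow described above can be sketched as follows. Because iBorderCtrl’s model is not public, the micro-gesture scoring below is a placeholder, and the threshold is assumed; only the routing between automated clearance and human review follows the description in this article.

```python
# Illustrative sketch of the iBorderCtrl-style triage flow.
# The scoring function is a placeholder (random), since the actual
# micro-gesture model is not public; the threshold is also assumed.
import random

def micro_gesture_truth_score(answers: list[str]) -> float:
    """Stand-in for the system's rating of facial micro-gestures."""
    return random.random()  # the real system rates dozens of cues

def border_triage(answers: list[str], threshold: float = 0.7) -> str:
    score = micro_gesture_truth_score(answers)
    if score >= threshold:
        return "entry granted"
    # Suspected deception: escalate to a human border agent.
    return "referred to human border agent"

print(border_triage(["visiting family", "two prior trips"]))
```

Note that the error modes the article identifies, such as misread cross-cultural cues or trauma-affected memory, live inside the scoring function, where neither the traveler nor the reviewing agent can inspect them.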
III. AI could have an extraordinary impact on the human rights of migrant populations
Without proper safeguards, the use of discriminatory and biased AI systems may severely affect the human rights enshrined in international treaties such as the International Covenant on Civil and Political Rights, the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention Relating to the Status of Refugees.
Migrant communities are particularly vulnerable to human rights abuses because of their race, national origin, and legal and economic status. AI could entrench discrimination by consistently making decisions that adversely affect specific groups of people. AI also implicates due process: when a machine replaces a human decision-maker, migrants may not receive a proper individualized assessment.
AI may also change our fundamental understanding of the legal system as human decision-makers are augmented or replaced by automated ones. For example, if a migrant pursues a legal challenge to an immigration decision made by AI, to whom should the formal complaint be directed? Does responsibility lie with the algorithm itself, with the immigration officer who could have overruled the AI system’s output, or with the AI system’s creator? Should the AI system have an independent legal personality that can be taken to court? Judges will have to start grappling with these basic questions of responsibility and redress.
IV. Recommendations
Technology has begun to redefine global migration and human rights law. Before governments further adopt and normalize the use of AI in immigration decision-making, however, we must examine its use more critically. Decisions to augment or replace human decision-making with AI must be more transparent. Cost-benefit analyses of immigration technologies must give greater weight to human rights than to bureaucratic efficiency or protectionism. Government officials must be held accountable for outcomes. Lastly, interdisciplinary groups of scholars and human rights activists must work together to illuminate the issues and potential impacts. As first movers in this area, the US, Canada, and the EU each have a duty to set an example by regulating the use of these technologies in accordance with human rights doctrines.