Canada’s plans to use AI in immigration are risky and unethical, say Toronto human rights researchers
November 21, 2018
The Canadian government has been testing the use of AI in immigration since 2014
A report released in September 2018, titled “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System,” questions two proposed pilot projects that aim to integrate AI into immigration screening in Canada. Written by Petra Molnar and Lex Gill, human rights researchers at the University of Toronto’s Faculty of Law, the report cautions against dependence on AI for such decisions, warning that it could have unfair and irreparable consequences for the hundreds of thousands of immigrants, including tens of thousands of refugees, who seek to enter Canada every year.
According to Molnar, Canada has been experimenting with the use of predictive analytics in immigration since at least 2014. And in a tender notice issued to the private sector in April 2018, the Canadian government expressed interest in making immigration claims more efficient by using AI-powered data analysis to support decisions on Pre-Removal Risk Assessment applications and Humanitarian & Compassionate consideration applications. The report underlines that these two categories of applications concern some of the most vulnerable people in society.
AI algorithms still far from perfect
Molnar cites recent mistakes made by similar algorithms, such as a case that came to light in May 2018 in which more than 7,000 students in the UK faced deportation after being wrongly accused of cheating on a language test. The AI-powered voice analysis used to detect cheating was later found to have an error rate of roughly 20%.
Also in 2018, Reuters investigated a striking increase in the number of detained immigrants denied bond while awaiting trial. Since 2013, US Immigration and Customs Enforcement (ICE) has used a computer-based Risk Classification Assessment to decide whether a detainee can be released on bond, based on factors like flight risk and danger to society. Reuters discovered that the algorithm had been modified in 2017 to recommend “detain” in 100% of cases. ICE personnel can override the recommendation, but that override process is, by all accounts, Kafkaesque.
Canada at risk of violating fundamental human rights
Molnar criticizes the use of automated decision-making systems in refugee and immigration claims, saying these are complex cases that cannot be fully understood by technology and are better left to people like immigration officers, border agents and other officials. She warns that if Canada adopts artificial intelligence in its immigration offices, the resulting bias and discrimination may put the country at risk of breaching its legal and ethical obligations under instruments such as the Canadian Charter of Rights and Freedoms and the United Nations Convention Relating to the Status of Refugees.
Molnar also expresses her discomfort with the growing partnerships between governments and the private sector in developing technology driven by data collection. Amazon’s “Rekognition” technology, for example, uses deep learning to identify and track individuals in crowded public spaces. The American Civil Liberties Union has cautioned that such technology could end up in the hands of authoritarian regimes and be used for nefarious purposes.
Of course, technology is hard to stop once it is developed, and hard to monitor once it spreads into the wider world. The partnership between data-hungry corporations and control-hungry governments may be the perfect recipe for an Orwellian world. Part of the solution, according to Molnar, is for private enterprises to self-regulate the ethics of their own research and development. But the report also calls on the government to lay down ethical guidelines and to assign a task force to monitor the effects of AI on human rights.
The world is not ready to see AI play a role in immigration decisions
In an interview with The Globe and Mail, Molnar reminds us that “we often falsely assume that technology is mechanical and objective, even though its algorithms are designed by human beings who hold various biases.” And although humans are also flawed, we have the capacity to correct for these flaws based on an empathetic, compassionate point of view, something that is still underdeveloped in AI.
Canada is a major player when it comes to immigration, and the precedents it sets will likely have consequences for asylum seekers around the world. In recent years, we have seen countless examples of bias in AI technology. Is it ethical to adopt AI technology in such sensitive areas before eliminating the very real risks of bias and inaccuracy?