
UN Report Reveals That Tech is Not Free from Racial Bias

By Safiyyah Khalique

Earlier this month, the UN Special Rapporteur on Discrimination, Professor Tendayi Achiume, released a report uncovering racial and xenophobic discrimination in new digital technology. The report expands upon how this is utilised within immigration enforcement, and the dangers it has on the safety and rights of refugees and immigrants.

Some might argue that technology is less biased than humans. Countries seeking to use tech in border enforcement may argue that it is a useful strategy for ensuring ‘objective’ decision-making and for collecting data. This is not quite the case, however: several investigations have found that the over-reliance upon new technology in immigration departments can make systemic racism worse.

European Digital Rights (EDRi) carried out research on the effects of technological experiments on marginalised communities crossing borders. It found that technology is neither objective nor less harmful than humans: in fact, using it to police borders can worsen discrimination and cost lives.

EDRi’s research in Greece found that in places where regulation of these technologies is limited, surveillance is typically deployed at the expense of human life. The research also found that such technology violently reinforces border militarisation, resulting in trauma and deaths at the borders. Such incidents include the multiple drownings in the Mediterranean Sea and the separation of children from their parents at the U.S.-Mexico border.

Professor Achiume told The Guardian that she was concerned by how digital technology can be used to routinely breach human rights, adding that it was worrying how humanitarian agencies were engaging in surveillance.

Facial recognition and eye-scanning technology have been two of the methods used to carry out surveillance of migrants. In some cases, their use is required before a migrant can even seek help: Professor Achiume notes a case in which refugees in Afghanistan had to register their eyes before being granted assistance. These methods can be harmful, as they perpetuate discrimination by facilitating the over-policing of racialised communities. They are also deeply dehumanising towards already vulnerable individuals.

Palantir was one of the companies noted in the report, having been found to have assisted in the detention and deportation of migrant families. The company had partnered with the UN’s World Food Programme, providing software services for the programme’s data governance in exchange for approximately $45 million.

How can this be changed for the better? There needs to be better government regulation of the technology being used, alongside consultations with organisations that work with refugees, immigrants and other marginalised groups trying to cross borders. Such groups already exist: ‘Migration and Technology Monitor’, a new group of journalists and academics, calls for the suspension of these border-enforcement technologies until independent human rights assessments have been carried out. This is echoed by Achiume, who agrees that the use of such technologies at the border should be paused until we fully understand the detrimental impacts they may have on migrants and their human rights.
