Targeted By Tech: Decoding Racism in Technology

Technology 25th May 2021 7 mins
Author:
Misha Zala

Over the last year, there’s been something of a global reckoning with racial injustice. The Black Lives Matter movement surged in the aftermath of George Floyd’s murder, and the world paused to reflect on how racism permeates every aspect of society – including technology.

From Instagram filters that make skin appear lighter to automatic taps that can’t detect darker skin tones, prejudice lurks beneath the surface of tech. Algorithms carry inbuilt racial bias: in the US, automatic risk-profiling software disproportionately flags Latinos as illegal immigrants.

Twenty years ago, direct and indirect racism was certainly widespread – but ethnic minorities didn’t have to worry that the technology they used every day was prejudiced against them. Now, they’re using systems functioning on code that is racist by design.

So, let’s delve deeper and explore some of the issues regarding racism in tech.

A major problem for people of colour is facial recognition software – the kind in our phones, cameras and CCTV systems. In a world where tech is increasingly part of daily life, why doesn’t it work equally for everyone – and how can it discriminate against Black, Asian and minority ethnic people?

Smart doorbells use facial recognition software to help people identify their friends and family, or remotely turn away suspicious callers from their home. It’s certainly convenient, but do we fully understand how these devices work? The software is largely unregulated and known to provide false matches – and yet in the US, it has been used to make arrests.

In 2018, computer scientist and digital activist Joy Buolamwini ran 1,270 faces through three widely used facial recognition systems – from Face++, IBM and Microsoft. She found that 35% of darker-skinned women were classified as the wrong gender, versus less than 1% of lighter-skinned men.

When the study was repeated in 2019 to include Amazon’s Rekognition and Kairos, there were only minor improvements. Incredibly, when testing software on herself, Buolamwini found it was better at detecting her face when she, a Black woman, wore a white mask.
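To see how such an audit works in practice, here’s a minimal sketch – illustrative records, not Buolamwini’s actual code or data – of a disaggregated evaluation, where error rates are reported per subgroup rather than as one headline accuracy figure:

```python
from collections import defaultdict

# Illustrative records only. Each entry is (predicted gender, actual
# gender, subgroup), where subgroups combine skin type and gender,
# as in the Gender Shades audit.
results = [
    ("male",   "male",   "lighter male"),
    ("female", "female", "lighter female"),
    ("male",   "male",   "darker male"),
    ("male",   "female", "darker female"),   # misclassification
    ("female", "female", "darker female"),
    ("male",   "female", "darker female"),   # misclassification
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [mistakes, total]
for predicted, actual, subgroup in results:
    counts[subgroup][1] += 1
    if predicted != actual:
        counts[subgroup][0] += 1

# Reporting per subgroup exposes gaps a single overall figure would hide.
for subgroup, (mistakes, total) in sorted(counts.items()):
    print(f"{subgroup}: {mistakes}/{total} wrong ({mistakes / total:.0%})")
```

In this toy sample the overall accuracy is a respectable-looking 67%, yet the darker-female subgroup is wrong two times out of three – exactly the kind of gap a single headline number conceals.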

"The systems are built from pre-existing code designed by other White engineers, and to refine functionality, the code ‘learns’ by looking at more White people."

Rekognition software is used by Ring, which was acquired by Amazon in 2018. As well as powering popular smart doorbells, Rekognition is used in marketing to track and classify images, and for ‘pathing’ – tracking an object, such as a football, through a single video shot.

Following the George Floyd protests, Amazon suspended police use of Rekognition for one year. Previously, the software had been used to compare images against police mugshot databases containing thousands of photos. A related patent filing even states that the system can alert the authorities if it believes it has identified a ‘suspicious person’.
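For context, this is roughly what a face search against a pre-indexed database looks like through Amazon’s public boto3 API. It’s a sketch only – the collection name, photo and threshold are hypothetical placeholders, not details of any real police deployment – but it shows the crucial point: every ‘match’ is just a similarity score, and the threshold decides how many false matches come back:

```python
import boto3  # the standard AWS SDK for Python

client = boto3.client("rekognition")

# A hypothetical probe photo to search for.
with open("probe_photo.jpg", "rb") as f:
    probe = f.read()

response = client.search_faces_by_image(
    CollectionId="example-face-collection",  # a pre-indexed face database
    Image={"Bytes": probe},
    FaceMatchThreshold=80,  # similarity below this is silently dropped
    MaxFaces=5,
)

# Each 'match' is a statistical similarity score, not a confirmed
# identification - set the threshold too low and false matches flood in.
for match in response["FaceMatches"]:
    face_id = match["Face"]["FaceId"]
    print(f"{face_id}: {match['Similarity']:.1f}% similar")
```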

So, why is this occurring? After all, aren’t algorithms designed to eliminate human bias?

"Now, companies know that they should be more inclusive, but this must go beyond quick diversity hires and ensure they retain top BAME talent."

One suspected cause is biased training data – that is, the images the software learns from are weighted too heavily towards White faces. The systems are built on pre-existing code written largely by White engineers, and as the code ‘learns’ to refine its accuracy, it does so by looking at yet more White faces instead of a diverse set of subjects.
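A toy illustration makes the mechanism concrete. The sketch below uses synthetic numbers – not real face data or any vendor’s model – but it shows how a training set skewed 95/5 towards one group produces a model that is markedly less accurate for the under-represented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic stand-in for image features: each group's features sit in
    a slightly different region, as lighter and darker faces do in photos."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The correct answer depends on the group's own feature range.
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > 5 * shift).astype(int)
    return X, y

# The 'biased training data': 95% from one group, 5% from the other.
X_major, y_major = make_group(1900, shift=0.0)
X_minor, y_minor = make_group(100, shift=2.0)
model = LogisticRegression().fit(
    np.vstack([X_major, X_minor]), np.hstack([y_major, y_minor])
)

# Balanced test sets reveal the gap the skewed training set bakes in.
for name, shift in [("over-represented", 0.0), ("under-represented", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: {model.score(X_test, y_test):.1%} accuracy")
```

The model simply fits the group it sees most of; only training and testing on data that reflects everyone exposes – and narrows – the gap.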

What’s more, when the US National Institute of Standards and Technology tested over 100 facial recognition algorithms, it found that Black and Asian faces were misidentified 10 to 100 times more often than White faces.

Additionally, default camera settings generally aren’t calibrated to capture Black faces well. This results in lower-quality database images and more false matches – in other words, more misidentified Black people. And so the cycle of inequality goes on.

Unsurprisingly, the technology workforce itself lacks diversity. Here in the UK, the British Computer Society estimates that just 1-2% of tech workers are Black. Meanwhile, in the US, only 6% of Apple’s technical workers are Black – less than half of the 13% share in the general population.

It should be noted that Asian people are well represented within tech, but this alone isn’t enough to ensure AI doesn’t discriminate against Black, Asian and minority ethnic people. It’s been suggested that systems work best on people of the same race as the engineers who wrote the code, which highlights the need for a diverse workforce. Companies now know they should be more inclusive, but this must go beyond quick diversity hires and extend to retaining top Black, Asian and minority ethnic talent.

As a woman of Indian descent, back in the days when travel wasn’t illegal, I had learnt to bypass the automatic gates at the UK border and go straight to the staffed desks. Otherwise, I’d have to wait for several minutes until the computer told me to seek assistance.

I had put this down to a problem with the chip in my passport, but perhaps the software just couldn’t recognise my face? Does it think I’m an illegal immigrant, or a criminal?

Tech is increasingly involved in decisions we make – from managing the heating in our homes to applying for car insurance. So, the tech industry has a moral responsibility to work for all of us. It must eliminate inbuilt racial bias, test on diverse subjects and build systems that do not perpetuate harmful racial stereotypes. And then, we can all enjoy the benefits of a smart, well-connected home as equals.

Find out more about Misha’s life and work here.