Editor's Notes

2021-10-11 Zenia Simonella

When Discrimination Comes from an Algorithm

Companies increasingly rely on algorithms to speed up their personnel selection processes. However, the scant attention paid to inclusion in the design and use of these tools can produce genuinely discriminatory practices against male and female workers of different ethnic origins, abilities/disabilities, gender identities, and so on: a risk companies need to be aware of.


A short time ago, the website of the magazine Internazionale published an interesting video entitled "When Artificial Intelligence Decides Whom To Hire."[1]

The video shows that the lack of diversity among programmers and the scant attention paid to inclusion have produced algorithms that discriminate against male and female workers of different ethnic origins, abilities/disabilities, gender identities, and so on.

These algorithms are increasingly used by companies in their selection processes. There is now talk of 'algorithmic hiring,' that is, the use of software "trained" on certain assumptions to determine whether a candidate is suitable for the position to be filled. So far, no problem, except that by setting a series of filters it is possible to exclude an entire group of people who share certain characteristics, for example those in a given age bracket or with a certain type of education, thereby engaging in outright discrimination (see the sketch below).[2] Algorithms therefore incorporate the prejudices of those who design or configure them, but they become potentially more dangerous because they are cloaked in an aura of neutrality: being automated processes, subjectivity does not appear to be involved.
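To make the mechanism concrete, here is a minimal sketch, in Python, of how a seemingly neutral pre-screening filter can exclude whole groups before any human reads a CV. All field names, thresholds, and candidates are hypothetical, invented only for illustration; they do not come from the article or from any real hiring system.

```python
# Minimal sketch (hypothetical data and field names): how "neutral" screening
# filters in an algorithmic-hiring pipeline can silently exclude whole groups.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    age: int
    degree: str  # e.g. "economics", "engineering", "humanities"

# Hypothetical filter configuration set by the recruiter or the software vendor.
MAX_AGE = 35
ACCEPTED_DEGREES = {"economics", "engineering"}

def passes_screen(c: Candidate) -> bool:
    """Automated pre-screen: it looks objective, but the thresholds encode
    the assumptions (and possible prejudices) of whoever configured them."""
    return c.age <= MAX_AGE and c.degree in ACCEPTED_DEGREES

candidates = [
    Candidate("A", 29, "economics"),
    Candidate("B", 47, "economics"),   # excluded purely by age bracket
    Candidate("C", 31, "humanities"),  # excluded purely by field of study
]

shortlist = [c.name for c in candidates if passes_screen(c)]
print(shortlist)  # ['A'] -- the exclusion happens before any human sees the CV
```

The point of the sketch is that nothing in the code mentions a protected characteristic explicitly, yet the configured thresholds do the discriminating on the filter's behalf.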

Alessandro Vespignani tells the story[3] of a researcher at MIT who experienced on her own skin (literally!) how discriminatory an algorithm can be. She realized that when she sat in front of a facial recognition camera, the computer did not recognize her face, and she had to ask a colleague for help; he was recognized easily. The only difference was the color of their skin: she was African American, he was white. So she decided to conduct a study on facial recognition systems. When the subject was a male face, errors were rare; when it came to identifying the faces of women of color, the algorithm made many more errors. This is because "most of the neural networks that classify images are trained thanks to ImageNet, an immense archive of over 14 million pictures labeled using multiple words or entire sentences, with a significant defect: 45 percent of the data comes from the United States, where only 4 percent of the world population lives, while China and India contribute only 3 percent of the archive."[4]
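The study described above is, in essence, an audit of error rates broken down by demographic group. The sketch below, with entirely made-up numbers and group labels, shows the idea: a single aggregate accuracy figure can look acceptable while the per-group breakdown reveals that the system fails far more often for one group than for another.

```python
# Minimal sketch (entirely hypothetical numbers): auditing a classifier's error
# rate per demographic group, in the spirit of the study described above.

from collections import defaultdict

# (group, was_the_prediction_correct) -- placeholder audit log, not real data
audit_log = [
    ("group X", True), ("group X", True), ("group X", True), ("group X", False),
    ("group Y", True), ("group Y", False), ("group Y", False), ("group Y", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in audit_log:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
# Overall accuracy is 50%, but one group sees a 25% error rate and the other 75%:
# the average hides exactly the disparity the researcher uncovered.
```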

Entrusting the selection process to these systems thus entails risks of which companies must be aware.



[2] A. Aloisi, V. De Stefano, Il tuo capo è un algoritmo. Contro il lavoro disumano, Bari, Laterza, 2020, p. 53.

[3] A. Vespignani, L’algoritmo e l’oracolo, Milan, Il Saggiatore, 2019.

[4] Ibid., p. 103 ff.
