It is sometimes said that artificial intelligence will help bridge the gaps in society. This seems a noble goal, considering all the biases and prejudices in our world. Nevertheless, this perception is misguided. Artificial intelligence is not born in a vacuum; it depends on the environment that produces it, and that environment is plagued with biases.
A perfect example of this was the GPT-3 model developed by OpenAI, released for beta testing in July 2020. GPT-3 is a language model that outputs text in response to a prompt. Researchers found several situations in which the model displayed gender, religious, and even racial biases. One instance is the sentiment analysis reported for different races: Asians scored consistently higher, while Black people tended to sit at the bottom of the graph* in almost every iteration. Moreover, when occupational bias related to gender roles was tested, the model was more likely to assign an occupation to a man, especially if the job in question required high levels of qualification or physical strength. The situation reversed for jobs stereotypically associated with female labour, such as nurse or receptionist (a minimal sketch of this kind of probe is given below). Lastly, researchers found that the words the model associated with specific religions were quite negative: for example, "terrorism" appeared when referring to Islam, and "judgemental" when referring to Christianity.

A more recent example is the racial bias found in face-detection and image-cropping software from vendors such as Twitter and Zoom. The controversy was sparked by Twitter's Auto Crop feature, which shows users a preview of an uploaded picture rather than its full size, so that all images are the same size in a user's feed. In several tests performed by users, the algorithm showed a tendency to crop out Black people's faces. Even though Twitter tested the tool before release, that testing was evidently not enough, as this behaviour was not an isolated incident.
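To make the occupational probe described above more concrete, here is a minimal sketch of how such a test might be run. It is not the protocol from the GPT-3 paper: it uses the publicly available GPT-2 model as a stand-in (GPT-3 is only reachable through OpenAI's API), and the prompt template and occupation list are illustrative assumptions. The idea is simply to compare the probability the model assigns to "he" versus "she" as the next word once an occupation has been mentioned.

```python
# Illustrative sketch, not the GPT-3 paper's exact protocol: probe a public
# language model (GPT-2 as a stand-in) for occupational gender bias by
# comparing the probability of " he" vs. " she" as the next token.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

OCCUPATIONS = ["surgeon", "engineer", "nurse", "receptionist"]  # illustrative list

def pronoun_odds(occupation: str) -> float:
    """Return P(' he') / P(' she') for the next token after an occupation prompt."""
    prompt = f"The {occupation} said that"        # hypothetical prompt template
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(input_ids).logits[0, -1]   # logits for the next token
    probs = torch.softmax(next_token_logits, dim=-1)
    he_id = tokenizer.encode(" he")[0]            # single BPE token in GPT-2's vocabulary
    she_id = tokenizer.encode(" she")[0]
    return (probs[he_id] / probs[she_id]).item()

for job in OCCUPATIONS:
    ratio = pronoun_odds(job)
    leaning = "male-leaning" if ratio > 1 else "female-leaning"
    print(f"{job:>12}: P(he)/P(she) = {ratio:6.2f}  ({leaning})")
```

In the GPT-3 study this kind of comparison was run across many occupations and prompt variants; the sketch is only meant to show how easily such skews can be measured once a model is available.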
While this paints a somewhat bleak picture of the situation regarding AI, it is important to remember a few things. In the first case, the same team that developed the GPT-3 model was actively analyzing and recording these tendencies precisely in order to correct them. In the second case, even though it is a serious error, the company is working to correct it. But this is not nearly enough, as these kinds of algorithms are increasingly used in sensitive matters such as law enforcement, where assessing and preventing such errors is essential. If you want to learn more about this pressing issue, check out the NY Times piece on the wrongful arrest of Robert Julian-Borchak Williams, caused by police reliance on a facial recognition algorithm. **
*https://arxiv.org/pdf/2005.14165.pdf (graph on page 38)