Scientists developed an AI to 'detect' racists

A team of researchers at the University of Virginia has developed an AI system that attempts to identify and quantify the physiological signs associated with racial prejudice. In other words, they are building a wearable device that tries to identify when you have racist thoughts.

Up front: No. Machines cannot tell whether a person is racist, nor whether someone said or did something racist. And you certainly can't tell whether you're having racist thoughts simply by taking your heart rate or measuring your O2 saturation with an Apple Watch-style device.

Still, this is fascinating research that could pave the way to a better understanding of how unconscious prejudice and systemic racism fit together.

How does it work?

The current standard for identifying implicit racial prejudice uses the so-called implicit association test. Basically, you look at a series of pictures and words and try to associate them with “fair skin”, “dark skin”, “good” and “bad” as quickly as possible. You can try it out for yourself here on the Harvard website.
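To give a rough sense of what the test measures: its output boils down to reaction-time differences between the two pairings. Here's a deliberately simplified sketch of that idea; the real IAT uses a more involved standardized scoring procedure, and the reaction times below are invented for illustration:

```python
# Simplified illustration of the idea behind IAT scoring: compare mean reaction
# times when categories are paired one way versus the other. The real test uses
# a more involved standardized "D-score"; these reaction times are invented.
import statistics

# Reaction times in milliseconds for invented trials.
congruent_rt = [612, 580, 645, 601, 590]    # e.g., "fair skin" + "good" share a key
incongruent_rt = [748, 802, 731, 790, 766]  # e.g., "dark skin" + "good" share a key

diff_ms = statistics.mean(incongruent_rt) - statistics.mean(congruent_rt)
print(f"Mean reaction-time difference: {diff_ms:.0f} ms")
# A larger difference is read as a stronger implicit association.
```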

There are also studies that indicate that learned threat responses to outsiders can often be measured physiologically. In other words, some people physically respond to people who look different than them, and we can measure it when they do.

The UVA team combined these two ideas. They took a group of 76 volunteer students and had them take the implicit association test while measuring their physiological responses with a wearable device.

Finally, the core of the study: the team developed a machine learning system to evaluate the data and draw conclusions. Can identifying a particular combination of physiological responses really tell whether someone is experiencing, for lack of a better term, involuntary feelings of racism?
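In machine learning terms, that question amounts to a binary classification problem: physiological features in, a bias/no-bias label out. Here is a minimal, hypothetical sketch of that framing; it is not the team's actual pipeline, and the features and labels are invented for illustration:

```python
# Hypothetical sketch: framing implicit-bias prediction as binary classification.
# The features, labels, and model choice are invented for illustration; this is
# not the UVA team's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
n_participants = 76  # matches the study's sample size

# Toy per-participant features summarizing wearable signals recorded during
# the implicit association test (e.g., mean heart rate, heart-rate variability,
# skin conductance responses, mean O2 saturation).
X = rng.normal(size=(n_participants, 4))

# Toy labels: 1 if a participant's IAT score indicates implicit bias, else 0.
y = rng.integers(0, 2, size=n_participants)

# A simple classifier evaluated with cross-validation, since 76 samples is
# far too few for a single held-out test set to be reliable.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```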

The answer may be muddy.

According to the team’s research report:

Our machine learning and statistical analysis show that implicit bias can be predicted from physiological signals with an accuracy of 76.1%.

But that's not necessarily a decisive result. An accuracy of 76% is a low bar for success in any machine learning endeavor. And flashing pictures of differently colored cartoon faces is not a 1:1 analog for real-world interactions with people of different races.
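One reason a raw accuracy figure says little on its own: it only means something relative to a baseline. As a hypothetical illustration, if one class made up roughly two-thirds of the 76 participants, a model that always predicted that class would already score in the mid-60s; the class split below is invented:

```python
# Hypothetical sketch: why raw accuracy needs a baseline. If most participants
# fall in one class, always predicting that class already scores well, so 76.1%
# is a more modest lift than it sounds. The 49/27 split below is invented.
import numpy as np
from sklearn.dummy import DummyClassifier

y = np.array([1] * 49 + [0] * 27)  # invented class split over 76 participants
X = np.zeros((len(y), 1))          # features are irrelevant to this baseline

baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print(f"Majority-class baseline accuracy: {baseline.score(X, y):.3f}")  # ~0.645
```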

Quick take: Any notion the general public may have of a wand-style device for detecting racists should be dismissed immediately. The UVA team's important work has nothing to do with creating a wearable that pings you every time you or someone around you experiences their own implicit prejudices. It's more about understanding the relationship between the mental association of darker complexions with "badness" and the physiological manifestations that accompany it.

In this respect, this novel research has the potential to shed light on the unconscious thought processes that are behind radicalization and paranoia, for example. It also has the potential to finally demonstrate how racism can be the result of an unintended implicit bias on the part of people who may even believe themselves to be allies.

You don't have to feel as if you're racist to actually be racist, and this system could help researchers better understand and explain these concepts.

But it absolutely does not detect bias; it predicts it, and that's different. And it certainly can't tell whether someone is a racist. It sheds light on some of the physiological effects associated with implicit bias, much like a diagnostician might initially interpret a cough and a fever as symptoms associated with certain diseases, while more testing is needed to confirm a diagnosis. This AI doesn't detect racism or bias; it only indicates some of the side effects associated with them.

You can read the whole preprint paper here on arXiv.

Published on February 3, 2021 – 20:11 UTC
