The Stanford crew behind the BS gaydar AI says facial recognition can reveal political orientation

Stanford researcher Michal Kosinski, the PhD behind the infamous “Gaydar” AI, is back with another piece of pseudoscientific ridiculousness (his team swears it’s not phrenology). This time around, they’ve published a paper claiming that a simple facial recognition algorithm can determine a person’s political affiliation.

First things first, the paper is called “Facial Recognition Technology Can Expose Political Orientation From Naturalistic Facial Images.” You can read it here. Here’s a bit from the abstract:

Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ.

Second, these statements are false. Before we even discuss the paper, I want to make it clear that Kosinski and his team’s assertions have absolutely no merit here. Facial recognition technology cannot reveal a person’s political orientation.

[Related: The Stanford gaydar AI is hogwash]

For brevity’s sake, I’ll condense my objection into a single statement: I once knew someone who was liberal, and then they became conservative.

It’s not mind-blowing, but the point is that political orientation is a fluid concept. No two people “orient themselves” toward a particular political ideology in exactly the same way.

Some people don’t care about politics at all, others have no idea what they actually support, and still others believe they’re aligned with one party when, in their ignorance, they’re actually supporting the ideals of the other.

And since we know the human face is unable to reconfigure itself like the creature from “The Thing,” we know nobody suddenly gets a liberal face the moment they decide to stop supporting Donald Trump and start supporting Joe Biden.

This means the researchers are claiming either that liberals and conservatives express themselves differently, carry themselves differently, or groom themselves differently. Or they’re claiming you’re born liberal or conservative and there’s nothing you can do about it. Both claims are almost too stupid to entertain.

The study itself concedes that demographics (white people, for example, skew more conservative) and human-supplied labels were what drove the AI’s separation of people into groups.

In other words, the team starts out on the same undeniably false premise as many comedians: that there are only two types of people in the world.

According to the Stanford team, the AI can determine political affiliation with greater than 70% accuracy, which beats both chance (50%) and human prediction (approximately 55% accurate).
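To see why that number proves nothing on its own, here’s a minimal, purely hypothetical Python sketch (toy data and invented numbers, nothing to do with the paper’s actual pipeline): if the labels happen to correlate with a mundane demographic variable like age, a single dumb threshold on that variable alone clears roughly 70% accuracy, and no face was ever consulted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy population: the label leans on age and nothing else (an assumption
# made for illustration, not a finding).
age = rng.normal(45, 15, n)
p_conservative = 1 / (1 + np.exp(-(age - 45) / 15))  # older skews conservative
label = rng.random(n) < p_conservative

# "Classifier": a single threshold on age. No faces were consulted.
pred = age > 45
print(f"accuracy: {np.mean(pred == label):.2f}")  # ~0.68, well above 50% chance
```

Nothing political happened in there; the above-chance accuracy came entirely from the confound.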

Here’s an analogy for how you should interpret the Stanford team’s accuracy claims: I can predict with 100% accuracy how many lemons on a lemon tree contain aliens from another planet.

Because I’m the only person who can see the aliens in the lemons, I’m what you’d call a “database.” If you want to train an AI to see the aliens in the lemons, you’ll need to give your AI access to me.

I’d stand there next to your AI and point out all the lemons that contain aliens. The AI would take notes, chirp the AI equivalent of “mm hmm, mm hmm,” and start figuring out what it is about the lemons I point to that makes me believe there are aliens inside them.

Eventually, the AI would look at a new lemon tree and try to guess which lemons I would believe have aliens in them. If it were 70% accurate at guessing which lemons I would believe contain aliens, it would still be 0% accurate at determining which lemons actually have aliens in them. Because lemons don’t have aliens in them.
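If you want the lemon farce in code, here’s a hedged toy sketch (synthetic data, made-up feature names like “bumpiness,” and scikit-learn standing in for the AI): my “alien” labels secretly track an ordinary visible feature, so the model agrees with me most of the time while detecting exactly zero aliens.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2_000
bumpiness = rng.normal(size=n)   # a mundane, visible property of each lemon
glossiness = rng.normal(size=n)  # irrelevant noise
X = np.column_stack([bumpiness, glossiness])

# My "alien" labels secretly track bumpiness plus whim. No aliens exist.
y = (bumpiness + 0.8 * rng.normal(size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"agreement with the labeler: {model.score(X_te, y_te):.2f}")  # ~0.78
print("aliens actually detected: 0 (lemons contain no aliens)")
```

High agreement with the labeler, zero ability to detect the thing the labels claim to be about.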

That, dear readers, is what the Stanford team has done here and with its goofy Gaydar. They’ve taught an AI to draw insights that don’t exist, because (and this is the important part): there is no scientifically definable, measurable attribute for political party. Or queerness. You can’t measure liberalness or conservativeness because, like gayness, there is no definable threshold.

Let’s tackle gayness first so you can understand how stupid it is to say that a person’s facial makeup or expression can determine such intimate details about their core being.

  1. If you’ve never had sex with a member of the same sex, are you gay? There are “straight” people who have never had sex.
  2. If you aren’t romantically attracted to members of the same sex, are you gay? There are “straight” people who have never been romantically attracted to members of the opposite sex.
  3. If you used to be gay but stopped, are you straight or gay?
  4. If you used to be straight but stopped, are you straight or gay?
  5. Who is the governing body that determines whether you are straight or gay?
  6. If you have romantic relationships and sex with members of the same sex but tell people you’re straight, are you gay or straight?
  7. What about people who are bisexual, asexual, pansexual, demisexual, gay-for-pay, heterosexual, or just generally confused? Who gets to tell them whether they’re gay or straight?

As you can see, queerness isn’t a quantifiable commodity like “energy” or “the number of apples on that table over there.”

The Stanford team used “ground truth” as its measure of gayness by comparing pictures of people who said “I am gay” against pictures of people who said “I am straight,” and then fiddled with the AI’s parameters (like tuning the dial on an old radio) until it achieved the highest possible accuracy.
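For the curious, that dial-fiddling workflow looks roughly like this (a generic sketch with stand-in data and a stock classifier, assumed for illustration, not the team’s actual code): sweep the settings and keep whichever one scores highest against the self-reported labels.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in data playing the role of face features and self-reported labels.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Tune the radio dial": try a grid of settings, keep whichever scores best.
search = GridSearchCV(
    LogisticRegression(max_iter=1_000),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},
    cv=5,
)
search.fit(X_tr, y_tr)
print(search.best_params_, f"accuracy vs. labels: {search.score(X_te, y_te):.2f}")
```

Whatever number falls out of that loop only ever measures agreement with the “I am gay” / “I am straight” labels, never any underlying trait.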

Think of it this way: I show you a sheet of portraits and say, “Point to the ones who like World of Warcraft.” When you’re done, if your guesses were no better than pure chance or the person sitting next to you, I say, “Nope, try again.”

This goes on for thousands and thousands of attempts until one day I exclaim “Eureka!” when you finally get it right.

You haven’t learned how to tell World of Warcraft players apart from their portraits; you’ve just learned to get that one sheet right. When the next sheet comes along, you have a literal 50/50 chance of correctly guessing whether the person in any given portrait is a WoW player.
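Here’s that failure mode in a few lines of Python (toy data and a stock scikit-learn model, purely illustrative): a flexible model can “get the sheet right” by memorizing it, then drops to a coin flip on portraits it has never seen.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(500, 10))  # the "sheet" of portraits you study
X_test = rng.normal(size=(500, 10))   # a brand-new sheet
y_train = rng.integers(0, 2, 500)     # arbitrary "likes WoW" labels
y_test = rng.integers(0, 2, 500)

# An unconstrained tree memorizes the training sheet perfectly...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"the sheet it studied: {tree.score(X_train, y_train):.2f}")  # 1.00
print(f"a brand-new sheet:    {tree.score(X_test, y_test):.2f}")    # ~0.50
```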

The Stanford team can’t define queerness or political orientation the way it can define, say, a cat. You can say “this is a cat and that is a dog” because we can objectively define the nature of what a cat is. The only way to determine whether someone is gay, straight, liberal, or conservative is to ask them. Otherwise you’re merely observing how they look and act and deciding whether you believe they’re liberal or queer or whatever.

The Stanford team is asking an AI to do something no human can do: predict a person’s political affiliation or sexual orientation based on their appearance.

The bottom line here is that these silly little systems use basic algorithms and neural network technology from half a decade ago. They’re not smart; they’re literally repurposing the same technology used to determine whether or not something is a hot dog.

There is no positive use case for this.

Worse, the authors appear to be drinking their own Kool-Aid. They admit their work is dangerous, but they don’t seem to understand why. In this TechCrunch article, Kosinski says (referring to the Gaydar study):

We were really disturbed by these results and spent a lot of time considering whether they should be made public at all. We didn’t want to enable the very risks we were warning against. The ability to control when and to whom one’s sexual orientation is revealed is crucial not only for one’s well-being, but also for one’s safety.

We felt there was an urgent need to make policymakers and LGBTQ communities aware of the risks they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

No, the results aren’t scary because they can out queer people. They’re dangerous because they could be abused by people who believe they can. Predictive policing isn’t dangerous because it works; it’s dangerous because it doesn’t: it merely launders historical policing patterns. And this latest goofy AI development from the Stanford team isn’t dangerous because it can determine your political affiliation. It’s dangerous because people might believe it can, and there is no good use for a system designed to invade a person’s ideological privacy, whether it works or not.

Published on January 14, 2021 – 20:41 UTC
