Algorithms behaving badly: 2020 edition

The dangers of leaving important decisions to computer algorithms are fairly easy to imagine (see e.g. “Minority Report”, “I, Robot”, “War Games”). In recent years, however, the job descriptions of algorithms have only grown.

Algorithms increasingly replace people in making tough decisions that companies and government agencies say are better left to statistics and formulas than to the jumbled computations of a human brain. Some health insurers use algorithms to determine who receives medical care, and in what order, rather than leaving that choice to doctors. Colleges use them to decide which applicants to accept. And prototypes of self-driving cars use them to weigh how to minimize the damage in a crash.

Part of this outsourcing to computers stems from high hopes – that algorithms might, for example, make the lending process fairer or help researchers develop a safe COVID-19 vaccine in record time.

But it has been proven time and time again that formulas inherit the prejudices of their creators. An algorithm is only as good as the data and the rules that train it, and people are ultimately responsible for what gets fed into it.

Each year brings countless new examples of algorithms that were designed for a cynical purpose, that exacerbated racism, or that spectacularly failed to fix the problems they were built to solve. We know about most of them only because whistleblowers, journalists, lawyers, and academics took the time to pry open the black box of computer-aided decision making and bring what they found to light.

Here are some big ones from 2020.

The racism problem

Many algorithmic decision-making systems carry bias, but some cases are more blatant than others. The Markup reported that Google’s ad-buying portal linked the keywords “black girls,” “asian girls,” and “latina girls” (but not “white girls”) to porn. (Google blocked the automated suggestions after The Markup contacted the company. Meanwhile, Google’s search algorithm briefly sent our story to the first page of search results for the word “porn.”)

Sometimes the consequences of such bias can be severe.

Some medical algorithms build racial bias in by design. An article in the New England Journal of Medicine identified 13 examples of “race corrections” built into tools that doctors use to determine who receives certain medical procedures, such as heart surgery, antibiotics for urinary tract infections, and breast cancer screenings. The tools assume that patients of different races are at different risk for certain conditions – assumptions that researchers say are not always well founded. One result, as reported by Consumer Reports: a Black man in need of a kidney transplant was classified as ineligible.

A related problem surfaced in a lawsuit against the National Football League: Black players say it is much harder for them to get compensation for concussion-related dementia because of how the league assesses neurocognitive function. Essentially, they argue, the league assumes Black players start from a lower baseline of cognitive function than white players and weighs their eligibility for payouts accordingly.

Algorithms that make life difficult for tenants and people on lower incomes

If you’ve ever rented a home – and the odds that you have are higher now that renting has surged since the 2008 financial crisis – a landlord likely ran you through a tenant screening service. Whatever results the background-check algorithms spit out generally make the difference between getting the home in question and being turned away – and according to The Markup, those reports are often flawed. The computer-generated reports mix up identities, mistake minor run-ins with law enforcement for criminal records, and misreport evictions. And what little oversight exists usually comes too late for those wrongly rejected.

Similarly, according to MIT Technology Review, attorneys who work with low-income people keep running into inscrutable, unaccountable algorithms developed by private companies that decide, for example, which children are placed in foster care, who is assigned Medicaid services, and who gets access to unemployment benefits.

Police work and persecution

The idea of predicting crimes before they happen has a lingering appeal, even after police departments have repeatedly discovered problems with their data-driven models.

Case in point: the Pasco County sheriff’s office in Florida, which, the Tampa Bay Times reported, routinely monitored and harassed people it had identified as potential criminals. The department “sends deputies to find and interrogate anyone whose name appears” on a list built from “arrest histories, unspecified intelligence and arbitrary decisions by police analysts,” the newspaper reported. Deputies showed up at people’s homes in the middle of the night to conduct searches and wrote tickets for petty issues like missing mailbox numbers. Many of those targeted were minors. The sheriff’s office responded that the newspaper had cherry-picked examples and mischaracterized legitimate police tactics as harassment.

Facial recognition software, another algorithmic tool used in policing, led to the wrongful arrest and detention of a Detroit man for a crime he did not commit, The New York Times reported in an article that also examined the technology’s privacy, accuracy, and racial-bias problems.

And in a particularly frightening development, The Washington Post reported that the Chinese technology company Huawei has tested tools that can scan faces in crowds for ethnic characteristics and send “Uighur alerts” to authorities. The Chinese government has detained members of the Muslim minority en masse in internment camps, and the persecution appears to be growing. Huawei USA spokesman Glenn Schloss told the Post the tool “is simply a test and it has not seen real-world application.”

Workplace surveillance

Large employers are turning to algorithms to keep tabs on their workers. This year, Microsoft apologized after rolling out a Microsoft 365 feature that let managers monitor and analyze the “productivity” of their employees. The productivity score factored in, for example, a person’s participation in group chats and the number of emails sent.

Meanwhile, Business Insider reported that Whole Foods used heat maps – which weighed factors such as the number of employee complaints and the local unemployment rate – to predict which stores might attempt to unionize. Whole Foods is owned by Amazon, which has a sophisticated apparatus of its own for monitoring worker behavior.

Revenge of the students

Anyone looking for inspiration to fight back against an algorithm-dominated future could turn to the students in the U.K. who took to the streets after the education system decided, during the pandemic, to award grades with an algorithm based in part on schools’ previous performance. Or the kids who discovered that their tests were being scored by an algorithm – and promptly figured out how to game it by essentially stringing together a series of keywords.

This article was originally published on The Markup and republished under the Creative Commons Attribution-NonCommercial-NoDerivatives License.

Published on December 22nd, 2020 – 13:00 UTC
