YouTube has been slower to deplatform Trump than Facebook, Twitter

Susan Wojcicki, CEO of YouTube.

Michael Newberg | CNBC

When it comes to social media and President Trump, one company’s actions have stood out: YouTube.

On Wednesday, January 6, President Trump delivered a speech that some supporters took as a call to action, and a violent uprising followed at the U.S. Capitol. The next day, Facebook announced the unprecedented move of preventing Trump from posting at least until his term ends on January 20, and possibly longer. Not long after, Snapchat followed suit with a temporary suspension that was later made permanent. Twitter went further on Friday, permanently banning Trump's account.

It wasn't until the following Tuesday that Google's YouTube announced that Trump would be temporarily suspended for a week – not because of any new rule, but because he violated its policy against inciting violence and thereby earned a strike under the company's three-strikes system. Trump's account will remain online but will be unable to upload new content until Tuesday, January 19, the day before Joe Biden takes office as president.

Trump's YouTube channel page, meanwhile, still automatically plays a 46-minute video containing false allegations of election fraud. It has been up for a month and had nearly 6 million views as of Friday. (YouTube said it left the video up because it was uploaded before the safe harbor deadline, and it appears alongside an election results information box.)

"YouTube is kind of an outlier in that it currently stands apart from the other social networks that are making aggressive calls," said Irina Raicu, program director for internet ethics at Santa Clara University.

Not a new approach

YouTube's measured approach isn't new. Numerous reports have shown how slow YouTube was to rein in misinformation after the 2020 election.

In October, Facebook banned all accounts related to the false conspiracy theory QAnon, which spread voter misinformation and where plans for Wednesday's events were discussed months in advance. In response, YouTube issued a carefully worded policy that effectively banned some QAnon content but stopped short of banning it outright, citing gray areas it classifies as "borderline" content.

Some videos spreading misinformation and inciting violence after Election Day continued to carry ads, meaning their creators were making money through the site, in some cases until a reporter notified the company. A month after the election, YouTube announced it would remove content falsely claiming widespread fraud in the 2020 presidential election, noting that the safe harbor deadline had passed and several states had already certified their results.

It's not clear why YouTube moves more slowly and deliberately than its competitors when it comes to violations.

One possibility is that YouTube and outsiders – like researchers and journalists – simply find it harder to sift through video content to find violations. Another is that, while most social media networks are primarily accountable to advertisers, YouTube also has a close financial partnership with its creators. The company says the number of creators earning more than $100,000 a year has grown 40% over the past year, and that it has paid out more than $2 billion over the past five years to owners of copyrighted content. Being too quick to remove material could alienate these creators and create a variety of advertising headaches.

Is consistency the right approach?

Alphabet CEO Sundar Pichai defended the company's actions on Thursday when Reuters editor-in-chief Stephen J. Adler asked whether its moves to curtail Trump's account were too little, too late.

"If we find content that is violative, there is a warning and a three-strikes process, and it depends on when it applies," Pichai replied. "We make these decisions to be consistent, and to be clear and transparent about how we do it."

Some experts praised the company for sticking to its guidelines, while others saw a need for more aggressive action.

"It's interesting to hear them talk about strikes and regular rules when the other companies have acknowledged that these are unprecedented times and that, given the violence that unfolded, they need to do something more aggressive," said Raicu. "I think YouTube would argue that it is being fairer, but fairness also requires treating people in similar situations similarly – and we are not in a similar situation," she added.

Writing on Twitter, Joan Donovan, research director at Harvard Kennedy School's Shorenstein Center on Media, characterized YouTube's action as a half measure.

John Redgrave, CEO of abuse-detection software company Sentropy, said he viewed YouTube's approach as a way to avoid allegations of bias. "I think with more aggressive remedial measures come a lot of people wondering: if this is your answer, why not shut off others who do the same thing?"

Nevertheless, he considers YouTube's approach too lax, pointing to its responsibility for users' safety. "You need to do something proportionate to the outcomes – and scrutinize things when a person has a million-plus followers. Three strikes before a ban for something like this is too many."

Harvard law professor Evelyn Douek, who has been vocally critical of YouTube, took the opposite stance, saying that the company's adherence to its own guidelines should count for something, since outright bans can create problems of their own.

"Hold on to your hats, but I think YouTube has – at least so far – handled the big deplatforming well," Douek tweeted earlier this week. "It removed a video that violated a clearly (albeit belatedly) established rule against allegations of election fraud, and it did not remove the entire channel just because everyone else is doing it."

The announcement underscores that the decision "is not at all about how it is perceived and only about ordinary application of the rules," Douek added.

YouTube defended its approach, saying it enforces its guidelines consistently and does not make exceptions for world leaders or anyone else.