According to Leslie Miller, YouTube vice president of government affairs and public policy, the platform will remove politically misleading videos. Videos that are technically manipulated or edited to mislead viewers may pose a “serious risk of egregious harm.”
Policies against malicious manipulation and deceptive editing are not new, but YouTube reiterated them in hopes of stopping misinformation. This was part of the site’s efforts to support well-informed voting in the upcoming 2020 election.
Last year, the company took down an altered video of House Speaker Nancy Pelosi speaking at an event. The speech, given at an event held by the Center for American Progress, was modified to make her sound intoxicated.
President Donald Trump shared one of the clips on Twitter with the caption in all caps: “PELOSI STAMMERS THROUGH NEWS CONFERENCE.” The clip was edited to repeat her words, making her appear confused and ill.
Similar videos circulated on Facebook as well, but neither platform took them down. One version of the clips was viewed 2 million times and shared more than 45,000 times within two days.
Twitter didn’t respond to questions and has no published policy prohibiting manipulated media on its platform. Facebook, a much larger platform, had already declined to remove the false videos.
Amid the stir over the video, a YouTube spokesperson reassured the public that the video didn’t “surface prominently.” Search results and watch-next panels about Pelosi instead drew from authoritative sources.
YouTube and Harassment
YouTube still has its share of controversies, including algorithms that aid harassers and misleading creators alike.
As most of its audience knows, the site offers recommendations based on how long people watch a video. This allows creators to profit greatly from misleading headlines and thumbnails.
This algorithm guides more than 70 percent of what people watch on the site.
This addictive feature gives outsized reach to channels that rely on clickbait and exaggerated titles. The algorithm eventually promoted channels specializing in these tactics, drawing widespread criticism of the platform.
In June 2019, YouTube updated its policies to curb harassment on the platform, allowing it to remove content promoting hateful, supremacist ideologies.
Critics and fans stepped up to suggest possible solutions for the platform to perform better in terms of harassment.
Some recommend holding content creators accountable for actions their followers take in response to what they suggest on their channels. Doing so could improve creators’ overall behavior, since it forces them to weigh both intent and outcome.
Others suggest the platform should apply different standards to creators depending on how many followers they have. Those with larger followings should be held to higher standards than those with fewer.
YouTube’s partner program sets standards for how users can earn revenue, but the rules remain vague.
Critics have also urged top executives to be more transparent about their judgments. Hate speech still spreads across the platform like a plague, yet the company hasn’t published clear policies against it.
The June 2019 policy change also resulted in the removal of documentaries about white supremacy that served journalistic or academic purposes.