Over 80 fact-checking organisations have come together to list four simple ways YouTube could combat the rampant spread of misinformation on its platform, if it felt so inclined.
In an open letter to YouTube, the group accused the platform of enabling bad actors. “YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves,” the letter stated. It was signed by dozens of fact-checking organisations from across the globe, including several based in the U.S.
While COVID-19 misinformation is the most immediately obvious issue, The International Fact-Checking Network noted that YouTube has hosted a range of other medical misinformation as well.
“The examples are too many to count,” The International Fact-Checking Network wrote. “We are glad that the company has made some moves to try to address this problem lately, but based on what we see daily on the platform, we think these efforts are not working — nor has YouTube produced any quality data to prove their effectiveness.”
Last September, YouTube announced an update to its medical misinformation policy.
YouTube declined to comment on whether it would be taking up The International Fact-Checking Network’s invitation to collaborate, but said in a statement to Mashable that it considers the situation to have “more nuance” than simply requiring more fact checking.
“Fact checking is a crucial tool to help viewers make their own informed decisions, but it’s one piece of a much larger puzzle to address the spread of misinformation,” YouTube spokesperson Elena Hernandez said in a statement to Mashable.
Hoping to stanch the tide of misinformation, The International Fact-Checking Network’s letter offered YouTube four simple suggestions for how it could stop facilitating its spread:
- Commit to “meaningful transparency” on misinformation by supporting independent research and publishing its full misinformation moderation policy, including the data powering its moderation algorithm.
- Invest in independent fact-checking, while prominently debunking misinformation and providing context, either superimposed on misleading videos or as extra video content.
- Prevent YouTube’s algorithm from recommending videos by creators whose content is repeatedly flagged as disinformation, particularly where they monetise that content.
- Expand its efforts to combat misinformation in languages other than English, and provide country-specific data. The International Fact-Checking Network noted that misinformation on YouTube flies under the radar particularly in non-English-speaking countries.
YouTube told Mashable it currently works with international publishers to add third-party context to videos in some countries.
Of course, The International Fact-Checking Network’s point is that YouTube’s current policies historically haven’t appeared terribly effective, and that more needs to be done. It also probably shouldn’t be recommending “borderline misinformation” at all.
“Over the years, we’ve invested heavily in policies and products in all countries we operate to connect people to authoritative content, reduce the spread of borderline misinformation, and remove violative videos,” said Hernandez. “We’ve seen important progress, with keeping consumption of recommended borderline misinformation significantly below 1% of all views on YouTube, and only about 0.11% of all views are of violative content that we later remove. We’re always looking for meaningful ways to improve and will continue to strengthen our work with the fact checking community.”
However, these seemingly small percentages add up to a lot when you consider that YouTube has over 2 billion monthly logged-in users.
“And every day, people watch over a billion hours of video and generate billions of views,” according to YouTube’s own published statistics.
That means videos with misinformation get tens of millions of views every day, no matter how YouTube tries to spin it.
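The scale here is easy to check with rough arithmetic. The sketch below is a hypothetical back-of-envelope calculation, not a figure from YouTube: the 5-billion-daily-views number is an assumption chosen for illustration, loosely inferred from "over a billion hours" watched per day, while the percentages come from Hernandez's statement above.

```python
# Back-of-envelope check of the scale implied by YouTube's stated percentages.
# ASSUMPTION (not a YouTube figure): roughly 5 billion video views per day,
# loosely inferred from "over a billion hours" of daily watch time.
DAILY_VIEWS = 5_000_000_000

violative_share = 0.0011  # 0.11% of views, per YouTube's statement
borderline_cap = 0.01     # "significantly below 1%" -- treat 1% as an upper bound

violative_views = DAILY_VIEWS * violative_share
borderline_upper = DAILY_VIEWS * borderline_cap

print(f"Violative views per day: about {violative_views:,.0f}")
print(f"Borderline-misinformation views per day (upper bound): {borderline_upper:,.0f}")
```

Even under this conservative assumption, 0.11% works out to millions of violative views daily, and the "below 1%" borderline category allows for tens of millions, which is the arithmetic behind the article's closing point.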