A study found that crowdsourced groups can stack up to pro fact-checkers.

Between the anti-vaxxers taking horse medicine to fight COVID-19 and Trump supporters still pushing The Big Lie, the fight to curb online misinformation may feel hopeless at times.

A new study by researchers at MIT may be a small ray of light we can really use right now: Many people actually do have a pretty good bullshit detector when it comes to online misinformation.

The study, titled “Scaling up Fact-Checking Using the Wisdom of Crowds,” found that accuracy ratings crowdsourced from regular, everyday news readers stack up to the work of professional fact-checkers.

Or, as MIT is putting it, crowds can “wise up” to fake news.

For the study, MIT researchers hired 1,128 U.S. residents using Amazon’s Mechanical Turk, which is the ecommerce giant’s marketplace platform where users can hire online gig workers for odd jobs and menial tasks.

Researchers then presented each participant with 20 headlines and lead sentences drawn from a set of 207 news articles that Facebook’s algorithm had flagged for fact-checking. Participants answered a series of questions used to build an accuracy score for each story, rating the extent to which the news item was “accurate,” “true,” “reliable,” “trustworthy,” “objective,” “unbiased,” and “describing an event that actually happened.”
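
To make that aggregation concrete, here is a minimal sketch of how a composite accuracy score could be built by averaging a reader's answers across the seven dimensions. The 1-7 rating scale, the dimension names as Python keys, and the `accuracy_score` helper are all assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: combine one reader's seven dimension ratings
# (assumed here to be on a 1-7 Likert scale; the paper's exact scales may
# differ) into a single accuracy score by simple averaging.
DIMENSIONS = [
    "accurate", "true", "reliable", "trustworthy",
    "objective", "unbiased", "actually_happened",
]

def accuracy_score(ratings: dict[str, int]) -> float:
    """Average a reader's ratings across all seven dimensions."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# One hypothetical reader's ratings for a single headline.
reader_ratings = {
    "accurate": 5, "true": 5, "reliable": 4, "trustworthy": 4,
    "objective": 3, "unbiased": 3, "actually_happened": 6,
}
print(accuracy_score(reader_ratings))  # ≈ 4.29
```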

The stories were picked out by Facebook for a variety of reasons. Some were flagged for possible misinformation; others popped up on the radar because they were receiving lots of shares or were about sensitive health topics.

Researchers also gave the same flagged stories to three professional fact-checkers.

The pro fact-checkers didn’t even always align with each other. All three agreed on the accuracy of a news story in 49 percent of cases; two of the three agreed in around 42 percent of cases; and in the remaining 9 percent, all three disagreed on their ratings.

However, the study found that when the researchers broke the regular readers into groups of 12 to 20 and balanced each group to contain an even number of Democrats and Republicans, the laypeople’s average accuracy ratings correlated well with the fact-checkers’.
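
The headline result can be pictured with a little arithmetic: average each balanced group’s scores per story, then correlate those averages with the fact-checkers’ mean ratings. The snippet below is a hedged sketch of that comparison; the per-story numbers are invented for illustration and are not the study’s data.

```python
# Hedged sketch: correlate a balanced crowd's mean story ratings with
# professional fact-checkers' mean ratings. All numbers are invented
# for illustration; they are not the study's data.
from statistics import mean, correlation  # correlation requires Python 3.10+

# Per-story ratings (1-7 scale assumed) from a politically balanced crowd.
crowd_ratings = {
    "story_a": [6, 5, 6, 7, 5, 6],
    "story_b": [2, 3, 2, 1, 3, 2],
    "story_c": [4, 5, 4, 4, 5, 4],
}
# Mean ratings for the same stories from three professional fact-checkers.
checker_means = {"story_a": 6.3, "story_b": 1.7, "story_c": 4.7}

stories = list(crowd_ratings)
crowd_means = [mean(crowd_ratings[s]) for s in stories]
checkers = [checker_means[s] for s in stories]

# Pearson correlation between crowd averages and fact-checker averages.
print(correlation(crowd_means, checkers))
```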

“One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover,” says Jennifer Allen, a PhD student at the MIT Sloan School of Management and co-author of the paper detailing the study. “The average rating of a crowd of 10 to 15 people correlated as well with the fact-checkers’ judgments as the fact-checkers correlated with each other. This helps with the scalability problem because these raters were regular people without fact-checking training, and they just read the headlines and lead sentences without spending the time to do any research.”

“We found it to be encouraging,” she said.

According to the study, the estimated cost of having readers evaluate news in this manner is around $0.90 per story.

People who took part in the study also completed “a political knowledge test and a test of their tendency to think analytically.” Overall, the ratings of people who scored well on those tests, meaning they were better informed about civic issues and tended to think more analytically, aligned more closely with the fact-checkers’.

Mainstream social media platforms have recently dabbled in crowdsourced fact-checking. Twitter, for example, launched Birdwatch at the beginning of the year. The program allows users to add contextual information to tweets that could be misleading or potentially spread misinformation.

The study is positive news in the sense that everyday news readers appear to be able to, mostly, suss out misinformation. However, at scale, one would certainly have to consider bad actors deliberately trying to validate or spread misleading information.

“There’s no one thing that solves the problem of false news online,” says David Rand, an MIT Sloan professor and senior co-author of the study. “But we’re working to add promising approaches to the anti-misinformation tool kit.”
