Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University study finds that fact-checking and then tagging inaccurate news stories on social media doesn’t work.
The study, reported for the first time by POLITICO, found that tagging false news stories as “disputed by third party fact-checkers” has only a small impact on whether readers perceive their headlines as true. Overall, the existence of “disputed” tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.
The researchers also found that, for some groups—particularly Trump supporters and adults under 26—flagging bogus stories could actually end up increasing the likelihood that users will believe fake news.
That’s because the sheer volume of misinformation that floods the social media network makes it impossible for the fact-checking groups Facebook has partnered with—like Politifact, FactCheck.org and Snopes.com—to address every story. The existence of flags on some—but not all—false stories made Trump supporters and young people more likely to believe any story that was not flagged, according to the study published Monday by psychologists David Rand and Gordon Pennycook.
“I think these results make it unclear whether it’s even net positive,” Rand said of the program.
Overall, Rand said that the study showed that providing fact-check labels on Facebook does not strongly move the needle either way. “All of these effects are tiny. Even to the extent it’s doing anything, it’s a small effect,” Rand said. “It’s not nearly enough to solve this problem.”
Presented with the study, a Facebook spokesperson questioned the researchers’ methodology—pointing out that the study was performed via Internet survey, not on Facebook’s platform—and added that fact-checking is just one part of the company’s efforts to combat fake news. Those include “disrupting financial incentives for spammers, building new products and helping people make more informed choices about the news they read, trust and share,” the spokesperson said.
The proliferation of fake news on social media — including deliberately false stories aimed at discrediting political candidates — raised widespread concerns during the 2016 presidential campaign. The stories seemed to establish social media as a powerful new tool for disinformation, capable of undermining political discourse and even triggering real-world violence. That point was brought home when a North Carolina man fired a gun in Washington’s Comet Ping-Pong pizzeria, believing false stories that it was the site of a child-prostitution ring run by Hillary Clinton.
Having outside fact-checkers vet the stories appeared to be a promising solution because it freed social-media organizations from having to decide for themselves whether a story is fake — a responsibility they are wary of assuming, and one that raises fears of censorship.
Facebook has said its efforts to reduce false news are working, but declined to provide any underlying data. The social media giant’s refusal to share information has frustrated some fact-checkers. “I’m hoping Facebook will see this study and determine that it is even more appropriate for them to share data as to how this is actually going,” said Alexios Mantzarlis, the director of the Poynter Institute’s International Fact-Checking Network.
Rand also said that he wished Facebook would share more data, but added that it’s not clear whether the company has information that could address precisely what he and Pennycook studied. Facebook knows who shares and clicks on stories, but Rand said the value of his and Pennycook’s work is that it assessed whether people actually believe the false headlines.
To conduct the study, which involved more than 7,500 people, the researchers presented participants in a control group with 24 headlines in random order, 12 of them true and 12 of them false. The psychologists asked participants to rate the accuracy of the headlines, which were all pulled from stories that were posted on Facebook in 2016 or 2017. In this control group, participants correctly judged real news stories as accurate 59.2 percent of the time, while they incorrectly believed false stories 18.5 percent of the time.
The psychologists then repeated the experiment with additional groups, except this time they flagged six of the 12 fake news stories as “disputed.”
In the initial control group, supporters of President Donald Trump believed that 18.5 percent of false headlines and 58.3 percent of real stories were accurate. The existence of flags made them 2.9 percentage points more likely to correctly judge a “disputed” story as false, but also 1.8 percentage points more likely to think an unflagged fake news story was true. They became 1.2 percentage points more likely to correctly judge real stories as accurate.
While it may appear from those numbers that the flags help at least a little, Rand said that he’s concerned that the volume of unflagged fake news is so high that the negative impact from those stories overwhelms any benefit. “It’s so much easier to produce fake news than it is to actually track down what’s fake or not,” Rand said.
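Rand’s concern can be made concrete with a little arithmetic. The sketch below is purely illustrative and is not part of the study; it combines the percentage-point shifts reported above for Trump supporters with an assumed “coverage” fraction, i.e., the hypothetical share of fake stories that fact-checkers manage to flag:

```python
# Illustrative sketch (not from the study): net effect of "disputed" flags
# on overall belief in fake news, given the percentage-point shifts
# reported for Trump supporters. flag_coverage is a hypothetical
# assumption: the fraction of fake stories that actually get flagged.

def net_belief_shift(benefit_pp, backfire_pp, flag_coverage):
    """Net change in belief in fake news, in percentage points.

    Negative means readers believe less fake news overall;
    positive means the backfire on unflagged stories dominates.
    """
    return -benefit_pp * flag_coverage + backfire_pp * (1 - flag_coverage)

# Reported shifts: flags made Trump supporters 2.9 pp more likely to
# reject a flagged fake story, but 1.8 pp more likely to believe an
# unflagged one.
for coverage in (0.5, 0.25, 0.1):
    shift = net_belief_shift(2.9, 1.8, coverage)
    print(f"coverage={coverage:.0%}: net shift {shift:+.2f} pp")
```

In the experiment itself, half of the fake stories (six of 12) carried flags. Rand’s point is that real-world coverage is far lower than that, so as the coverage fraction shrinks, the backfire term comes to dominate and the net effect can turn harmful.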
For Clinton supporters, who started at a baseline of 18.5 percent on false news and 60 percent on true news, adding flags made them 4.3 percentage points more likely to correctly identify fake news, with a negligible backfire. They were 2.3 percentage points more likely to correctly judge accurate news as real.
Part of the reason for the discrepancy between Trump and Clinton supporters may come from attitudes toward media, Rand said. As part of the study, Rand and Pennycook asked participants how much they trusted third-party fact-checkers. On a scale of 1 to 5, with 5 indicating the most trust, Clinton supporters came in at 3.1, and Trump supporters at 2.4.
But the findings that most surprised Rand and Pennycook were those for people 18 to 25 years old. The researchers did not even break their results out by age until they pumped their data through a machine learning algorithm and it picked up on the pattern.
In the control with no flags, the 18- to 25-year-olds believed 21.1 percent of the false headlines and 57.1 percent of the true stories. The existence of flags made young people 3.1 percentage points more likely to correctly identify flagged stories as false, but they also became 4.4 percentage points more likely to believe unflagged false headlines. They were 3.2 percentage points more likely to believe the real headlines.
“I’m not sure what explains this, but it means that this is a really big problem,” Rand said. “For the people who are most reliant on social media for their news, they are the ones who the warning doesn’t do anything and there’s a huge backfire.”
Rand guessed that the results could be related to declining trust in media among young people, but said he planned to study the issue further.
Rand and Pennycook also ran a version of the study where they displayed publications’ logos prominently next to the headlines, to simulate how Facebook recently began displaying large publisher logos near stories to heighten readers’ awareness of the source. The researchers found that the logos had no effect. “Slapping down a big logo also doesn’t do anything,” Rand said. “People don’t find mainstream media outlets particularly credible.”
In his statement, the Facebook spokesperson highlighted that the study was conducted using a survey website, not performed with users on Facebook interacting with the platform. “This is an opt-in study of people being paid to respond to survey questions; it is not real data from people using Facebook,” he said.
Rand acknowledged that not being able to conduct the study within the Facebook ecosystem was “a limitation,” though he added that participants in the study did not know its subject in advance, so could not have been overly self-selected. “There are numerous studies showing that respondents from the website we used—Amazon Mechanical Turk—are pretty representative for doing studies on political opinions,” he said.
The Facebook spokesman added that the articles created by the third-party fact-checkers have uses beyond creating the “disputed” tags. For instance, links to the fact checks appear in “related article” stacks beside other similar stories that Facebook’s software identifies as potentially false. They are “powering other systems that limit the spread of news hoaxes and misinformation,” the spokesperson said.
Mantzarlis, from Poynter’s International Fact-Checking Network, added that when a false story gets fact-checked, Facebook’s software limits its visibility, slowing its spread. Mantzarlis said he had several other questions about the study, which has not yet been peer-reviewed. For instance, the study did not allow participants to click through and read the whole fact-check articles—would being able to do so have changed the results?
He added, though, that he supported any analysis of the fact-checking program. “This is exactly the type of research we need,” he said. As one solution to the problems identified in the study, he suggested that Facebook could do more to explain to users that not all false stories receive flags.
As for the findings that “disputed” tags have only a small impact, he said, “What I stress to people who are looking at whether they want to do this type of journalism is that any percent is a good percent, any correction is worth the work. Would it have been better if this found that it had a larger effect for fact checking? Yeah, it would have been a more encouraging sign.”
“Because this is the first research on the topic,” he added, “I’m not yet ready to say this is not worth the time.”