
Editor’s Note: Jeremy Cone is an assistant professor of psychology at Williams College. Melissa J. Ferguson is a professor of psychology and senior associate dean of social sciences at Cornell. Kathryn Flaharty, laboratory manager at the Developmental Cognitive Neuroscience Lab at Georgetown University, contributed to the research. The opinions expressed in this commentary are their own.

Amid continued public outcry over the influence of fake news and misinformation, tech companies are scrambling to generate effective solutions. Just last month, Mark Zuckerberg testified about what Facebook was doing to address the safety of users’ private information, and there continue to be calls for social media companies to do more to curb the spread of misinformation.

Facebook’s approach has been to employ fact-checkers to help identify dubious content. Although questions remain about how fact-checkers can successfully identify misinformation, vetting the truth of online content is a critically important strategy for stemming the tide of misinformation.

Our new research shows that fact-checking prevents misinformation from shaping our thoughts, even our automatic and uncontrollable perceptions. When fact-checking calls out what isn’t credible, much of the damage that misinformation may have done to our perceptions is undone. Fact-checking works if done properly, and it needs the support of tech companies.

In a paper we recently published, we focused on how new information about an individual affects people’s opinions and feelings about that individual. Across seven experiments with over 3,100 participants, we measured not only their consciously reported feelings, but also their automatic, gut-level reactions.

In one set of experiments, participants learned a considerable amount of positive information about a stranger named Kevin. Next, they discovered that he had been arrested several years earlier for domestic abuse of his ex-wife. In between, we measured participants’ automatic, gut-level feelings toward him. We used a computer-based measure that flashes an image of the person (Kevin) very quickly, followed by a neutral, unrelated and ambiguous image that participants are asked to rate as pleasant or unpleasant (for example, a Chinese ideograph that we ensured none of them could actually read). Across many trials, we measured whether the presence of Kevin’s face (versus some other stranger’s) made participants more or less likely to say the neutral images were pleasant. Essentially, this measure captured participants’ automatic feelings toward Kevin by seeing how his image affected their feelings about an unrelated image.

We also varied the reliability of this new evidence. Participants who learned that the information came from police records instantly showed much more negative rapid, gut-level responses toward Kevin. A different group of participants, however, learned the information came from a more questionable source: a friend of Kevin’s ex-girlfriend, who may have had an ulterior motive to spread gossip about him. These participants maintained their positive gut-level reactions toward Kevin, even in the face of his alleged crime. In other words, whether participants thought this new information was true determined even their automatic feelings. And in a separate experiment, even when participants initially thought the information was true and only later discovered that it came from a questionable source, their positive feelings about Kevin were restored.

Of course, the targets of misinformation campaigns are sometimes people we already know a lot about. Are people’s gut-level reactions toward better-known targets also affected by the credibility of the information they encounter? To test this possibility, we asked a new group of participants to read misinformation about a well-known and well-liked male celebrity. The information was similar in content to that in our earlier studies, except that it had been doctored to look like it came from an unknown source on social media.

As a testament to the power of a single exposure to misinformation, the automatic reactions of the half of our participants who were not told the story was fake until the very end of the experiment immediately became quite negative, even though they had, on average, a positive impression of the celebrity just prior to exposure. The other half of our participants, however, were informed immediately after reading the story that it was fake, and their gut-level reactions did not change at all. The mere knowledge that the story was inaccurate was enough to undo the effects of exposure.

These findings suggest that telling people about the credibility of new information plays a crucial role in determining its impact. We are not mere lab rats shaped by whatever false information we encounter. Even our gut-level, automatic reactions are powerfully shaped by our conscious beliefs about the reputation and trustworthiness of the sources of that information.

To be sure, there are caveats. First, fact-checking may not always be sufficient. In several of our studies, even when participants encountered information that was clearly questionable, their automatic reactions still changed; the effect was merely attenuated. This tells us that fake news can still be consequential, even if a lack of credibility mitigates its effects.

Second, although this research suggests that the subjective believability of information determines its impact, continued research is needed to identify the factors that lead people to regard information as believable or not. The anti-vaccination movement and climate change denialism teach us that the actual credibility of information often diverges from its perceived credibility in the eyes of different beholders.

Finally, there is still the controversial question of who should be responsible for fact-checking. Recent research found that crowdsourcing assessments of the credibility of news sources, rather than putting these decisions in the hands of potentially biased arbiters, could be effective.

Tech companies struggling with how to respond to misinformation should support and value their fact-checkers. They may be the cyber pillars we need to resurrect our democracy.