In an article published in the Journal of Applied Research in Memory and Cognition, Center for an Informed Public researchers and colleagues present the promising results of three experimental investigations testing the potential effectiveness of a novel, user-driven approach to addressing online misinformation. The new approach, as detailed in “Social truth queries: Development of a new user-driven intervention for countering online misinformation,” involves responding to posts containing false information with “truth queries,” or questions that draw attention to truth or the criteria used to judge it, such as the presence of evidence or the credibility of the information’s source. Examples of truth queries include “How do you know this is true?,” “Where did you learn this?,” and “Do you have an example?” These questions are intended to prompt other users to pay closer attention to the accuracy of the information and to signal that it is not universally accepted.

The use of social truth queries as a misinformation intervention is unique in several ways. First, it can be implemented by users themselves. In addition, it can be used in situations where a traditional fact-checking approach may not be possible, for example, with newly emerging misinformation that requires a quick response or with misinformation that is anecdotal or too vague to be effectively fact-checked. This approach may also be more appealing to individuals who are hesitant to directly correct and confront their peers online.

Across three experimental studies, researchers asked participants to read a series of tweets constructed by the researchers. Some of these tweets contained information found to be false through fact-checking websites. Tweets containing false information appeared either with no reply, a reply containing a truth query, or a reply unrelated to truth. For each tweet, participants were asked to either judge the truth of the information or indicate how likely they would be to share the information online.

Key findings:

  • The presence of a user reply containing a social truth query consistently reduced participants’ belief in posts containing false information compared to when the same posts appeared with no replies or a reply unrelated to truth. 
  • The presence of a user reply containing a social truth query also consistently reduced participants’ intent to share these posts.
  • A variety of different truth queries were effective, indicating that the form these replies can take is flexible and that the same questions may work on a variety of different posts.

Overall, the researchers concluded in the article that their “initial work provides promising evidence for the effectiveness of social truth queries, a simple, flexible, user-driven strategy for reducing the impact of misinformation online.”

The research was led by CIP postdoctoral scholar Madeline Jalbert with assistance from Morgan Wack, a former CIP researcher who is now an assistant research professor at Clemson University. The work was also supported by co-authors Pragya Arya, a doctoral candidate in Social Psychology at the University of Southern California, and Luke Williams, a former CIP undergraduate research assistant who is now a graduate student in the UW Department of Psychology.

This project emerged through a collaboration of these academic researchers with The Centre for Analytics and Behavioural Change (CABC), a South African non-profit organization whose researchers use a variety of methods to track and address social harms online.

Drawing on the work of local experts and a group of dedicated volunteers, the CABC has developed a set of modern solutions tailored to address the spread of problematic information, including election misinformation, xenophobic content, and gender-based violence. Their work focuses on opening dialogues with users promoting misinformation in the hope of inspiring sustainable change in online behavior.


PHOTO AT TOP by Ben Chun / Flickr via CC BY-SA-2.0 DEED