Essays on Understanding and Combating Misinformation at Scale


Bibliographic Details
Main Author: Allen, Jennifer
Other Authors: Rand, David G.
Format: Thesis
Published: Massachusetts Institute of Technology 2024
Online Access: https://hdl.handle.net/1721.1/155882
https://orcid.org/0000-0002-9827-9147
Description
Summary: In Chapter 1, I explore the use of crowdsourcing as a potential solution to the misinformation problem at scale. Perhaps the most prominent approach to combating misinformation is the use of professional fact-checkers. This approach, however, is not scalable: professional fact-checkers cannot possibly keep up with the volume of misinformation produced every day. Furthermore, many people see fact-checkers as having a liberal bias and thus distrust them. Here, we explore a potential solution to both of these problems: leveraging the "wisdom of crowds" to make fact-checking possible at scale using politically balanced groups of laypeople. Our results indicate that crowdsourcing is a promising approach for helping to identify misinformation at scale.

In Chapter 2, joint with David Rand and Cameron Martel, I extend work on crowdsourced fact-checking to assess the viability of crowdsourcing in an opt-in, polarized environment. We leverage data from Birdwatch, Twitter's crowdsourced fact-checking pilot program, to examine how shared partisanship affects participation in crowdsourced fact-checking. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While this does not necessarily indicate that Birdwatch is ineffective at identifying misleading content, it does demonstrate the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.

In Chapter 3, I examine the role of online (mis)information in US vaccine hesitancy. I combine survey-experimental estimates of persuasion with exposure data from Facebook to estimate the extent to which (mis)information content on Facebook reduced COVID-19 vaccine acceptance. Contrary to popular belief, I find that factually accurate but vaccine-skeptical content was approximately 50 times more impactful than outright false misinformation. Although outright misinformation had a larger negative effect per exposure on vaccination intentions than factually accurate content, it was rarely seen on social media. In contrast, mainstream media articles reporting on rare deaths following vaccination garnered hundreds of millions of views. While this work suggests that limiting the spread of misinformation has important public health benefits, it also highlights the need to scrutinize accurate-but-misleading content published by mainstream sources.
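
To make the arithmetic behind the Chapter 3 comparison concrete, the short Python sketch below shows how a per-exposure persuasion estimate can be combined with an exposure count to compare the total impact of two content types. All numbers and names are illustrative placeholders, not estimates reported in the thesis.

# Illustrative sketch only: hypothetical numbers, not results from the thesis.
# Total impact = persuasive effect per exposure * number of exposures.

def total_impact(effect_per_exposure: float, exposures: int) -> float:
    """Aggregate shift in vaccination intentions attributable to a content type."""
    return effect_per_exposure * exposures

# Hypothetical inputs: flagged misinformation is more persuasive per view,
# but vaccine-skeptical mainstream content is viewed far more often.
flagged_misinformation = total_impact(effect_per_exposure=-0.005, exposures=10_000_000)
skeptical_mainstream = total_impact(effect_per_exposure=-0.001, exposures=300_000_000)

print(skeptical_mainstream / flagged_misinformation)  # ratio of total impacts (illustrative)

Under these made-up inputs, the rarely seen but more persuasive misinformation has a smaller aggregate effect than the widely viewed vaccine-skeptical content, which is the logic underlying the comparison in Chapter 3.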