Exploring the influence misinformation has on public perceptions of the risk and severity of crisis events
To what extent can social media networks reduce risks to the public during times of crisis?
How do theoretical frameworks help researchers understand the spread of misinformation?
Which research tools can identify and track misinformation about crisis events on social media?
What approaches may persuade those resistant to changing their perceptions of crisis events?
Communication and Misinformation presents cutting-edge research on the development, spread, and impact of online misinformation during crisis events. Edited by a leading scholar in the field, this timely and authoritative volume brings together a team of expert contributors to explore both the practical aspects and research implications of the public’s reliance on social media to obtain information in times of crisis.
Throughout the book, detailed chapters examine the increasingly critical role of risk and health communication, underscore the importance of identifying and analyzing the dissemination and impact of misinformation, provide strategies for correcting misinformation with science-based explanations for causes of crisis events, and more.
Addressing multiple contexts and perspectives, including political communication, reputational management, and social network theory, Communication and Misinformation: Crisis Events in the Age of Social Media is an essential resource for advanced undergraduate and graduate students, instructors, scholars, and public- and private-sector professionals in risk and crisis communication, strategic communication, public relations, and media studies.
Page count: 560
Year of publication: 2024
Edited by
Kevin B. Wright
Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty
While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data
Names: Wright, Kevin B., editor.
Title: Communication and misinformation : crisis events in the age of social media / edited by Prof. Dr. Kevin B. Wright, George Mason University.
Description: Hoboken : Wiley, 2024. | Series: Communicating science in times of crisis | Includes index.
Identifiers: LCCN 2024031447 (print) | LCCN 2024031448 (ebook) | ISBN 9781394184941 (paperback) | ISBN 9781394184958 (adobe pdf) | ISBN 9781394184965 (epub)
Subjects: LCSH: Communication in crisis management. | Misinformation. | Social media.
Classification: LCC HD49.3 .C655 2024 (print) | LCC HD49.3 (ebook) | DDC 658.45–dc23/eng/20240729
LC record available at https://lccn.loc.gov/2024031447
LC ebook record available at https://lccn.loc.gov/2024031448
Cover Design: Wiley
Cover Image: © AerialPerspective Works/Getty Images
In Memory of H. Dan O’Hair
friend, mentor, and crisis communication pioneer
Kevin B. Wright (PhD, University of Oklahoma, 1999) is a professor in the Department of Communication at George Mason University where he teaches classes on health communication, crisis communication, and social media. He has over 20 years of experience as a communication scholar and is the author of eight books (including Health Communication in the 21st Century, published by Wiley Blackwell), 115 scholarly journal articles and book chapters, and over 120 papers presented at national and international communication and public health conferences. Much of his research has focused on online social support and health outcomes, online health information and misinformation, risk and crisis communication, and online social networks and health. In addition, he has served as a journal editor for the Journal of Computer‐Mediated Communication, published by the International Communication Association, and is a frequent reviewer for numerous journals, including Health Communication, Journal of Health Communication, and Journal of Medical Internet Research (JMIR).
Juliana L. Barbati, Department of Communication, University of Arizona
Porismita Borah, Edward R. Murrow College of Communication, Washington State University
Sydney Carver, Department of Communication, George Mason University
Josh Compton, Department of Speech Communication, Dartmouth College
John Cook, Melbourne Centre for Behaviour Change, University of Melbourne
Christopher M. Dobmeier, School of Communication, Northwestern University
Kaylin L. Duncan, Department of Communication Studies, University of Alabama
Sarah A. Geegan, Department of Communication, University of Kentucky
Yan Huang, Jack J. Valenti School of Communication, University of Houston
Bobi Ivanov, Department of Communication, University of Kentucky
Sojung Claire Kim, Department of Communication, George Mason University
Eunsung Lee, Department of Media and Communication, Sungkyunkwan University
Jiyoung Lee, Department of Media and Communication, Sungkyunkwan University
Tong Lin, Department of Communication, University of Maryland
Sai Datta Mikkilineni, College of Communication & Information Sciences, The University of Alabama
Xiaoli Nan, Department of Communication, University of Maryland
Caitlin B. Neal, Hubbard School of Journalism and Mass Communication, University of Minnesota Twin Cities
Kimberly A. Parker, Department of Communication, University of Kentucky
Stephen A. Rains, Department of Communication, University of Arizona
Sergei A. Samoilenko, Department of Communication, George Mason University
Cuihua (Cindy) Shen, Department of Communication, University of California, Davis
Rongwei Tang, Hubbard School of Journalism and Mass Communication, University of Minnesota Twin Cities
Kathryn Thier, Department of Communication, University of Maryland
Ida Tovar, School of Medicine, University of Utah
Cindy Turner, School of Medicine, University of Utah
Emily K. Vraga, Hubbard School of Journalism and Mass Communication, University of Minnesota Twin Cities
Nathan Walter, School of Communication, Northwestern University
Yuan Wang, Department of Communication, University of Maryland
Weirui Wang, Department of Communication, Florida International University
Echo L. Warner, College of Nursing, University of Utah
Kevin B. Wright, Department of Communication, George Mason University
Haoning Xue, Department of Communication, University of Utah
Kun Yan, Department of Communication, University of Arizona
Jessica A. Zier, School of Communication, Northwestern University
The study of human communication has changed considerably since the turn of the century, with the advent of the Internet, the proliferation of social media platforms, and the many changes in human interaction and information exchange they have brought about. At the same time, we have witnessed a number of significant crises globally, including climate change, the COVID-19 pandemic, natural disasters, terrorism, and political unrest. During times of crisis, people often turn to social media for information and social support, as both content producers and consumers. The lack of traditional gatekeepers on many social media platforms and Web 2.0 technologies has allowed misinformation to spread quickly and widely during times of crisis. These events and changes in social media have offered a variety of opportunities for scholars to study the development, spread, and impact of online misinformation during crisis events. For example, as we witnessed with COVID-19, views of vaccinations and other preventive measures on social media were often shaped by misinformation driven by differing political viewpoints and agendas, foreign adversary interference, and many other factors. Social media have become increasingly important sources of health, risk, and crisis information, and platforms such as Facebook, X (formerly Twitter), Instagram, and YouTube have accelerated information transmission in crisis contexts across social, cultural, and geographical boundaries. Real-time (mis)information exchange occurs rapidly across these platforms, which makes it challenging for government officials, researchers, public health experts, and other federal, state, and local entities to identify and correct misinformation about risk and crisis situations.
Communication plays an increasingly critical role in crisis events, particularly in identifying and analyzing the dissemination and impact of misinformation on various segments of the population and in correcting misinformation and/or replacing it with scientific, evidence-based explanations of causes as well as solutions to reduce harm. Social media’s ability to tailor information in very specific ways may leave audiences exposed to relatively limited or politically biased information about the causes of crisis events or the best policies or courses of action to mitigate risk. During times of crisis, the public may increasingly rely on these platforms and on the social network members they trust to obtain information. (Mis)information is rapidly shared, reshared, and commented on by others online as it is disseminated across multiple social media platforms (e.g., Facebook, X (formerly Twitter), TikTok). Individuals and their social media network members vary in their education level and scientific literacy, and this affects their interpretations of scientific information reported by government officials or the media, their perceptions of the risk and severity of crisis events, and their behaviors. While social media serve many useful functions by providing the public with multiple sources of good information during a crisis event, there is substantial evidence that misinformation about crisis events, including misinformation about climate change, vaccinations, and COVID-19, has spread rapidly on social media. A growing number of researchers have demonstrated, however, that it is possible to preemptively warn social media users about the spread of misinformation during crisis events and also to correct misinformation.
This book addresses these issues and many others in the study of online misinformation during crisis events. The contributors to this volume have extensive experience researching the communication of misinformation on social media during times of crisis, and they focus on a variety of important questions and issues. For example, to what extent can social media play a role in reducing risks to the public during times of crisis? What theoretical frameworks are useful for understanding the spread of misinformation and correcting it on social media during crisis events? What research tools and approaches (e.g., information correction, big data analysis) can researchers use to identify and track misinformation about crisis events on social media? What approaches are most useful for reaching segments of the population who hold extreme political views or who may be resistant to changing their perceptions of crisis events?
Kevin B. Wright
Christopher M. Dobmeier, Jessica A. Zier, and Nathan Walter
School of Communication, Northwestern University
Crisis and misinformation go hand in hand in what some may call a marriage made in hell. To be sure, during times of crisis, risk and uncertainty are elevated, while the quality and veracity of information tend to drop, contributing to an atmosphere ripe for rumors, misinformation, and outright deceptions. Adding to this mix is a plethora of social media platforms that tend to favor sensational and emotion-laden content; this is another way of saying that this is a major problem.
In discussing crisis misinformation and social media, it is important to put some hard truths on the table. First, misinformation is as old as its more responsible sibling, information, so it is important to firmly ground any discussion of misinformation in a historical context. Second, because the label of “misinformation” is used to cover a wide array of content, from minor inaccuracies and harmless hoaxes to vicious propaganda and conspiracy theories, the need to define and distinguish different types of misinformation is a challenge in and of itself. Third, the fact that for millennia individuals, groups, and entire societies have fallen victim to misinformation strongly suggests that humans may have a collective blind spot when it comes to accepting untruths. Fourth, whether one believes that technology fundamentally changes humans or that it simply allows humans to change, misinformation has an undeniably strong relationship with information technology, most recently social media.
Broadly speaking, the structure of this chapter corresponds with these four hard truths. We begin with a brief historical review showing how our understanding of misinformation both is affected by and transcends social and technological evolutions. This review is followed by an attempt to define misinformation, which examines both its general features and its common taxonomies. The bulk of the chapter, however, outlines some of the psychological, sociological, and technological factors that perpetuate humans' susceptibility to misinformation. Then we focus on a series of very different case studies to illustrate how and why misinformation spreads on social media. The chapter concludes with a deceptively optimistic prognosis for social recovery.
Human beings are the animal that cannot become anything without pretending to be it first.
—W. H. Auden (1907–1973)
While a comprehensive review of the history of misinformation, or of every lie ever told, is beyond the scope of this chapter, it is difficult to grasp the role played by misinformation in times of crisis without some historical context. Consider the astonishing fact that humans have spent nearly 99% of their history as hunter-gatherers, living in small groups and dividing their time between fighting and fleeing from predators or other competing groups. This lifestyle highlighted two very important but often scarce resources: food and shelter. Indeed, there are good reasons to suspect that much of human communication in ancestral times revolved around securing these resources. When food and shelter are the two main concerns an individual must grapple with, knowingly or unknowingly misinforming their group is likely to lead to severe consequences. For instance, if they were to tell their tribe that a poisonous berry was edible, that sabretooth cats were docile, or that there was an elephant near the lake while omitting to add that hyenas as big as bears also gathered there, the result was likely to be swift and bloody. This may explain why, by some estimates, murder accounted for more than 10% of deaths within hunter-gatherer societies (Rosling, Rosling, & Rosling, 2018). Simply put, misinformation had no place during these times.
Over the ten millennia from the end of hunter-gatherer society to the rise of ancient Greece, misinformation evolved considerably. As information gained functions well beyond mere survival, misinformation also served more purposes. One such purpose was found in the ancient Greeks’ art of storytelling, and there was no better storyteller than Herodotus (484–425 BCE), known as “the father of history.” Yet depicting Herodotus as a historian is both wrong and a paradox: the label is a misnomer because Herodotus’ accounts included more fiction than fact, and a paradox because the English word “history” owes its origin to Herodotus’ travelogues, called Histories, which are an entertaining cocktail of exaggerated encounters and fanciful tales, garnished with a tiny drizzle of facts. From camel-eating ants in Persia to a 300-foot-thick wall in Babylon, Herodotus had a penchant for not spoiling a good story with facts. Although this part historian, part fantasist was criticized by his contemporaries for telling lies (Baragwanath & de Bakker, 2012), he was also celebrated as a brilliant storyteller who had an immense influence on generations of orators, particularly those embracing the gray area between embellished truth and deception.
Although early misinformation was characterized by playful and even comical humbuggery, it eventually took a dark turn in the twentieth century, ushering in a new era of far more serious and consequential deceit. As global tensions grew in the lead‐up to World War I, misinformation took on a new meaning as European countries began devoting considerable resources to voluntary military recruitment. The result of these efforts was the first large‐scale and modern attempt at propaganda (from the Latin propagare, to disseminate), which demonized enemies and justified the government's cause. The scale and magnitude of the propaganda machine during World War I ensured that falsehoods about the enemy traveled far and wide, and with great consequences.
Over the next two decades, the traditional propaganda tools of World War I, such as newspapers, leaflets, and full-color posters, were supplanted by more sophisticated media technologies, including radio and film. Although propaganda had become virtually unavoidable during World War I, its influence and impact only intensified during the ensuing decades. After Adolf Hitler took over the reins of the national government in 1933, for instance, one of his first political moves was to establish the Ministry of Propaganda and Public Enlightenment, which he placed in the hands of Joseph Goebbels. This meant that for the first time in history a country at peace would have a propaganda ministry, or a lie factory, to glorify its ideology and dehumanize its enemies. Meanwhile, in the West the propaganda machine was picking up steam as well, with award-winning directors such as Frank Capra and John Huston being recruited to create “orientation” (a fancy word for propaganda) films for the US Department of War. One such orientation film series, Why We Fight, had been viewed by at least 54 million Americans by the end of the war (Rollins, 1996). While empirical evidence of its impact in swaying public opinion was inconclusive, its cultural significance in offering a coherent and memorable rationale for a total war is undeniable.
Since the emergence of social media, the misinformation playbook has only become more complex. While social media platforms have been associated with the democratization of information and mass communication, they have also opened a Pandora's box of potential threats to democracy. These matters came to the fore when Russia was found meddling in the 2016 US presidential election by employing bots—spam accounts that post autonomously using preprogrammed scripts—to spread misinformation and even hijack civic conversations across social media, sowing distrust in electoral procedures and polarizing constituents (Howard, 2018). In an instant, this so‐called computational propaganda became an international hazard, as countries near and far grappled with the new cyberreality (Woolley & Howard, 2018). As 2016 drew to a close, Oxford Languages offered the perfect epitaph by declaring its 2016 Word of the Year to be “post‐truth”: “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford Languages, 2016).
These new agents of misinformation, social media bots, are just the tip of the iceberg. Artificial intelligence (AI) and machine learning, for example, underlie new misinformation technologies such as the often playful but sometimes nefarious deepfakes, programs designed to manipulate pictures, videos, and audio to pass as authentic primary sources of information. As these technologies become more advanced, the misinformation they spread may become more convincing and harder to detect and to deter.
The picture that emerges from this brief overview illustrates that misinformation has been one of the true constants throughout human history from hunter‐gatherers to bots and AI. Although cultural, social, and technological environments change, the hold that misinformation has on humans has remained. So, rather than searching for a magical algorithm or a technological fix that would rid the world of misinformation, it is time to look in the mirror and understand what makes humans so vulnerable. To begin, it is necessary to know what exactly misinformation is.
You're saying it's a falsehood and … Sean Spicer, our press secretary, gave alternative facts to that.
—Kellyanne Conway
When Kellyanne Conway, counselor to President Trump, used the phrase “alternative facts” to defend the White House press secretary Sean Spicer’s false statements about the attendance numbers at Donald Trump’s inauguration, she was probably not aware that she was contributing yet another term to the already overcrowded concept of misinformation. Indeed, the exponential growth in misinformation has arguably been outpaced only by ill-fated attempts to define it, as was painfully illustrated by a recent systematic review that yielded more than 30 distinct definitions of this very popular concept (Wang, Thier, & Nan, 2022). A closer look at how the literature on misinformation has tackled the challenge of defining its subject of research reveals two distinct approaches: structural definition and taxonomy.
The structural approach attempts to define misinformation by highlighting the essential elements that make up the concept. Consider the following definition of misinformation: “Cases in which people’s beliefs about factual matters are not supported by clear evidence and expert opinion” (Nyhan & Reifler, 2010, p. 305). Although similar definitions are commonly used in research and practice, it does not take long to identify a number of loose ends. To start, the benchmarks for “clear evidence and expert opinion” are fuzzy: how clear should the evidence be, and who has a monopoly on expertise? By this standard, a dentist who recommends flossing could rightfully be accused of spreading misinformation, since the health benefits of routine flossing are not clearly supported by scientific evidence, according to a systematic review of the available studies (Berchier et al., 2008). Similarly, one may sneer at an individual who refuses to wear a surgical mask while coughing on a crowded train, yet early in the COVID-19 pandemic the US Surgeon General, Jerome Adams, tweeted “Stop buying masks!” As these examples show, not only do experts sometimes disagree, but the scientific basis for evidence is rarely clear.
Recent attempts have been made to improve existing definitions by emphasizing the dynamic nature of expertise and evidence. One promising attempt comes from Vraga and Bode (2020), who distinguish between different levels of expert consensus from controversial (e.g., astronauts are essential to the future of space programs) to settled (e.g., human activity contributes to climate change), as well as different levels of amount and quality of evidence from controversial (e.g., it is safe to eat foods grown using pesticides) to settled (e.g., there is no evidence that vaccines cause autism). Although this approach retains greater nuance than binary definitions, it has limitations, notably the rarity of near‐universal agreement and strong evidence during crises.
Moreover, the distinction definitions draw between factual and nonfactual matters is key. On the face of it, the focus on fact-based statements, which can be proved or disproved by objective evidence, makes sense. On second glance, however, this distinction raises several concerns. If only fact-based or checkable claims can be labeled as misinformation, the vast majority of public discourse, including opinions, rumors, predictions, and general speculation, is off-limits. Yet this gray area between fact- and opinion-based statements is often precisely what characterizes rumors, innuendo, and pseudoscience. This concern is amplified by people’s general tendency to conflate fact and opinion, as revealed by a Pew Research Center poll (Mitchell et al., 2018) showing that only 26% of US adults could accurately and reliably classify the two. If people are unable to distinguish factual from nonfactual matters, the definition is unnecessarily limiting.
Furthermore, this definition of misinformation says little about intentionality. Typically, the notion of intentionality helps distinguish misinformation from disinformation, with the latter defined as false information deliberately created and disseminated with malicious intent (Nan, Thier, & Wang, 2023), whereas the former covers instances where there is no clear evidence of intent to mislead or deceive. As with murder cases, where intent is often an essential element in securing a conviction, intent in spreading falsehoods is a major consideration. Antitobacco policymakers and campaigners succeeded in holding the industry to account largely because they were able to show that tobacco companies deliberately deceived the public by minimizing the risks of smoking and downplaying their responsibility to their customers. Although there is practical value in this distinction, intent is difficult to prove (at least without litigation), leading some to conclude that “for the purpose of studying misinformation impact, [proving intent] … is not necessary” (Nan et al., 2023, p. 3).
Structural definitions of misinformation are useful in that they attempt to bound the concept by its underlying components, such as expertise, evidence, checkability, and intent. Still, given the variety of what can be labeled misinformation, perhaps it is an unbounded phenomenon. To address this problem, scholars have sought to pinpoint misinformation not by focusing on its structure but rather by outlining the variety of categories that constitute it. One noteworthy attempt provides a typology of fake news that uses level of facticity (the degree to which a message relies on facts) and the author’s intention to deceive as its delineating factors (Tandoc, Lim, & Ling, 2018). This typology is informative on a number of levels. First, it houses a wide variety of genres, from conventional (e.g., native advertising and fabrication) to unconventional (e.g., parody and news satire) misinformation. Second, the dimensions of facticity and intent help crystallize similarities and differences between types of fake news. According to this typology, for instance, both fabrication and news parody contain a low degree of facts (low facticity), but whereas the former intends to mislead, news parody (e.g., The Onion) forges ludicrous stories to provide commentary on current affairs. Similarly, while native advertising and news satire tend to share a relatively high degree of factual basis (high facticity), they fulfill drastically different functions: selling products versus using humor and wit to offer constructive social criticism.
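For readers who find it helpful to see the two delineating dimensions laid out explicitly, the short sketch below restates the typology described above as a simple two-dimensional lookup. It is an illustrative paraphrase only, not taken from Tandoc, Lim, and Ling’s (2018) work; the names used here (Facticity, IntentToDeceive, TYPOLOGY, classify) and the placement of the four genres are hypothetical labels introduced solely for this example.

from enum import Enum

class Facticity(Enum):
    LOW = "low"    # message relies little on verifiable facts
    HIGH = "high"  # message has a substantial factual basis

class IntentToDeceive(Enum):
    LOW = "low"    # commentary, humor, or promotion rather than deception
    HIGH = "high"  # deliberate attempt to mislead the audience

# Genres positioned by the two dimensions; placements paraphrase the
# chapter's examples and are assumptions for illustration only.
TYPOLOGY = {
    (Facticity.LOW, IntentToDeceive.HIGH): "fabrication",
    (Facticity.LOW, IntentToDeceive.LOW): "news parody (e.g., The Onion)",
    (Facticity.HIGH, IntentToDeceive.HIGH): "native advertising",
    (Facticity.HIGH, IntentToDeceive.LOW): "news satire",
}

def classify(facticity: Facticity, intent: IntentToDeceive) -> str:
    """Return the fake-news genre associated with a facticity/intent pair."""
    return TYPOLOGY[(facticity, intent)]

# Example: fabrication combines low facticity with a high intent to deceive.
print(classify(Facticity.LOW, IntentToDeceive.HIGH))

Any real classification task would of course be far messier; the point of the sketch is only that the typology’s four cells follow from crossing its two dimensions.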
All things considered, researchers and practitioners will continue to wrestle with the definitional challenges of misinformation for the foreseeable future, especially as the demand for effective responses to its proliferation grows. Common to all types of misinformation, however, are underlying psychological mechanisms that make individuals vulnerable to their influence.
When thoughts flow smoothly, people nod along.
—Schwarz et al. (2016)
The need for deliberate misinformation control arises from the growing discrepancy between the rapid spread of false information and humans’ cognitive capacity to recognize and counteract it. The human brain, with its capacities for complex thought, memory, emotional experience, and information processing developed over a 200,000-year history, sets humans apart from other species. Yet misinformation has evolved swiftly, taking advantage of individuals’ cognitive and emotional vulnerabilities. The six influences that follow exemplify the paradox that the qualities that define humanity also make humans prone to misinformation.
Derived from Craik’s (1943) hypothesis that humans hold their own mental maps of the world as they experience it, mental models are internal representations of how the world operates (Johnson-Laird, 2013). More concretely, mental models are intuitive systems that help an individual attend to, contextualize, and establish links between prior knowledge and new information in order to understand the world before them. Consequently, mental modeling is what coheres all relevant pieces of information, allowing an individual to assess a scenario and ultimately take appropriate action. For example, a person may understand the potential consequences of drunk driving through mental modeling (e.g., if they drink alcohol, they will be impaired; and if they drive while impaired, they may cause an accident).
One crucial pitfall of mental modeling is that it is also prone to connecting irrelevant or inaccurate cognitions to form an inaccurate—albeit coherent—inference, paving the perilous path for the flourishing of superstitions and conspiracy theories (Van Prooijen, Douglas, & De Inocencio, 2018). The moon landing conspiracy theory, for example, has survived decades of debunking because it provides people with a relatively simple mental model that easily explains away a lot of questions arising from the moon landing, such as why the iconic moon landing photo shows no stars or why the United States has not conducted another lunar landing since 1972 (for an illustration of the moon landing conspiracy theory mental model, see Figure 1.1). Although these questions often have sound answers, none are as simple and encompassing as the suggestion that the landing was faked by the US government. A well‐crafted conspiracy theory like this provides its victims with a coherent story to explain phenomena that are otherwise difficult or tedious to understand, ultimately exploiting the human preference for complete explanations (Korman & Khemlani, 2020). To this end, debunking conspiracy theories is a mammoth task that requires an alternative explanation not for a single piece of misinformation, but for a constellation of misinformation.
Figure 1.1 A mental model of the moon landing conspiracy theory. Successfully debunking the theory requires the reconciliation not only of its perceived consequences but also of its perceived anteceding motivations—a tall order for standard misinformation corrections.
As discussed, the reasoning process of mental modeling can be influenced by external factors such as the quality and completeness of information. Mental modeling may further be biased by one's internal motivations (Kunda, 1990). According to the motivated reasoning approach, individuals are not always motivated to reach the most accurate conclusion and often prefer explanations that align with their preexisting beliefs. Additionally, they tend to accept confirmatory information and to reject disconfirmatory information, which means that mental models are continually self‐reinforcing (Johnson‐Laird, 2013; Korman & Khemlani, 2020). Put succinctly, people do not always rationalize in rational ways.
Continuing with the moon landing example, a cynic who naturally distrusts the institutions of government and science is likely more susceptible to the moon landing conspiracy theory than the average person because the conspiracy aligns with their preexisting beliefs (i.e., it fits neatly into their mental model). Further, a cynic may hold implicit or explicit goals to confirm evidence that supports the conspiracy theory and/or to reject or avoid any information intended to debunk the conspiracy theory (e.g., Kahan, 2016; Kim & Cao, 2016). In tandem, these dispositions create the perfect environment for either misinformation or its corrections to thrive—but rarely both.
Memory is limited, malleable, and prone to error—whether such memories are mundane (e.g., what time one went to the bathroom last night) or significant (e.g., where one was when they received the news of the September 11 attacks). When it comes to communication, not only does one need to remember what was said, but maybe also who said it and what the response was to it, all of which influence the believability of a message. Such details may play a role in distinguishing fact from fiction; for example, if one received a message that President Bush knew about the September 11 attacks before they happened, one might notice that it was from a conspiratorial uncle and promptly ignore it. In this case, the untrustworthiness of the uncle functions as a discounting cue, a signal that indicates that the message should be taken with a grain of salt. However, discounting cues are useful only if they are remembered. One may come across misinformation and immediately discount it, but may later remember the misinformation without recalling why it had been discounted or that it was ever discounted. This phenomenon is known as the sleeper effect, and it makes people vulnerable to the delayed influence of misinformation (Priester et al., 1999).
Especially in crisis communication, conveying false or incomplete memory can not only lead to suboptimal public action or inaction but also fuel related conspiracy theories that undermine the public's trust in government. Bush's benign lapse in memory recall of the September 11 events heightened public speculation and conspiracy theories that the government had known about the attack before it happened (Greenberg, 2004). And, if one forgot that the person propounding such a theory was one's tin‐foil‐hat uncle (i.e., the discounting cue), one was just a bit more prone to believe it.
Echoing the assumptions of the sleeper effect, the believability of misinformation rests not only on the message itself but also on the relationship between the messenger and the recipient. Swindles, shams, hoaxes, and other fraudulent spectacles have been known throughout history, but their success is often not a testament to the stupidity of the individuals who fall for them but rather to the persuasiveness of the individuals who peddle them. Each fraudster—from the eponymous Charles Ponzi to InfoWars host Alex Jones—played a crucial role for their target audience: a credible source.
More particularly, these sources are seen as competent and trustworthy (O'Keefe, 2018). The InfoWars audience, for example, perceived Alex Jones to be knowledgeable, intelligent, and qualified to speak on the topics at hand and his reporting to be honest, unselfish, and fair. In other words, the audience expected Jones to provide them with accurate information and to have their best interests at heart. Of course, these source perceptions are all subjective: one person's pariah is another's prophet. Faulty information may be readily accepted by an audience that relies on the perceived expertise and trustworthiness of the communicator rather than deliberating on the argument (O'Keefe, 2018). Plainly, misinformation does not have to be compelling to be accepted so long as the person sharing it is compelling.
What comes to mind when you think of misinformation? Maybe you think of baseless claims about vaccines causing autism, of climate change as a ploy to push a progressive political agenda, or of the 2020 US presidential election having been stolen. It is no coincidence that whatever comes to mind likely evokes strong negative emotions. Such affect accelerates and broadens the diffusion of misinformation across social media (Vosoughi, Roy, & Aral, 2018) and is therefore a key player in the prominence and longevity of misinformation. The main reason for this is the negativity bias: the human propensity to attend to, learn from, and use negative information rather than positive or emotionally neutral information (Soroka, Fournier, & Nir, 2019).
What makes the negativity bias so dangerous is that it influences the communication process on many levels. First, messages that appeal to negative emotions are more strongly attended to than those that appeal to positive or neutral emotions (Soroka et al., 2019). Second, messages that are not negatively valenced are more likely to turn negative when communicated from one person to another (Bebbington et al., 2017). And third, over time negative emotional experiences can strengthen an individual’s ability to recall misinformation (Lee et al., 2023). Taken together, messages with misinformation that evoke fear, disgust, or anger not only linger in one’s mind but beg to be communicated to others.
Susceptibility to misinformation is also influenced by the metacognitive cue of fluency, which relates to how easily we process information. Judgments and decisions often rely on metacognitive cues, such as the ease or difficulty of recalling information and generating supporting arguments. For example, if a friend asks one to tell them what one loves most about one's job and one struggles to produce any good examples, the metacognitive experience of difficulty is an indication that one may want to start looking for another job.
The ease with which misinformation is processed can contribute to its spread. Take, for instance, “the Big Lie,” the false narrative that the 2020 US presidential election had been stolen from Donald Trump and that the Republicans who certified President Biden’s victory were complicit. While some estimates suggest that over one-third of the US public continue to believe in widespread election fraud favoring the Democrats, it was a small clique of big liars that peddled these falsehoods in the media. Through repetition, the lie became more familiar and easier to process, and this eventually led people to support it. Put differently, people assume that a familiar opinion is a prevalent opinion, and a repetitive voice can sometimes sound like a chorus (Weaver et al., 2007), even if it is spreading dangerous lies. Overall, in combating falsehoods, not only should their veracity be considered but also the relative ease with which they are processed.
Collectively, these six influence mechanisms are products of human evolutionary history, deeply ingrained in one’s cognitive architecture to assist in interpreting and interacting with the information in one’s surroundings. The age of social media has unwittingly provided fertile ground for the exploitation of these innate tendencies. Gauging the full extent of their impact requires an examination of the compounding influence of social media, where the exploitation of human vulnerabilities not only exists but thrives because of its keystone function in the social media business model.
The heartbeat of any social media platform is the engagement of its user base. User engagement not only helps these platforms gain sociopolitical capital but is also necessary to sustain their business. Social media platforms rely heavily on advertising revenue; for instance, 97% of Facebook’s 2022 global revenue came from advertising (Dixon, 2023). However, this attention- and engagement-centered business model works only if social media companies can demonstrate that their platforms attract enough eyeballs to justify advertising campaigns. To this end, user engagement—such as liking, commenting on, sharing, or otherwise interacting with posts—has become meticulously and granularly cataloged (Zuboff, 2019). The amassed data does not just feed into advertisers’ businesses but also into the creation of individualized cybernetic identities from users’ online behaviors (Bollmer, 2018), which platforms use to generate an irresistible content feed for individual users to keep them on the platform for as long as possible.
These precise, user-driven recommendations become a deeper issue when social media platforms act in their own interest at the expense of their customers. Take a minute to scroll through the feed of your favorite social media platform, paying close attention to the posts. Did we lose you? Oh, good, you’re back! Which posts stood out the most? The odds are that you paid less attention to an acquaintance’s baby shower updates and more to a conspiratorial uncle’s expletive-filled allegation that the government is spying on them through the neighborhood squirrels, because the latter plays into the negativity bias. Social media companies are not oblivious to such human tendencies or to the fact that they can be exploited. For example, Facebook whistleblower Frances Haugen revealed evidence that the company not only was aware of the impact of polarizing and hateful content on its platform but also deliberately chose not to mitigate the risks, because removing such negative content would hurt ad revenue (Zubrow, 2021). Given that archetypal misinformation is also negative and polarizing (albeit captivating), Facebook’s inaction in this case bodes well for the diffusion of misinformation. Additionally, platforms such as X (formerly Twitter) have cut their information integrity teams, citing drops in ad revenue as the reason for layoffs. Operations to mitigate misinformation are simply unprofitable.
The pitfalls of algorithmically curated social media are amplified in times of crisis, when there is pressing public demand for real-time information as events unfold. In one such crisis, on November 24, 2017, people were suddenly evacuated from London’s Oxford Circus during a Black Friday shopping event and told to take shelter in nearby shops, leading to chaos in the streets. Some people called the police, claiming to have heard gunshots, but the incident was a false alarm. Yet on social media, content algorithms systematically amplified ambiguous, incorrect, and even propagandistic information, leading to an explosive response (Eriksson Krutrök & Lindgren, 2022). Consistent with the principle of fluency, the repetition of misinformation made such claims feel more credible. In highly uncertain crises, it is easy to see how baseless claims gain traction.
The mental susceptibility of humans to misinformation and its exploitation by social media platforms have together created the perfect condition for the dissemination of misleading or downright deceptive messages. The following are some real‐world cases that have resulted from this thriving misinformation ecosystem, beginning with a conspiracy of epic proportions.
The early days of the COVID-19 pandemic provided a steady stream of jaw-dropping moments, from the president of the United States musing about the possibility of injecting disinfectant as a deterrent to the virus to a star-studded Instagram video of two dozen celebrities leading a singalong of John Lennon’s “Imagine,” with the opening line “Imagine there’s no heaven,” as the mortality rate soared and hospitals were overwhelmed with COVID-19 patients. These moments pale in comparison to the slickly produced 26-minute video called Plandemic, which claimed that “a shadowy cabal of elites was using the virus and a potential vaccine to profit and gain power” (Frenkel, Decker, & Alba, 2020). The video went online on May 4, 2020, and quickly gained steam across the media landscape, from legacy outlets, Facebook, YouTube, and Vimeo to networks of shady websites. Just two weeks after it was digitally unleashed, Plandemic was estimated to have garnered more than a million interactions, with #Plandemic taking over Twitter and other social media platforms (Kearney, Chiang, & Massey, 2020).
While the video’s production quality was not quite that of a Hollywood blockbuster, it told a gripping story. A discredited scientist, Judy Mikovits, cast as the protagonist, accuses the scientific community and pharmaceutical industry of burying her research showing the harms of vaccines and of deliberately trying to infect and then reinfect (with the help of surgical masks, of course) innocent people with COVID-19, which, by the way, was not a novel virus but rather a direct outcome of a flu vaccine gone awry. Dr. Anthony Fauci, the nation’s top expert on infectious disease during the early pandemic, was cast as the archnemesis allegedly responsible for creating the pandemic. The video was thoroughly fact-checked, always with the same conclusion: the staggering claims in Plandemic were baseless, misleading, and false (Gorski, 2020). However, no amount of fact-checking, rebuttal, or even censorship was able to stop this nonsensical, long-winded video from circulating. To experts in crisis communication, the popularity and virality of this video is anything but surprising. The 26-minute nightmarish babble had been tailored to exploit some of the very vulnerabilities that make misinformation so difficult to fend off.
To begin with, the video was designed to evoke strong negative emotions, which are known to generate engagement (Lee et al., 2023). In crisis communication, negative emotions such as anger, fear, and disgust are abundant and this was certainly true of the COVID‐19 pandemic. High public uncertainty and people anxiously sifting through contradictory information from media, public officials, and other experts were the perfect recipe for negative emotions. Humans are drawn to negative information (i.e., the negativity bias), and it is easy to understand why so many people were attentive to this alarmist and sensational content.
While the negativity bias may explain the attention given to the video, it does not fully account for why people accepted its false claims. To better understand this, another concept from the six degrees of influence is necessary. From the mental model perspective, although Plandemic made numerous outrageous claims, it imparted no information that conspiracy theory supporters did not already know. Consider the conspiracy theory claiming that pharmaceutical companies leverage political and cultural levers of power and corruption to increase profits while unsuspecting citizens suffer from expensive, ineffective, and outright dangerous treatments. As with most resilient conspiracy theories, the big pharma conspiracy contains a kernel of truth. Corporations, especially large ones, often act in immoral ways to maximize their profits, and health‐care disparities are well documented in many countries. But there is quite a distance between this sad reality and the idea of a cabal of evil elites and greedy doctors colluding to infect the public with disease only to sell expensive and dangerous treatments. Notwithstanding the inaccuracy of the claims, this conspiracy theory provides a highly coherent mental model. So, if one already believes that self‐interested corporations pay off politicians and doctors to take advantage of defenseless citizens while laughing all the way to the bank, it might not take much more to believe the accusations leveled in Plandemic.
The identity of Plandemic's lead character is also likely to have been influential. Although the film reviews Judy Mikovits's troubled career and downfall, she is not painted as the disgraced doctor whose scientific work was found to have involved lab contamination and was later retracted from leading academic journals, and who was discredited by her colleagues and the academic community. Rather, the film portrays Mikovits as a highly credible source and as a martyr who left her lab out of principle.
Goncharov, arguably the greatest mafia film ever made, is enjoying a resurgence in popularity on social media. The Russian president, Vladimir Putin, offers his condolences to the widow of an opposition leader before a mysterious plane crash kills her husband. Donald Trump introduces the Italian president, Sergio Mattarella, as President Mozzarella. What do these episodes have in common? They represent completely false events that can be easily discredited without very thorough fact-checking. Unlike the Plandemic case study, however, these meme-like examples are by and large innocuous and even somewhat amusing.
Many of the same reasons that humans are susceptible to serious, harmful misinformation can also explain how individuals fall victim to seemingly benign hoaxes and rumors. This can even happen to self-proclaimed experts in misinformation, including Nathan Walter, the third author of this chapter, who one day encountered a poster of a 1973 Martin Scorsese film titled Goncharov, dubbed “The greatest mafia movie ever made.” The initial suspicion (“How is it that I have never heard about this great Scorsese film?”) quickly dissipated when other users on social media discussed the film at length, arguing about its plot, soundtrack, and symbolism, and even referencing specific scenes (“Oh, that incredible flashback scene,” or “How would the movie end if Katya had a gun in the hotel room?”). Hollywood mainstays such as Lynda Carter and Henry Winkler claimed that they attended the premiere together and tweeted a photo to prove it. Even Martin Scorsese, the renowned director of this must-watch gangster movie, himself confirmed that he had made the film years ago. The only problem, of course, was that the movie was completely fake.
It is easy to see, from the principles discussed earlier, how one could fall for the well-crafted Goncharov hoax. There was not much evidence to substantiate the existence of the film, and a careful evaluation of the meme—searching the Internet Movie Database, scouting for corroborating evidence, and carefully evaluating the story’s sources—would have revealed that it did not exist. But truth evaluations often rely not on analytical thinking but rather on metacognitive cues such as “When it is easy, it seems familiar, and familiar feels true.” Hence, while there was no corroborating evidence for the existence of the film outside of social media, the poster of the film was reminiscent of those of other mafia films from the 1970s. The film’s fictional cast—from Robert De Niro in the role of Goncharov, a hitman with a heart of gold, to his love interest, played by Cybill Shepherd, and the antagonist, played by Harvey Keitel—all made sense, as these were actors who would have been working with Scorsese during that period. Every piece of the story was accompanied by an intuitive believability and, hence, it was easy to accept it at face value without analyzing its different components too closely. If there had been anything about the story that felt wrong, or at the very least highly surprising, it might have ended up being scrutinized more carefully. At first glance, however, every piece of the Goncharov hoax felt right, and so I, Nathan Walter, along with millions of other film buffs, was fooled.
Turning away from fictional Russian mobsters, we now discuss the very real president of Russia. Engulfed in an unpopular war with Ukraine that is morally difficult to justify, Russia has experienced an uptick in criticism of its president, Vladimir Putin. On June 23, 2023, longtime Putin associate Yevgeny Prigozhin launched an armed mutiny against the Russian government, seizing control of strategic positions in Russia. Prigozhin quickly turned from friend to foe, becoming a serious threat to Putin’s grip on power. Two months after the rebellion, Prigozhin and nine others were killed in a plane crash north of Moscow.
To those who follow Russian politics, this outcome was not very surprising. In response, satirists drew attention to the extensive list of Putin’s enemies who have died prematurely and under mysterious circumstances, claiming that Putin called Prigozhin’s widow to express his condolences before the plane actually crashed. These stories were widely circulated on social media, often passing for serious news. This highlights two potential concerns. First, the blurring of the line between misinformation and satire is not uncommon, and thus helpful definitions of misinformation should be able to distinguish it from satire and other forms of entertainment. Second, even though those who read the story may have understood its satirical intent, they may remember the story itself longer than they remember that it is clearly fictional. Given enough time, a sleeper effect may emerge whereby people remember the message (i.e., that Putin called Prigozhin’s wife about the plane crash before it took place) but forget its source (i.e., satirical news). If the sleeper effect takes hold (Priester et al., 1999), it is no wonder that untrustworthy or even explicitly satirical sources can become hubs for misinformation.
Lastly, while scrolling through social media, one may encounter a post claiming that President Donald Trump referred to the Italian president Sergio Mattarella as President Mozzarella. If one does not have the time or the curiosity to check the transcripts and videos, would one believe it? According to the motivated reasoning approach, the answer is “it depends.” A Trump supporter would probably discount this as liberal slander, but a Trump opponent would be more likely to believe that the incident did happen. The viral post about Trump's Oval Office gaffe turned out to be completely false, but this does not guarantee that it will be accepted as such by those who do not share his political ideology.
The interplay between misinformation and age-old human instincts is a remarkable and unpredictable aspect of the evolving information landscape. The early hunter-gatherers who protected their communities from fierce predators could hardly have foreseen today’s anonymous computer geeks spreading misinformation to disrupt society. Challenging misinformation demands a comprehensive approach encompassing policy, technology, and education. While there is much to be done on the policy and technology fronts, arguably the greatest challenge is education, as humans must somehow learn to stop giving in to their most natural instincts. Although the bell of misinformation cannot be unrung, the persistence of human instincts should inspire praxis rather than paralysis in the ongoing battle against this millennia-old enemy.
Baragwanath, E., & de Bakker, M. (2012). Myth, truth, and narrative in Herodotus. Oxford University Press.
Bebbington, K., MacLeod, C., Ellison, T. M., & Fay, N. (2017). The sky is falling: Evidence of a negativity bias in the social transmission of information. Evolution and Human Behavior, 38(1), 92–101. http://dx.doi.org/10.1016/j.evolhumbehav.2016.07.004
Berchier, C. E., Slot, D. E., Haps, S., & Van der Weijden, G. A. (2008). The efficacy of dental floss in addition to a toothbrush on plaque and parameters of gingival inflammation: A systematic review. International Journal of Dental Hygiene, 6(4), 265–279. http://dx.doi.org/10.1111/j.1601-5037.2008.00336.x
Bollmer, G. D. (2018). Theorizing digital cultures. SAGE.
Craik, K. (1943). The nature of explanation. Cambridge University Press.
Dixon, S. J. (2023, August 29). Annual revenue generated by Meta Platforms from 2009 to 2022, by segment [infographic]. Statista. https://www.statista.com/statistics/267031/facebooks-annual-revenue-by-segment
Eriksson Krutrök, M., & Lindgren, S. (2022). Social media amplification loops and false alarms: Towards a sociotechnical understanding of misinformation during emergencies. The Communication Review, 25(2), 81–95. http://dx.doi.org/10.1080/10714421.2022.2035165
Frenkel, S., Decker, B., & Alba, D. (2020). How the “Plandemic” movie and its falsehoods spread widely online. The New York Times. https://www.nytimes.com/2020/05/20/technology/plandemic-movie-youtube-facebook-coronavirus.html
Gorski, D. (2020, May 6). Judy Mikovits in Plandemic: An antivax conspiracy theorist becomes a COVID-19 grifter. Respectful Insolence. https://www.respectfulinsolence.com/2020/05/06/judy-mikovits-pandemic
Greenberg, D. L. (2004). President Bush’s false “flashbulb” memory of 9/11/01. Applied Cognitive Psychology, 18(3), 363–370.
Howard, P. (2018). How political campaigns weaponize social media bots. IEEE Spectrum, 55(11). https://spectrum.ieee.org/how-political-campaigns-weaponize-social-media-bots
Johnson-Laird, P. N. (2013). The mental models perspective. In D. Reisberg (Ed.), The Oxford handbook of cognitive psychology (pp. 650–667). Oxford University Press. http://dx.doi.org/10.1093/oxfordhb/9780195376746.013.0041
Kahan, D. M. (2016). The politically motivated reasoning paradigm, part 1: What politically motivated reasoning is and how to measure it. In R. A. Scott, S. M. Kosslyn, & M. Buchmann (Eds.), Emerging trends in social & behavioral sciences (pp. 1–16). John Wiley & Sons. http://dx.doi.org/10.1002/9781118900772.etrds0417
Kearney, M. D., Chiang, S. C., & Massey, P. M. (2020). The Twitter origins and evolution of the COVID-19 “Plandemic” conspiracy theory. Harvard Kennedy School Misinformation Review, 1(3). http://dx.doi.org/10.37016/mr-2020-42
Kim, M., & Cao, X. (2016). The impact of exposure to media messages promoting government conspiracy theories on distrust in the government: Evidence from a two-stage randomized experiment. International Journal of Communication, 10, 20. https://ijoc.org/index.php/ijoc/article/view/5127/1740
Korman, J., & Khemlani, S. (2020). Explanatory completeness. Acta Psychologica, 209, 103139. http://dx.doi.org/10.1016/j.actpsy.2020.103139
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. http://dx.doi.org/10.1037/0033-2909.108.3.480
Lee, J., Kalny, C., Demetriades, S., & Walter, N. (2023). Angry content for angry people: How anger appeals facilitate health misinformation recall on social media. Media Psychology, 1–27. http://dx.doi.org/10.1080/15213269.2023.2269084
Mitchell, A., Gottfried, J., Barthel, M., & Sumida, N. (2018). Distinguishing between factual and opinion statements in the news. Pew Research Center. http://www.journalism.org/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news
Nan, X., Thier, K., & Wang, Y. (2023). Health misinformation: What it is, why people believe it, how to counter it. Annals of the International Communication Association, 47(4), 381–410. http://dx.doi.org/10.1080/23808985.2023.2225489
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32, 303–330. https://doi.org/10.1007/s11109-010-9112-2
O’Keefe, D. J. (2018). Persuasion. In O. Hargie (Ed.), The handbook of communication skills (pp. 319–335). Routledge.
Oxford Languages. (2016). Word of the year 2016. Oxford University Press. https://languages.oup.com/word-of-the-year/2016