
October 31, 2024

A Field’s Dilemmas: Misinformation Research Has Exploded. But Scientists Are Still Grappling With Fundamental Challenges

Image: Illustration showing social media misinformation.

The challenges researchers grapple with as they try to understand how misinformation spreads.


In the summer of 2009, Adam Berinsky had just published a book on U.S. attitudes to war and was ready to focus on something new. The Massachusetts Institute of Technology political scientist watched as debates on the Affordable Care Act, President Barack Obama’s health care legislation, devolved into talks of “death panels.” Meanwhile, his mother-in-law was sending him emails questioning whether Obama was born in the United States. “I felt like, ‘This is crazy. What is going on?!’” Berinsky says. It might be a “fun fringe thing,” he thought, to study this type of misinformation, which few people were researching at the time. “I’ll just spend a couple of years figuring out why do people believe this, then a couple of years developing strategies, and then move on,” he recalls thinking.

Fifteen years later, Berinsky is still working on the topic—and he’s not alone. The number of research papers on misinformation has exploded in recent years, as psychologists, philosophers, and political and social scientists flood the nascent field to figure out how misinformation spreads, and what can be done about it. Like Berinsky, many have been spurred on after seeing how misinformation has contributed to political polarization and undermined trust in democratic institutions around the world, as well as threatened people’s health during the global pandemic.

And yet, despite the influx of money and scientists, Berinsky’s goal of figuring out solutions and moving on—or even clearly defining the problem—seems remote. “2009 Adam Berinsky was incredibly naïve,” says 2024 Adam Berinsky. The issue has turned out to be much more complex and nuanced than “a few crazy stories,” he says; instead, researchers are studying an entire information ecosystem—one in constant flux.




Amid a U.S. election season that has seen the rampant spread of misinformation, Science is taking a look at five of the biggest challenges the field faces in that search for answers.

1. What is misinformation, actually?

In January 2021, the South Florida Sun Sentinel published a story about Gregory Michael, a doctor who had received a shot of Pfizer’s COVID-19 vaccine just before the end of 2020 and then died suddenly on 3 January 2021. The story was republished by the Chicago Tribune under the headline “A ‘healthy’ doctor died two weeks after getting a COVID-19 vaccine; CDC is investigating why” and quickly spread online. On Facebook it was one of the most shared articles of that year, seen by more than 54 million people in the U.S.

The episode encapsulates a crucial dilemma for researchers: What exactly counts as misinformation? A “consensus statement” by the American Psychological Association on health misinformation highlights the Tribune article as a prime example. Although the story was factual and made it clear that authorities had not established a link between the vaccine and Michael’s death, the headline—all that many social media users would see—“used a framing technique that raised concern,” the report notes.

But other researchers think such criteria are overly broad. “I don’t think it’s useful to call [the Tribune story] misinformation,” says Brendan Nyhan, a political scientist at Dartmouth College. “I don’t like the way people want to sweep up everything into this term.”

The definition matters because it is the first step in determining how pervasive misinformation is and how much impact it has. A 2020 paper in Science Advances, for example, found that clearly false news masquerading as the real thing (“Pope endorses Donald Trump,” say) only made up 0.15% of the daily media diet of people in the U.S. “If you define misinformation in that way, as fake news, as fabricated news content, then you don’t really see it very much,” says Jon Roozenbeek, a researcher at King’s College London.


Defining misinformation

A 2023 survey of 150 misinformation experts found that there is often no consensus about which kinds of content count as misinformation.


Data: S. Altay et al., Harvard Kennedy School Misinformation Review, 4, 2020-119 (2023). Image by M. Hersher/Science.

A broader definition would be “any information that is false,” rather than only fake content that mimics news stories. A committee convened by the National Academies of Sciences, Engineering, and Medicine that is currently working on a report on misinformation in science adopted a variant of that as an early working definition: information that counters the consensus in science. That phrasing raised two difficult questions, acknowledges Kasisomayajula Viswanath, a researcher at the Harvard T. H. Chan School of Public Health who chairs the committee: When is something a consensus? And when is it legitimate to dissent? After all, “the consensus can turn out to be wrong, too,” Viswanath says. “You want to be very thoughtful and careful of labeling something as misinformation.” (The committee’s definition has since evolved, he says.)

Besides, many researchers say falsehood is too limited a criterion. “There’s plenty of examples, where things are true, but they are completely misleading, which is a form of misinformation as well,” says Jevin West, a computational social scientist at the University of Washington (UW). That includes so-called “clickbait” headlines like that on the Tribune story: In a 2023 survey of 150 misinformation experts, more than half the respondents agreed or tended to agree that these count as misinformation (see graphic, above).

Such misleading but not outright false content appears to have the farthest reach. A paper published this year in Science found that users on Facebook saw far more clickbait headlines about COVID-19 vaccines from reputable outlets like the Chicago Tribune than blatantly false stories that had been flagged by fact checkers—in part because the platform curbed the spread of the latter (see graphic, below). “While these [misleading] stories had a smaller impact individually on a participant’s vaccine intentions than some outright fake news, they had a much larger effect overall because they were seen by many more users,” says Jennifer Allen, a researcher at the University of Pennsylvania and one of the authors of the paper. “People actually have to see the content for it to have a big effect.”


Gauging impact

The five most read vaccine-related articles on Facebook in the first 3 months of 2021 included some stories from reputable outlets with misleading headlines. These were viewed by far more people than all vaccine-related content flagged as misinformation, suggesting they may have a bigger impact on vaccine attitudes than outright fake news.


Data: J. Allen et al., Science, 384, adk3451 (2024). Image by N. Cary/Science.

Kate Starbird, a researcher at UW who focuses on misinformation around elections (see related story), sees a more fundamental problem with current definitions: They tend to regard misinformation as individual, atomic units of information, such as an article or a tweet. But that overlooks the bigger picture of disinformation, misinformation that is spread deliberately to mislead others. Disinformation campaigns often work by selectively amplifying certain pieces of news, all or most of which can be true, Starbird says. “Disinformation is not a piece of content. It’s a strategy.”

The field is making headway on these issues, says Rebekah Tromble, a political scientist at George Washington University. Some researchers are moving away from using the term misinformation entirely, instead talking about “rumors” or “conspiracy theories,” which allow for more nuanced research into specific kinds of misinformation. “The problem is not that we need to have a shared definition of what misinformation is,” says Gordon Pennycook, a psychologist studying misinformation at Cornell University. “The problem is that people are assuming that they have a shared definition, and they’re just using different definitions.”

2. Everything is political

When media outlets reported in June that Stanford University’s Internet Observatory was winding down, conservative voices celebrated. “Free speech wins again,” Representative Jim Jordan (OH) crowed on X (formerly Twitter). Jordan, like some other Republicans, has charged that misinformation researchers are biased against the right, and as chair of the House of Representatives Judiciary Committee, he has launched an investigation into whether researchers have colluded with companies and government agencies to silence conservative voices. The Internet Observatory—a cross-disciplinary group studying misinformation—is one of his favorite targets.

Yet one reason researchers focus on conservative circles is that study after study has concluded that in the U.S. misinformation circulates more widely on the right of the political spectrum. For instance, a study conducted during the 2020 presidential election in collaboration with Meta—the company that owns Facebook—found that the large majority of content rated false by Meta’s fact-checking program was seen by conservative audiences (see graphic, below). “There is a clear asymmetry,” says Sandra González-Bailón, a social scientist at the University of Pennsylvania who led the study.

That is not to say that conservatives are per se more susceptible to misinformation than liberals. In one experiment New York University psychologist Jay Van Bavel showed made-up stories about Democrats or Republicans engaging in corruption and other negative behaviors to Democratic and Republican participants. Both sides were equally likely to believe negative fake news about the other, he found. “We’re all gullible,” Van Bavel says.

It’s possible that Republicans are more likely to share a given piece of misinformation they come across, or there simply may be more of it being produced on the right in the first place. Either way, the rightward skew of misinformation creates a problem for researchers, says Lisa Fazio, a psychologist at Vanderbilt University, because they can appear politically motivated. “You look like you’re being harder on the right than the left,” she says.


Skewing right

Researchers investigated Facebook posts around the time of the 2020 U.S. elections that linked to news stories flagged as false by fact checkers. They found that 97% of these posts were seen and engaged with mostly by conservatives.


Data: S. González-Bailón et al., Science, 381, ade7138 (2023). Image by M. Hersher/Science.

It doesn’t help that the researchers themselves—like academics generally—skew left. The result: court cases, hearings, and attacks on the field, as well as online abuse including death threats, often aimed at the most vulnerable researchers. “There’s just absolutely no doubt that women and those with marginalized identities are being targeted in these ways far more than anyone else,” Tromble says. González-Bailón says she has not yet been personally attacked. “But I am bracing myself,” she says. “It could happen any day.”

Even the word “misinformation” has become so politicized that it can be risky to use it, Tromble adds. And the squishy definition of misinformation puts the researchers themselves in the challenging position of having to decide whether a claim is misleading, which can add to accusations of political bias.

For now, the Stanford Internet Observatory continues to operate, though several of the researchers have left and it is no longer participating in rapid research on misinformation around elections. Although there were other factors at play, too, Stanford ultimately decided it was less trouble to stop this kind of research, says Renée DiResta, who was the research manager at the observatory. “The decision to shut down worthwhile research in response to a baseless and retaliatory congressional investigation is a capitulation with profound consequences,” she warns. “It sends the message that these attacks work.”

3. The harms are hard to pin down

Iran was hit early and hard by SARS-CoV-2. But as the virus spread through the country in spring 2020, a second epidemic was growing in its shadow: a spike in methanol poisonings. By May 2020, almost 6000 people had been sickened and some 800 had died.

News stories suggested the poisonings were a tragic consequence of rumors circulating in Iran that alcohol could disinfect internal organs and protect against COVID-19. The World Health Organization highlighted them as a clear case of the real-world harms that misinformation can cause.

The real picture is much more complex, however. Spikes in methanol poisonings are common in Iran, where ordinary alcohol can’t be sold or consumed and bootleggers often use industrial methanol to produce homemade alcoholic drinks instead. And subsequent research has suggested most of those who were poisoned had been drinking this bootleg alcohol for pleasure or escape, not protection from COVID-19.

So although misinformation likely played some role in the surge in deaths in 2020, other factors also contributed. Bootleg alcohol may have become more available during the pandemic, for instance, and people quarantined at home may have drunk more to cope with the stress and fear of that uncertain time.

Too often people blame social problems on misinformation with little evidence, Nyhan says. “When I first started studying this, no one talked about misinformation and so I ran around being like, ‘Hey, maybe we should think about this, it is an important problem,’” he says. But more recently, he says, he has urged colleagues: “Stop making these unsupported claims about how misinformation is a cause of all the world’s problems.”

Firmly linking misinformation to real-world consequences is harder than it may seem at first. Fazio has grappled with that challenge while studying the illusory truth effect: how repeating something that is false will make a person more likely to believe it. “We kind of learn early on in childhood that, in general, things that we’ve heard more often are likely to be true,” she says. For better or worse, we end up using a feeling of familiarity as a cue for truthfulness. But showing how this shift in beliefs affects people’s behavior is hard, Fazio says, in part because it would be unethical to expose study participants to misinformation and then let them act on it in the real world.

Indeed, most research dodges the question. A review of 759 misinformation studies published late last year found they mostly measured changes in self-reported attitudes or beliefs. Less than 1% looked at how participants later behaved.

Most research also tends to present misinformation in a highly artificial and simplified way. “It’s roughly: ‘Here’s a bunch of headlines and I’m going to show you [them] in sequence. You tell me for each of them if you believe them or not,’” Roozenbeek says. “That’s so far removed from how actual information consumption and belief shaping works, that it’s like they are two different worlds and so inferring general conclusions is too hard.”

"We kind of learn early on in childhood that, in general, things that we’ve heard more often are likely to be true."

Lisa Fazio, Vanderbilt University

Some researchers have taken a different approach, testing whether correcting misbeliefs will change people’s behavior. In general, it doesn’t, says Thomas Wood, a political scientist at Ohio State University who has done several of these studies. “We can alleviate misinformation belief quite effectively—by social science standards—but, gosh, it doesn’t change how you vote, doesn’t change how you get vaccinated, doesn’t change how you behave when you're with your doctor,” he says.

Still, our beliefs clearly influence our actions, Fazio says, even if there are other important factors, too. For some particularly preposterous beliefs the link is undeniable, says Stephan Lewandowsky, who researches misinformation at the University of Bristol. He points to the “Pizzagate” scandal, when a 28-year-old man showed up with a semiautomatic rifle at a Washington, D.C., pizza parlor and fired three shots, intending to rescue children he thought were being held there. This belief was based on a conspiracy theory that Democrats ordering pizza there were actually ordering children who were being held in the basement. (The restaurant has no basement.) “You might think: Who the f--- would believe that?” Lewandowsky says. “Well, one guy with a gun did.”

Even if it is harder to discern a link between belief and action in other cases, Lewandowsky says, it may still exist. He compares the situation to the way climate change contributes to natural disasters: Warming may not directly cause any individual heat wave or flood, but it makes such events more likely. Similarly, misinformation can gradually erode trust in public health agencies and other institutions, making people less likely to follow their recommendations.

The Pizzagate example shows another complication, Nyhan says. Millions of people were exposed to the conspiracy, but only one person showed up with a gun at the restaurant. That kind of signal may be lost when researchers are looking for the average effect of misinformation in large data sets of thousands or millions of people. “If you say stuff like this in front of a million people, one of them may cross that line,” he says. “The harms are very likely to be in the tails, in the set of folks who are consuming tons and tons of this misinformation.”

Some researchers are finding ways to estimate how changes in belief might translate to real-world behavior, even if they can’t measure this directly. In her study on the most popular vaccine-related stories on Facebook (see graphic, above), Allen found that reading misleading headlines like that of the Chicago Tribune article reduced people’s intentions to get a COVID-19 vaccine. And because there are already good data on how such intentions translate into actual behavior, the team was able to estimate the overall impact of those posts. Their conclusion: If Facebook users had not seen these articles, 3 million more people in the U.S. may have received the vaccine.
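
For a rough sense of the arithmetic behind this kind of estimate, the sketch below simply multiplies exposure by an effect on intentions and an intention-to-behavior rate. It is a minimal illustration with made-up placeholder numbers, not the figures or the method from Allen’s study.

    # Back-of-envelope sketch: exposure x intention shift x intention-to-behavior rate.
    # Every number here is a hypothetical placeholder, not a value from Allen et al. (2024).
    views = 50_000_000                # hypothetical: U.S. users who saw a misleading story
    intention_drop_per_view = 0.001   # hypothetical: average drop in vaccination intention per viewer
    intention_to_behavior = 0.6       # hypothetical: fraction of intention changes reflected in behavior

    forgone_vaccinations = views * intention_drop_per_view * intention_to_behavior
    print(f"Estimated vaccinations forgone: {forgone_vaccinations:,.0f}")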

But for now, the field is in a weird place, many researchers admit: trying to prove something that seems obviously true. “The situation is almost embarrassing,” says Philipp Lorenz-Spreen of the Dresden University of Technology. “Everyone has some story, an uncle that got radicalized on WhatsApp or whatever,” he says. “But proving anything definitively is a different job.”

4. Companies own the data

Elections are hotbeds of misinformation—and social media platforms have traditionally provided researchers with an easy way to investigate who is spreading it and how far it travels. So in 2019, Francesco Pierri, a researcher at the Polytechnic University of Milan, looked at Twitter in the run-up to the EU parliament elections to study how misinformation was circulating in Italy. He found it was being spread mostly by far-right users, and was largely concentrated on topics such as immigration.

In late 2023, with this year’s European elections looming, Pierri wanted to conduct a similar analysis across several European countries. But this time he ran into trouble.

Previously, scientists could access a treasure trove of data shared through Twitter’s application programming interface, enabling researchers like Pierri to routinely collect millions of tweets a day for their studies. Twitter’s easy access made it a kind of model organism for social media research. But in early 2023, a few months after Elon Musk took over the company, it shut off free access, instead charging scientists tens of thousands of dollars per month for much more limited data. “It was a blow to my research,” Pierri says.

It’s not just X. Access to data has become more difficult across the board, says Filippo Menczer, a computer scientist at Indiana University. Meta this year discontinued a tool called CrowdTangle, which had allowed researchers free access to at least some data, replacing it with the Meta Content Library. The data in this library are limited, and accessing them is “very, very, very difficult,” Menczer says. The same is true of other platforms including YouTube and TikTok, he says. “When you go and look at the details, they put so many limitations that it makes most of the research completely impossible.”

Some academic researchers have gotten around these problems by collaborating with industry, but that can bring other complications. In one recent example, 17 independent scientists worked with Meta on a set of studies during the 2020 U.S. presidential election. In one of these studies, scientists were able to manipulate the feeds of 20,000 Facebook and Instagram users, in an effort to reduce the amount of misinformation they saw. One goal was to determine whether users who saw these altered feeds were less polarized on issues such as immigration or health care than those who continued to see content based on Facebook’s standard algorithm.

However, other scientists have recently criticized Meta for not clarifying that it had also tweaked its default algorithm during the experiment, potentially undermining the conclusion of the studies: that Facebook’s regular algorithm does not promote polarization. It’s also unclear whether the company provided enough information for the outside researchers to design the most effective experiments, says Frances Haugen, a former Facebook employee–turned–whistleblower.

The data for these studies are available through the University of Michigan, but getting access to them takes time and effort, a hurdle for researchers who want to replicate the election studies. Menczer says he has been negotiating with Meta for months to get access to another of the company’s data sets. “They want [the university] to sign a nondisclosure agreement just to see the terms of the nondisclosure agreement,” he says. “It’s crazy.”

"They put so many limitations that it makes most of the research completely impossible."

Filippo Menczer, Indiana University

Collaborations with a tech giant also tend to move at a glacial pace. With this year’s U.S. election season nearly over, not even half the Meta papers from the previous election have been published. “There were so many times in this project where the outside academics would say, ‘Oh, we should really get this data’ and then the answer was, ‘Well, that’s a 6-month process,’ or ‘We’ve got to find four people who have time to do that work and escalate it up the chain to see if it’s even OK,’” says Michael Wagner, a social scientist at the University of Wisconsin–Madison who acted as an independent observer for the work. Future collaborations should start with smaller projects that can be finished more quickly, and include a clear agreement with the company about how much time and effort it will invest, he says.

There is a larger danger: Because social media platforms control most of the data and fund a lot of the studies, they have been able to influence the direction of the field, says Joe Bak-Coleman, a researcher affiliated with the University of Konstanz. He thinks this influence helps explain why misinformation research has largely focused on interventions that target individual users, such as nudges encouraging people to check the veracity of a post before sharing it. Far fewer researchers study issues touching the companies, such as algorithms or the design of platforms.

For researchers in the European Union, a recent law may help. The Digital Services Act, a regulation that came into force in November 2022, obliges platforms to provide access to researchers for certain projects. In February, Pierri was granted access through this route, but he says the data are limited. “It’s like a few million tweets for all European countries, which is really nothing,” he says—too little for him to finish his research.

What’s more, the EU law only guarantees access to scientists studying “systemic risks” for the EU, and how well it will be enforced remains to be seen. Still, it is already drawing researchers to the bloc: Przemyslaw Grabowicz, a computer scientist at the University of Massachusetts Amherst (UMass), says it was a key reason he recently accepted a position at University College Dublin.

Many researchers are finding alternative ways to get the data they need. Some have started to use computer programs to scrape large amounts of public data from platforms such as X, though there are questions about when that is legal and ethical. Others are doing surveys, asking users to donate their own social media data, or even building their own social media replicas in which artificial intelligence systems interact like human users.
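
As a rough illustration of the scraping approach, a minimal sketch under assumptions might look like the snippet below. The URL and the CSS selector are hypothetical placeholders rather than any real platform’s markup, and a platform’s terms of service and an ethics review would apply before collecting real data.

    # Minimal sketch of collecting public posts from a hypothetical web page.
    # The URL and the "div.post-text" selector are placeholders, not a real platform's markup.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/public-posts"  # hypothetical public page
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Keep only the visible text of each element marked up as a post.
    posts = [node.get_text(strip=True) for node in soup.select("div.post-text")]
    print(f"Collected {len(posts)} posts")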

But some question the field’s overreliance on social media data in the first place. After all, podcasts, radio, TV, or day-to-day conversations with friends and neighbors also play a huge role in the spread of misinformation. But data are hard to gather for those channels, West says. “They don’t receive enough attention because of the convenience issue.”

Even on social media, researchers are having to adapt as apps like WhatsApp or Telegram become ever more influential in spreading misinformation. Because most of the content on these platforms is not public, researchers have to find new ways to study them, says Patricia Rossini, a political scientist at the University of Glasgow, just as they must do for older platforms. “You’re not going to push a button and download a million tweets,” Rossini says. “But you can’t even do that on Twitter anymore, so it’s fine.”

5. The problem is global, the research isn’t

In 2016, before the United Kingdom voted for Brexit and the U.S. elected Donald Trump as president, another election heralded what some have called the “posttruth world.” In May that year the Philippines elected Rodrigo Duterte as president, a win many credited in part to an online campaign full of disinformation including fake news announcing that Duterte had been endorsed by then–German Chancellor Angela Merkel, the pope, and even NASA.

But there has been much less research on misinformation in the Philippines than the U.S. or the U.K., even though the country has embraced digital technologies like few others. “We’re the top Facebook users around the world, spend the longest hours on social media, we’re the texting capital of the world,” says Jonathan Ong, a misinformation researcher at UMass who is from the Philippines. The country is also home to a large portion of the world’s content moderators, sometimes called “the janitors of the world.”

It’s not just the Philippines. Last year’s review of misinformation papers found that participants in half the studies were from the U.S. and almost one-third were from Europe, whereas East Asia, Africa, and the Middle East each accounted for about 5%. “What we know about misinformation is really only capturing a very small slice of the global population and many of our assumptions may not translate across cultures,” says co-author Gillian Murphy, a psychologist at University College Cork.

The same deficit applies to what we know about how to counter misinformation. In a 2023 review for the U.S. Agency for International Development, Nyhan and colleagues found that of 155 studies to examine the effectiveness of these strategies, 80% were conducted in the Global North. “This severe imbalance in evidence quantity highlights the challenges of drawing conclusions about effective strategies for countering misinformation in the Global South,” they write.

Yet misinformation may be even more prolific in non–English-speaking countries, in part because most content moderation is focused on English posts. It circulates not just on the major networks, but also on smaller, little-studied social media platforms, Ong says. KakaoTalk, for example, is the most widely used instant messaging app in South Korea, and Viber is a major player in the Philippines.

Ultimately, the field’s focus on the U.S. in particular, with its unusual two-party system and politically charged environment, means even solid findings may have limited applicability elsewhere in the world. “The problem with research on misinformation is that if most of it is coming from the U.S., then it’s all about Republicans and Democrats,” says Rossini, who is from Brazil and does some of her work there. “What we’re missing is understanding dynamics that go beyond a two-party system that is very unique.”

