The Reader’s Digest Version:

  • Misinformation is false, misleading or out-of-context content shared without the intent to deceive.
  • Disinformation is purposefully false or misleading content shared with the intent to deceive and cause harm.
  • Be suspicious of information that elicits strong positive or negative emotions, contains extraordinary claims, speaks to your biases or isn’t properly sourced.
  • Before sharing content, make sure the source is reliable, and check whether multiple sources are reporting the same info.

If you’ve been having a hard time separating factual information from fake news, you’re not alone. The proliferation of false information has reached alarming levels in our digital ecosystem. This makes the misinformation vs. disinformation distinction crucial for anyone who scrolls through social media on the regular.

Concerns about vaccine safety are on the rise. A recent KFF Health Misinformation Tracking Poll revealed that nearly one in five U.S. adults, and a quarter of parents, believe or are unsure about the false claim that “getting the measles vaccine is more dangerous than becoming infected with measles,” despite overwhelming scientific evidence to the contrary. This type of health misinformation takes root easily, and the same is true in areas like climate change, elections and politics.

Misinformation and disinformation are distinct forms of false information, and combating them requires two different approaches, according to University of Washington professor Jevin West, who co-founded and directs the school’s Center for an Informed Public.

As part of the University of Colorado’s 2022 Conference on World Affairs (CWA), he gave a seminar on the topic, noting that if we hope to combat misinformation and disinformation, we have to “treat those as two different beasts.”

For researchers, journalists and policymakers, the distinction is imperative. But for the general public, the key takeaway is simple: Don’t share harmful content—period.

“It’s more important that people don’t spread harmful information than knowing the technical differences,” says Nancy Watzman, strategic advisor at First Draft, a nonpartisan nonprofit tackling false information.

Keep reading to learn about misinformation vs. disinformation and how to identify them. 

What is misinformation?

Misinformation refers to incorrect or misleading information that is spread regardless of intention. It’s essentially any content that doesn’t align with verified facts or reality, whether the sharer knows it’s false or not. When your family members share questionable health claims or unverified political stories on social media, they’re not trying to trick you; they genuinely believe they’re passing along legitimate information. In reality, they’re spreading misinformation.

Examples of misinformation

Misinformation comes in many forms that often seem helpful at first glance but can mislead and spread quickly online. Here are some examples:

  • During the COVID-19 pandemic, social media was flooded with false home remedies and misleading statistics. Many people unknowingly shared these posts, thinking they were useful.
  • Satire or humor can also become misinformation when taken out of context. A joke post can quickly turn into a viral false claim. “Misinformation can be your uncle Bob [saying], ‘I’m passing this along because I saw this,’” Watzman notes.

When misinformation influences people and misleads them into making poor decisions, it can lead to real-life consequences. “People die because of misinformation,” says Watzman. “It could be argued that people have died because of misinformation during the pandemic—for example, by taking a drug that’s not effective or [is] even harmful.” 

Misinformation can be harmful in other, more subtle ways as well. It may steer people toward decisions that conflict with their own best interests. Once someone believes something that isn’t true, it can be tough to change their mind because of how our brains work. We tend to look for information that confirms what we already believe, even if it’s wrong. This makes it exceptionally difficult to undo the effects of misinformation.

What is disinformation?

Disinformation is deliberately created false information spread with the intent to deceive or manipulate others. It’s usually driven by:

  • Political motives: Influencing elections and shaping public opinion.
  • Financial gain: Producing clickbait articles and scam websites.
  • A desire for chaos and confusion: Sowing division and exploiting societal tensions.

This kind of fake information often stirs up strong feelings, like anger or outrage. It can push people to hold extreme beliefs—even believe in conspiracy theories—making it hard to find common ground. As a result, people start to distrust the news media and other reliable sources. When that happens, West says, people turn to less trustworthy places online for information.

“Democracy thrives when people are informed. If they’re misinformed, it can lead to problems,” says Watzman. In some cases, those problems can include violence.

Examples of disinformation

Many Americans first became aware of systematic disinformation during the 2016 presidential election, when Russia launched a massive disinformation campaign to influence the outcome. However, the phenomenon has been around for centuries.

In fact, writer and tech consultant Eliot Peper, another CWA panelist, compared historical disinformation to modern propaganda. He highlighted how 10th-century Spanish feudal lords commissioned poetry—essentially, “the Twitter of the time”—with verses that both praised themselves and “threw shade on their neighbors.” Lords paid messengers to spread deceptive verses, accusing opponents of adultery or treachery. “Even by modern standards, a lot of these poems were really outrageous, and some led to outright war,” he said.

History repeats itself—only the medium has changed. Today, disinformation is as much a weapon of war as physical combat. In the Ukraine-Russia war, disinformation is particularly widespread. In an attempt to cast doubt on Ukrainian losses, Russia circulated a video claiming Ukrainian casualties were fake news and that the bodies were just a bunch of mannequins dressed up as corpses. Of course, the video originated on a Russian TV set. And that claim was the fake news.

As the war rages on, new and frightening techniques are emerging, such as fake fact-checkers. In Russia, “fact-checkers” were reporting and debunking videos supposedly going viral in Ukraine. The catch? The videos never circulated in Ukraine. The fact-checking itself was just another disinformation campaign to sow confusion.

“We could check. We could see, no, they weren’t [going viral in Ukraine],” West said. “They were actually fabricating stories to be fact-checked just to sow distrust about what anyone was seeing.”

Beyond war and politics, disinformation can look like phone scams and text scams—anything aimed at consumers with the intent to harm, says Watzman. “You’re deliberately misleading someone for a particular reason,” she says.

Misinformation vs. disinformation: The difference

[Venn diagram explaining the difference between misinformation, disinformation and malinformation. Credit: Reader’s Digest]

The key difference between misinformation and disinformation is intent.

  • Misinformation is false information shared without any harmful intent.
  • Disinformation is false information spread to mislead, manipulate or cause harm.

The consequences of spreading false information

Usually, misinformation is just considered free speech, even if it’s wrong. But disinformation can be more serious. If it includes written lies that damage someone’s reputation (libel) or speech that incites violence, it’s not protected under the First Amendment. That means people could face legal action for spreading that kind of disinformation.

How disinformation spreads faster and further

Another difference between misinformation and disinformation is how widespread the information becomes. Misinformation tends to be more isolated. “Disinformation has multiple stakeholders involved; it’s coordinated, and it’s hard to track,” West said in his seminar. He cited an example: The Plandemic video, which spread conspiracy theories about COVID-19, wasn’t just misinformation. It was a coordinated disinformation campaign designed to deceive, manipulate public opinion and erode trust in health institutions. “It was taken down, but that was a coordinated action,” he says.

Why false information spreads so quickly

While disinformation may be designed to spread further and faster, both it and misinformation share a common trait: Once they’re out in the world, they’re incredibly difficult to contain. The rise of encrypted messaging apps like WhatsApp makes tracking the spread of misinformation and disinformation difficult. And, of course, the internet allows people to share things quickly. “The virality is truly shocking,” Watzman adds.

The viral nature of the internet, paired with growing misinformation, is one of the reasons more and more people are choosing to stay away from social media platforms. 

What is a deepfake?

If you’ve spent time on TikTok or Instagram, you’ve probably come across videos of Tom Cruise performing magic tricks, dancing or casually chatting. The problem? It’s not actually him.

These are deepfake videos, created using deep learning—a type of artificial intelligence that generates highly realistic fake videos or audio clips. They may look real (as those videos of Tom Cruise do), but they’re completely fake.

Deepfakes have been used to insert celebrities into explicit content without their consent, fabricate political speeches and impersonate people for scams. As technology advances, deepfakes are no longer just a concern for the rich and famous. They are being weaponized for revenge porn, financial fraud and large-scale disinformation campaigns. 

What really worries governments and security experts is the role deepfakes can play in manipulating public perception and the risk they pose to democracy. For example, ahead of the 2024 election, a fake robocall using AI to mimic President Joe Biden’s voice told people not to vote. While that incident aimed to suppress votes, the potential dangers are even more apparent in wartime situations. There’s been a lot of disinformation related to the Ukraine-Russia war, but none has been quite as chilling as the deepfake video of Ukrainian President Volodymyr Zelensky urging his people to lay down their weapons.

Though it was quickly debunked, the video demonstrated how deepfakes could be used to manipulate war narratives, spread propaganda and erode trust in legitimate news sources. As AI-generated content becomes more convincing, distinguishing truth from deception will only become more difficult, posing serious risks to democracy and national security.

What is malinformation?

Like disinformation, malinformation is shared with the intent to cause harm, but the key difference is that malinformation is based on real facts that are manipulated or used maliciously.

Examples of malinformation include leaked private emails or documents used to discredit individuals, doxxing (publishing private information to harass or intimidate someone) and revenge porn. Selective leaks are also a form of malinformation, where true information is deliberately misrepresented to mislead the public. In political contexts, malinformation can be used to undermine trust in institutions, manipulate elections and create fear or outrage.

How to recognize misinformation and disinformation

So you understand misinformation vs. disinformation, but can you spot these phonies in your everyday life? There are a few things to keep in mind.

For starters, misinformation often contains a kernel of truth, says Watzman. It also often contains highly emotional content. “If something is making you feel anger, sadness, excitement or any big emotion, stop and wait before you share,” she advises. “The stuff that really gets us emotional is much more likely to contain misinformation.”

She also recommends employing a healthy dose of skepticism any time you see an image. “Images can be doctored,” she says. “We see it in almost every military conflict, where people recycle images from old conflicts.” To determine if an image is misleading, you might try a reverse image search on Google to see where else it has appeared.

Both Watzman and West recommend adhering to the adage “consider the source.” Before sharing something, make sure the source is reliable. It’s a good idea to see if multiple sources are reporting the same information; if not, your source may not be trustworthy. When in doubt, don’t share it.

If you must share something—even to debunk it—take a screenshot instead of resharing. Resharing content, even to criticize it, boosts engagement metrics and helps misinformation spread further through social media algorithms.

West also warns against blindly trusting statistics and infographics, as data can be manipulated to mislead audiences. “Misinformation spreads even faster when it looks official,” he explains. “If you present misleading statistics in a polished graph or infographic, people tend to believe it—even if it’s completely false.” 

He noted that false data has been used by governments, businesses and research institutions to shape policies simply because people aren’t taught how to critically analyze quantitative information.

In the end, West emphasizes, “extraordinary claims require extraordinary evidence.”

To protect yourself from misinformation and disinformation, learn how to verify digital content before engaging with it.

How to develop media-literacy skills

Media literacy—the ability to access, analyze, evaluate and create media in various forms—provides our strongest defense against both misinformation and disinformation. This skill set helps you question what you read, hear and see rather than passively consume content.

To stay safe from misinformation, get in the habit of questioning every source you see online. Consider factors such as the publisher’s reputation, the author’s expertise, the evidence presented and whether the content distinguishes between facts and opinions.

Each of us bears responsibility for maintaining the integrity of our shared information ecosystem. Before sharing content, verify its accuracy, consider its potential impact and ensure it comes from reliable sources.

FAQs

How quickly does false information spread compared with accurate information?

Researchers have found that false information spreads up to six times faster than accurate information on social media platforms, particularly when it contains emotionally charged content. We’re more likely to notice—and share—claims that are new or surprising, and social media algorithms are designed to show us things that grab our attention. This helps misinformation go viral.

Can artificial intelligence help combat misinformation?

AI tools are increasingly being deployed to detect and flag potential misinformation, but they’re a double-edged sword. While AI can scan vast amounts of content to identify patterns consistent with false information, the same technology powers the creation of increasingly sophisticated deepfakes. 

How do I talk to friends or family who regularly share misinformation?

Approaching loved ones about their sharing of false information requires empathy and patience. Rather than dismissively labeling their content as “fake news,” which can trigger defensiveness, try asking nonconfrontational questions about their sources. Share reliable information from sources they might trust and focus on the shared goal of wanting accurate information. 

About the experts

  • Jevin West, PhD, is a professor at the University of Washington and co-founder of the Center for an Informed Public. With more than a decade of research experience, he specializes in studying how false information spreads through digital networks and its impacts on society.
  • Nancy Watzman serves as a strategic advisor at First Draft, a nonpartisan, nonprofit coalition that works to protect communities from harmful misinformation. Her expertise spans media analysis, digital literacy and developing practical strategies for combating false information.

Sources:

  • Jevin West, professor at the University of Washington and co-founder and director of the UW Center for an Informed Public
  • Nancy Watzman, strategic advisor for First Draft
  • Eliot Peper, writer and tech consultant
  • Conference on World Affairs: “Calling Bull—: Telling Truth from Fiction in the Information Age”
  • Science: “The spread of true and false news online”
  • KFF: “Vaccine Monitor: Media and Misinformation”
  • AP: “New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary”