Algorithm Proof: Our Guide to Navigating Midterm Misinformation & More
In an era where “news” is constantly showing up while we’re scrolling through our socials, whether or not we asked for it, it’s important to be able to recognize when the information we see isn’t quite what it seems. “Information disorder” is a term that covers everything from honest (albeit careless) mistakes to intentional attacks. It’s more than just “fake news”; it has the potential to cause real harm to real people, especially as we approach the 2026 midterms.
More than half of Americans get their news from social media (Reuters 2025), mostly passively, meaning without searching for it (Pew Research Center 2025a). More concerning? More than a third trust that what they see is true without question (Pew Research Center 2025b), while fewer than half of those who share information consistently take the time to verify it’s true (Security.org 2025).
Why This Matters
Information disorder isn't just annoying; it’s a tool used to undermine our power. It can lead to:
- Voter suppression, for instance, by posting the wrong dates or locations to keep people from the polls
- Candidate smearing, including by posting fake quotes and images, audio, or video edited or taken out of context to ruin reputations
- Issue division, for instance, by using hate speech and promoting fear to break up our communities
What It Looks Like: The Three Faces of Information Disorder
There are three common types of false and misleading information. Understanding these differences is important for determining how to address a situation.
Misinformation
Misinformation is false information that is shared accidentally. It typically happens when a person fails to confirm that information is accurate before reposting or resharing it. While it may be accidental, this type of carelessness is still dangerous.
Disinformation
Disinformation is false information that has been deliberately created or shared to deceive people or cause harm. Disinformation includes creating and sharing deepfakes (AI-generated audio, images, or video that depict a person saying or doing something they never did) as well as simply fabricating stories.
Malinformation
Malinformation is information that is factually true, but shared with the purpose of causing harm. Examples of malinformation include sharing a real photo without proper context or with an untrue caption or story, sharing private images or messages (without consent or context), and publicly exposing personally identifiable information or sensitive information with the intention of embarrassing, exploiting, or inciting harm (also known as “doxing” or “doxxing”).
How to Spot It: A Checklist for Identifying Information Disorder
The first step to addressing information disorder is understanding how to identify what’s legit and what isn’t. Here are five ways to check if what you’re seeing is true:
Vibe Check
If a post makes you immediately furious, frightened, or even smug (“I knew it!”), it may be intentional, designed to make you want to share without thinking.
Source Check
Is the source credible? Is the source legit or a dupe (check the URL and “About” section)? Are other posts from the source balanced or biased? Who funds the source? Check the author and outlet.
Confirmation Check
Are other reliable sources sharing the same info? If it’s true, more than one reputable source is likely sharing the same information.
Evidence Check
Is the post relying on evidence or opinion? Is the evidence (image, video, quote, etc.) current? Has the evidence been edited (clipped, cropped, etc.) or was it AI-generated? Free online tools, such as reverse-image search, can help you check.
Too-Good-to-Be-True Check
Does the post rely on a villains vs. heroes narrative? Does it provide overly simple explanations for complex issues? Does it claim to confirm exactly what one group already believes? If it sounds too good to be true, it might be.
What You Can Do About It
Identifying information disorder is only the first step; knowing how to respond is what makes you truly algorithm proof. If you see something suspicious, you have options, whether it’s misinformation, disinformation, or malinformation.
Ignore
Ignoring may be your best approach for dealing with disinformation, including posts shared by bots or trolls that are designed to provoke an emotional reaction. When you try to correct the information (by commenting or reposting with the truth), the platform reads your engagement as popularity, and the algorithm shows the original post to even more people.
Correct Privately
When you have a personal connection with the person posting misinformation (a family member, friend, or peer), correcting them in private may be most effective. People are more likely to consider your stance if you approach them directly; public corrections can make them feel called out or attacked and cause them to become defensive.
Correct Publicly
When misinformation or disinformation has been widely shared, a public correction may be appropriate. In this case, you may be able to reach a wider audience by sharing the truth (and providing evidence, such as original sources). You aren’t likely to change the mind of a bot or troll, but you may be able to reach others who are seeing the thread and assuming the contents are true.
Report
In some cases of disinformation, and in most cases of malinformation, reporting will be your best option. Posts designed to cause harm violate most platforms’ Terms of Service, and the platform is responsible for investigating them. Hold them accountable!
Powerful Corrections: How to Effectively Advocate for the Truth
If you decide to correct someone, privately or publicly, make it worth your effort. Below are a few tips on what to do (and not do) to advocate for the truth with grace when correcting information disorder.
- Emphasize the truth. Start with the truth, reference the lie, then restate the truth.
- Correct friends and family privately to reduce the likelihood that they become defensive.
- Share reputable, nonpartisan sources as evidence for your case.
- Lead with grace. Empathy-based counter-speech has proved effective in reducing hateful rhetoric.
- Use screenshots to show the original content, rather than resharing the post.
- Don’t repeat the lie up front. This can make more people believe it.
- Don’t argue with bots or trolls–you won’t win.
- Don’t forget to verify any information you share to support your argument.
- Don’t respond with anger or insults–you may lose credibility with the wider audience.
- Don’t reshare the original post. This tells the algorithm the post is popular and boosts its visibility.
__________________________________
Become part of the solution. Become Algorithm Proof.
For more resources on how to mobilize your peers this election season, visit ignitenational.org/vote.
