Seeing Through the Noise
Before I teach you anything in this chapter, I want you to do something.
Open whatever social media platform you use most. Scroll until you find a claim that makes you feel something — anger, fear, vindication, hope. Something shared by someone you follow. Something that feels true.
Now stop. Don’t share it. Don’t react to it. Just sit with it and write down exactly what it claims, who originally made the claim, and what evidence is presented.
I’ll come back to this.
Every skill I’ve covered so far has been about your data, your devices, your digital footprint. This chapter is different. This one is about your mind.
Information integrity is a core survival skill — not because misinformation is new, but because the tools for producing and distributing it are now cheaper, faster, and more convincing than at any point in history. A voice can be cloned from a few seconds of audio. AI-generated text is increasingly indistinguishable from human writing. Video can be fabricated with consumer-grade tools. And the same data broker ecosystem I described in Chapter 2 — the one that profiles and sells your attention — feeds you content calibrated to keep you engaged, not informed.
A 2018 MIT study examining over 126,000 stories spread on Twitter found that falsehoods were 70% more likely to be retweeted than true stories, and reached their first 1,500 people six times faster. A 2024 study published in Science found that misinformation sources evoke more outrage than trustworthy sources, and that outrage facilitates sharing — people will share content they know is inaccurate if it signals their moral position or group loyalty. That’s not a platform failure. That’s the business model working exactly as designed.
The Brennan Center for Justice obtained 160,000 email alerts that Dataminr, a social media monitoring company with access to the full data stream of the platform formerly known as Twitter, sent to DC police over a two-year period. The alerts tracked planned demonstrations, individual protest organizers, and the movements of marches in real time. One alert included the social media profile of a recent college graduate with fewer than 100 followers who had shared an event announcement.
That monitoring infrastructure runs on a firehose of content, and the content it processes most efficiently is the content that generates the most engagement — which is the content that triggers the strongest emotional responses. The surveillance infrastructure and the misinformation pipeline are not separate systems. They feed each other. Outrage produces data. Data enables targeting. Targeting produces more outrage.
If you can be manipulated into clicking, sharing, or believing false information, you can be manipulated into compromising your security. A convincing phishing email works because it triggers an emotional response — urgency, fear, curiosity — that overrides the careful habits you’ve been building. Misinformation works the same way at scale.
In Ender’s Game, there’s a subplot most people forget. Between the Battle Room exercises — the tactical training everyone remembers — Ender plays something called the Mind Game. It’s a psychological simulation that adapts to the player, presenting scenarios with no obvious right answer. The game watches how you respond. It learns what makes you react. And at a critical point, Ender can no longer tell whether the game is testing him or he’s testing it — whether he’s inside a simulation or something real.
That’s the information environment you’re living in right now. Content designed to provoke a reaction. Systems that learn what makes you click. And a diminishing ability to tell what’s real from what’s been engineered to feel real. The Mind Game was never about winning. It was about whether you could maintain your judgment when everything around you was designed to manipulate it.
The difference is that Ender didn’t have a method. You do.
Here’s the framework. It’s called SIFT, developed by digital literacy researcher Mike Caulfield. Four steps.
Stop. Before you share, react to, or act on a piece of information, pause. That’s it. The single most effective intervention against misinformation is a thirty-second delay between encountering a claim and doing anything with it.
Investigate the source. Who originally published this? Not who shared it with you, but who actually made the claim. Is that a credible person citing their expertise? A website you’ve never heard of? An account created last month? Don’t evaluate the claim yet. Evaluate the claimant.
Find better coverage. If the claim is real, other sources will be reporting it. Search for the claim — not the article, the underlying claim — and see who else is covering it. If a major event is reported by only one source or one political orientation, that’s a signal. If you can find the claim reported across multiple outlets with different perspectives, the core facts are more likely solid even if the framing varies.
Trace claims to their origin. If a claim cites a study, find the study. If it quotes a person, find the original quote in context. If it references a document, find the document. Most misinformation isn’t fabricated from nothing — it’s real information stripped of context, reframed, or selectively quoted. Following the chain back to the original source often reveals what was left out.
This is called lateral reading — leaving the source to check what other sources say about it, rather than reading deeper into the source itself. Professional fact-checkers consistently outperform PhD-level experts at evaluating information, and lateral reading is why. They don’t try to evaluate a source by studying it. They check what others say about it. It’s what I expect you to be doing while reading this book.
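If you want to see the habit in motion, here is a rough sketch in Python that opens a few lateral-reading searches about a source in your default browser. Treat it as an illustration of the reflex, not a tool: it assumes DuckDuckGo's search URL purely as an example, any search engine works, and the queries are only starting points.

```python
# lateral_read.py -- a sketch of the lateral-reading habit: instead of reading
# deeper into a source, search for what OTHER sources say about it.
# Assumes DuckDuckGo as the example search engine; any engine works.
import sys
import webbrowser
from urllib.parse import quote_plus

def lateral_read(source_name: str) -> None:
    """Open a few searches about the claimant, not just the claim."""
    queries = [
        source_name,                            # who else mentions this source?
        f"{source_name} credibility",           # what do others say about its reliability?
        f"{source_name} funding OR ownership",  # who is behind it?
    ]
    for query in queries:
        webbrowser.open_new_tab(f"https://duckduckgo.com/?q={quote_plus(query)}")

if __name__ == "__main__":
    # Usage: python lateral_read.py "Example News Network"
    lateral_read(" ".join(sys.argv[1:]) or input("Source or claimant to check: "))
```

Notice that every query is about the claimant, not the claim itself. That is the entire point of reading laterally.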
A printable reference card for the SIFT framework is available in the companion materials — small enough to keep near your computer or tape to a wall.
Now go back to the claim you found at the beginning of this chapter. Apply SIFT. Write down what you find at each step in your field journal.
Did the ground get more solid, or less?
That question is the core design principle of everything I’ve been writing. Everything I’ve told you in these chapters is verifiable. Court filings. Congressional records. Published research. Government procurement documents. If you check my claims and the ground gets less solid — walk away.
Set up a family code word. AI voice cloning can now produce a convincing replica of someone’s voice from as little as three seconds of audio. Voice phishing attacks — calls impersonating family members claiming emergencies — surged over 400% in 2024-2025, and deepfake video scams rose 700% in the same period. In one documented case, a Florida woman lost $15,000 after receiving a call from what sounded exactly like her crying daughter claiming she’d been in a car accident. The voice was AI-generated. Agree on a code word with your immediate family or close contacts that you’d use to verify identity over the phone. Something you’d never say in normal conversation. Something not findable in your social media posts. This takes five minutes and it’s one of the simplest defenses against the most effective new social engineering attack.
Practice SIFT on two more claims this week. Pick them from different sources — one from a source you trust, one you don’t. Apply the full framework. Record the process and findings in your field journal. The point isn’t to debunk anything. It’s to build the reflex.
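If your field journal is digital rather than paper, one possible shape for an entry is sketched below: a minimal Python script, not part of the companion materials, that walks you through the SIFT steps and appends a dated entry to a plain-text file. The filename and prompts are just one way to lay it out; a notebook works exactly as well.

```python
# sift_journal.py -- append one SIFT entry per claim to a plain-text field journal.
# A minimal sketch: the filename and layout are arbitrary, and a paper notebook
# works just as well. The prompts mirror the SIFT steps from this chapter.
from datetime import date
from pathlib import Path

JOURNAL = Path("sift-journal.txt")  # change to wherever you keep your field journal

PROMPTS = [
    ("Claim", "What exactly does it claim, and who shared it?"),
    ("Stop", "What did it make you feel, and did you pause before reacting?"),
    ("Investigate the source", "Who originally made the claim, and what do others say about them?"),
    ("Find better coverage", "Who else is reporting the underlying claim?"),
    ("Trace to the origin", "What does the original study, quote, or document actually say?"),
    ("Verdict", "Did the ground get more solid, or less?"),
]

def new_entry() -> None:
    lines = [f"=== {date.today().isoformat()} ==="]
    for label, question in PROMPTS:
        lines.append(f"{label}: {input(question + ' ')}")
    with JOURNAL.open("a", encoding="utf-8") as journal:
        journal.write("\n".join(lines) + "\n\n")
    print(f"Recorded in {JOURNAL}")

if __name__ == "__main__":
    new_entry()
```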
Bookmark primary source repositories. These are where you go when you want to verify a claim at the source, not through someone else’s summary. Court records: PACER (pacer.uscourts.gov). Congressional records: congress.gov. FOIA reading rooms: fbi.gov/vault, the NSA’s declassified documents page. State and local court records through your state judiciary’s website. These aren’t sources you’ll use every day. They’re sources you’ll be glad you bookmarked when a claim matters.
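If you’d rather script that setup than click through it, the short sketch below opens the repositories named above in your default browser so you can bookmark each one. It is a convenience sketch, nothing more; it lists only the addresses given in this chapter, so add the NSA page and your state judiciary’s site once you’ve looked up the right URLs.

```python
# open_primary_sources.py -- open the chapter's primary-source repositories in
# your default browser so you can bookmark each one. A convenience sketch;
# only the addresses named in the text are listed.
import webbrowser

REPOSITORIES = {
    "PACER (federal court records)": "https://pacer.uscourts.gov",
    "Congress.gov (congressional records)": "https://www.congress.gov",
    "FBI Vault (FOIA reading room)": "https://vault.fbi.gov",  # given in the text as fbi.gov/vault
}
# Add the NSA declassified documents page and your state judiciary's court
# records site here once you have the right URLs for your state.

for name, url in REPOSITORIES.items():
    print(f"Opening {name}: {url}")
    webbrowser.open_new_tab(url)
```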
You now have the tools to see clearly — your data, your communications, your identity, your browser, your information diet. The next question is: how do you maintain all of this without burning out?
Because the number one reason people abandon security practices isn’t that they don’t care. It’s that they’re exhausted.
Summary
Your mind is as much an attack surface as your devices. Misinformation exploits outrage, urgency, and emotional reactivity — the same psychological triggers used in phishing and social engineering. The SIFT framework (Stop, Investigate the source, Find better coverage, Trace claims to origin) and lateral reading give you a repeatable defense: pause, check the claimant, and follow the claim back to its source before you share or act.
Action Items
- Apply SIFT to the claim you found at the beginning of this chapter — record each step and your findings in your field journal
- Practice SIFT on two more claims this week, one from a source you trust and one you don’t
- Set up a family code word for phone identity verification — something never said in normal conversation and not findable on social media
- Bookmark primary source repositories: PACER (pacer.uscourts.gov), congress.gov, fbi.gov/vault, NSA declassified documents, your state judiciary’s court records site
Case Studies & Citations
- Vosoughi, Roy, & Aral (MIT, 2018) — Study of 126,000+ stories on Twitter found falsehoods 70% more likely to be retweeted than true stories and reached first 1,500 people six times faster. Published in Science. Effect driven by novelty and emotional response (surprise, disgust), not bots.
- Brady et al. (2024) — Published in Science; across eight studies and two experiments, found that misinformation sources evoke more outrage than trustworthy sources and that outrage facilitates sharing even when users know the content is inaccurate, because sharing signals moral position or group loyalty.
- Brennan Center / Dataminr — 160,000 email alerts sent to DC police over two years tracking planned demonstrations, protest organizers, and march movements in real time. Included social media profile of college graduate with fewer than 100 followers who shared an event announcement.
- AI voice cloning / deepfake surge — Voice phishing attacks surged over 400% in 2024-2025 (multiple sources including FBI alerts, BlackFog, Pcdn). Deepfake video scams rose 700% in 2025 (ScamWatch HQ, Gen Threat Labs). McAfee 2024 study found 1 in 4 adults experienced an AI voice scam. A Florida woman lost $15,000 to a cloned voice impersonating her daughter (WFLA, 2025).
- SIFT method — Developed by Mike Caulfield, digital literacy researcher. Four-step framework: Stop, Investigate the source, Find better coverage, Trace claims to origin. Built on lateral reading — the practice of leaving a source to verify it externally rather than reading deeper into the source itself. Professional fact-checkers outperform PhD-level domain experts using this approach.
Templates, Tools & Artifacts
- SIFT Method — Stop → Investigate the source → Find better coverage → Trace claims to origin. Apply to any claim before sharing or acting on it. Record results in field journal.
- Download: SIFT Framework Reference Card
- Family Code Word Protocol — Agree on a word or phrase with immediate family/close contacts for phone identity verification. Requirements: never used in normal conversation, not findable in social media posts, shared only in person or via encrypted channel. Use when receiving any unexpected urgent call requesting money or action.
- Primary Source Repositories — PACER (pacer.uscourts.gov) for federal court records. Congress.gov for congressional records and legislation. FBI Vault (fbi.gov/vault) for FOIA documents. NSA declassified documents page. Your state judiciary website for state/local court records.
- Lateral Reading — Verification technique: instead of reading deeper into a source to evaluate it, leave the source and check what other sources say about it. Search for the claimant, not just the claim.
Key Terms
- SIFT — Four-step information verification framework (Stop, Investigate the source, Find better coverage, Trace claims to origin) developed by digital literacy researcher Mike Caulfield.
- Lateral reading — The practice of leaving a source to check what other sources say about it, rather than evaluating a source by reading deeper into it. The method that distinguishes professional fact-checkers from domain experts.
- Voice phishing (vishing) — Phone-based social engineering using AI-cloned voices to impersonate family members, executives, or officials. Exploits urgency and emotional response. Surged over 400% in 2024-2025.
- Deepfake — AI-generated synthetic media (audio, video, or images) designed to convincingly impersonate real people. Consumer-grade tools now produce realistic results from seconds of source audio or a few images.
- Family code word — A pre-agreed verification phrase shared only among trusted contacts, used to confirm identity during unexpected phone calls. Effective against voice cloning because AI can reproduce how someone sounds, but not knowledge the cloner doesn’t have.