What Are You Actually Protecting?

You’ve seen your location history — every place you’ve been, timestamped, waiting. You’ve seen the pipeline — how your weather app feeds data through an auction system to brokers who sell it to anyone who pays, including law enforcement agencies that never bothered with a warrant. You deleted your advertising ID. You audited your app permissions. If you did the research exercise, you found the name I didn’t give you, and you understand now that commercially available data can reconstruct anyone’s private life from the patterns their phone leaves behind.

There’s a scene in Ender’s Game — the book, not the movie — where Ender enters the Battle Room for the first time. Zero gravity. No map. No briefing on the enemy’s position. Most of the kids start flailing, trying to orient themselves to the room as if there’s still an up and a down. Ender’s instinct is different. He reorients. He picks a direction and decides: the enemy’s gate is down. He defines the terrain relative to his objective, not relative to what’s comfortable.

That’s what I need you to do now. Focus on the objective.

Everything I’ve shown you so far has the same problem: it’s general. The surveillance pipeline affects everyone. The advertising ID was on every phone. The data brokers sell everyone’s data. That’s true, and it’s also the reason most people hear about this stuff and do nothing — because “everyone is affected” feels the same as “no one can do anything.” It’s paralyzing.

The way through paralysis is specificity. Not “surveillance is bad” — that’s abstract. Instead: what am I actually protecting, and from whom?

Those questions have an answer. It’s called a threat model.

It’s not a fancy term. It’s not paranoia. It’s the exercise of sitting down and mapping your specific situation — your risks, your adversaries, your vulnerabilities — before a crisis forces you to figure it out under pressure. Security professionals do this. Journalists do this. Lawyers do this for their clients. I write them at work — they’re boring documents, spreadsheets mostly. One column for the risk, one for how likely it is, one for the mitigation steps.

The fact that ordinary people don’t do this isn’t because it’s hard. It’s because no one ever told them it was valuable.

I spent time with the EFF’s Surveillance Self-Defense project — the Electronic Frontier Foundation, a nonprofit that’s been defending digital rights since 1990. They developed a framework that boils threat modeling down to five questions. I’m going to give them to you, and then I’m going to show you why they matter with a story about an innocent man who just wanted to ride his bike.

Five questions. Write them in your field journal. Answer honestly.

One: What do I have that’s worth protecting?

Not just “my data” — be specific. Your location patterns. Your communications with specific people. Your browsing history. Your medical records. Your financial information. Your political activity. Your relationships. Think about which pieces of your life, if exposed to the wrong person, would cause real harm.

Two: Who might want access to it?

This is where most people’s thinking stops too early. You might think “hackers” or “the government” and leave it there. Be more specific. An abusive ex-partner. An employer who monitors social media. A data broker selling to anyone who pays. A law enforcement agency using a geofence warrant. A neighbor with a grudge and a people-search website. Your threats are specific to your life. Name them.

Three: How likely is it that I’d need to protect it?

Some threats are theoretical. Some are already happening. If you’re a teacher and a parent has already emailed you threats, that’s not hypothetical — that’s a present threat and it changes your entire calculation. If you’re someone who occasionally attends protests, the likelihood that your location data matters to law enforcement is not zero, but it’s different from the likelihood facing a full-time organizer. Be honest about where you actually are, not where you might be someday.

Four: How bad would it be if I failed?

This is the question that makes the whole exercise real. For some threats, the consequence of failure is an awkward conversation. For others, it’s job loss. For others, it’s physical danger. A data breach that exposes your email password is annoying. A data breach that exposes your home address to someone who’s threatened you is life-threatening. The severity determines how much effort the protection is worth.

Five: How much trouble am I willing to go through to prevent it?

Security always has a cost — in time, in money, in inconvenience. The perfect security posture is the one you’ll actually maintain. If a recommendation is too burdensome to follow consistently, it’s worse than a simpler one you’ll stick with. This question keeps you honest. You’re building something sustainable, not performing security theater for a week before going back to your old habits.
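If you think better in spreadsheet form, the five questions map onto a simple worksheet, the same shape as the boring documents I write at work. Here is a minimal sketch in Python; the field names, the 1-to-5 scales, and the example rows are my own illustrative choices, not a standard. A page in a paper journal works just as well.

```python
# A minimal threat-model worksheet: one row per threat, with the five
# questions as fields. Scales and examples are illustrative only.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # Q1: what am I protecting?
    adversary: str    # Q2: who might want access to it?
    likelihood: int   # Q3: 1 (theoretical) to 5 (already happening)
    severity: int     # Q4: 1 (awkward conversation) to 5 (physical danger)
    effort: str       # Q5: the protection I'm actually willing to maintain

    @property
    def priority(self) -> int:
        # Crude likelihood-times-severity score, just for ranking rows
        return self.likelihood * self.severity

threats = [
    Threat("location history", "data brokers", 5, 2, "delete advertising ID"),
    Threat("home address", "hostile ex-partner", 2, 5, "scrub people-search sites"),
    Threat("email account", "credential-stuffing bots", 4, 3, "unique passwords"),
]

# Highest-priority threats first
for t in sorted(threats, key=lambda th: th.priority, reverse=True):
    print(f"{t.priority:2d}  {t.asset} <- {t.adversary}: {t.effort}")
```

Multiplying likelihood by severity is only one crude way to rank; the real value is that writing each threat down as a row forces the specificity the five questions demand.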


Here’s why this matters in practice.

In March 2019, a man named Zachary McCoy went for a bike ride in his neighborhood in Gainesville, Florida. He used an app called RunKeeper to track his mileage — the same kind of fitness tracking millions of people do without thinking about it. His route happened to loop past a house that was burglarized that same day.

Ten months later, he got an email from Google. Local police had served a geofence warrant — a request that told Google to hand over information on every device that was near a specific location during a specific time window. McCoy’s phone, broadcasting his location to Google through his fitness app, had been in the area. He was now a suspect in a burglary he knew nothing about.

He had seven days to go to court or Google would release his account information to police.

McCoy wasn’t a criminal. He wasn’t an activist. He wasn’t a person anyone would describe as “high risk.” He was a guy who rode his bike and used an app to count the miles. He had to go to his parents and explain what was happening, and they dipped into their savings to hire a lawyer. The lawyer challenged the warrant. The police eventually withdrew it — but McCoy spent thousands of dollars and months of anxiety proving that his bike ride was a bike ride.

Here’s the thing: if McCoy had sat down before any of this happened and answered those five questions, his assessment would have been reasonable and low-key. I’m not a public figure. I’m not an activist. My main digital risk is the usual stuff — breaches, spam, maybe identity theft. And his conclusion — I don’t need to go to extreme lengths — would have been perfectly rational.

But he also would have known that his fitness app was sharing location data with Google. He would have understood that location data could be swept up in warrants he’d never know about. And he might have turned off location sharing for that one app, or used one that stores data locally, or simply understood the risk he was accepting. Not because he was paranoid. Because he’d thought about it.

That’s the difference a threat model makes. It doesn’t tell you to lock everything down. It tells you what you’re choosing to leave open, and makes that choice conscious instead of invisible.


Most people, working through these five questions honestly, will land somewhere I’d call Tier 1, the lowest level. Tier 1 means you face the baseline risks that come from existing in a surveillance economy. Data brokers have your information. Your accounts have probably been breached. Your location history exists somewhere. The steps you’ve already taken — deleting your advertising ID, auditing app permissions — are Tier 1 responses. The steps coming in the next chapters — password security, encrypted communications — are also Tier 1. They’re the foundation everyone should have regardless of their situation.

Some of you will recognize that your situation puts you in Tier 2. You attend protests. You work with vulnerable populations. You’re a journalist, or an organizer, or a teacher in a school district where parents have made threats. You have an ex-partner who’s shown they’ll cross boundaries to find information about you. Tier 2 means the baseline isn’t enough — you need additional measures tailored to your specific risks, and we’ll get to those.

A few of you will know you’re in Tier 3. You know who you are. Your threat model includes sophisticated adversaries or situations where operational security is the difference between safety and physical harm. Later chapters address this, and I’ll be honest about where my expertise ends and where you need specialized guidance.

```mermaid
---
title: Threat Model Tiers
---
flowchart TD
    subgraph T1["TIER 1 — BASELINE"]
        T1_who["WHO: Everyone"] --- T1_threats["THREATS: Data brokers, account breaches, location tracking"] --- T1_chapters["CHAPTERS: 1–12"]
    end

    subgraph T2["TIER 2 — ELEVATED"]
        T2_who["WHO: Activists, journalists, teachers, organizers, people with hostile exes"] --- T2_threats["THREATS: Targeted surveillance, geofence warrants, social media monitoring"] --- T2_chapters["CHAPTERS: Tier 1 + 14, 19, 22, 28"]
    end

    subgraph T3["TIER 3 — HIGH"]
        T3_who["WHO: Sophisticated adversaries, physical safety at stake"] --- T3_threats["THREATS: State-level surveillance, infiltration, advanced forensics"] --- T3_chapters["CHAPTERS: All previous + specialized guidance beyond this book"]
    end

    T1 ~~~ T2
    T2 ~~~ T3
```

For now, the work is the same regardless of your tier.

Spend fifteen minutes. Open your field journal. Answer the five questions. Be specific and honest. It’s not a test. It’s a training exercise.

When you’re done, you’ll have something most people never bother to create: a clear picture of what you’re protecting, from whom, and why. Every recommendation I make in the chapters that follow connects back to this framework. When I say “this matters more if you’re Tier 2,” you’ll know whether that’s you. When I say “this is probably overkill unless you’re Tier 3,” you’ll know whether to skip it or pay attention.

The threat model is the foundation. Everything else builds on it.


No matter what you wrote down, there’s one thing that protects you at every level. It’s simple but most people still haven’t done it.

Come back when you’ve completed your threat model. We’ll fix that next.


Summary

Surveillance affects everyone, but your risks are specific to your life. A threat model is the exercise of identifying what you’re protecting, from whom, and how much effort the protection is worth. The EFF’s five-question framework turns abstract anxiety into a concrete, personal assessment — and every security recommendation in the chapters ahead connects back to it.

Action Items

  • Answer the five threat-modeling questions in your field journal: (1) What do I have worth protecting? (2) Who might want access? (3) How likely is it? (4) How bad would it be? (5) How much trouble am I willing to go through?
  • Be specific — name actual data, actual people, actual scenarios rather than generalities
  • Identify your tier (1, 2, or 3) based on your honest assessment
  • Record your threat model in your field journal — this is the framework every future chapter builds on

Case Studies & Citations

  • Zachary McCoy (Gainesville, FL, 2019–2020) — Cyclist identified as a burglary suspect after a geofence warrant swept up his fitness app location data. Spent thousands in legal fees to prove his innocence. Never charged. Reported by NBC News and the New York Times.
  • EFF Surveillance Self-Defense — The Electronic Frontier Foundation’s open-access guide to personal digital security, including the five-question threat modeling framework used in this chapter. Available at ssd.eff.org.

Key Terms

  • Threat model — A structured assessment of what you’re protecting, who might want access to it, how likely the threat is, how severe the consequences would be, and how much effort you’re willing to invest in protection. The foundation for all personal security decisions.
  • Geofence warrant — A legal request that compels a technology company (typically Google) to hand over information on every device that was near a specific location during a specific time window — sweeping up everyone in the area, not just suspects.
  • Tier 1 / Tier 2 / Tier 3 — A rough classification of personal risk levels. Tier 1: baseline risks from living in a surveillance economy. Tier 2: elevated risks from activism, journalism, teaching, or personal situations involving hostile actors. Tier 3: risks involving sophisticated adversaries where operational security is a safety issue.