Security Culture Is Care
You’ve had your first meeting. Whatever happened — however awkward or stilted or surprisingly good it was — you’re a group now. Three people with a shared purpose, ground rules, and the beginning of a rhythm.
Now I need to talk about something I know well. It matters more at group scale than it did when there were only two of you.
Back in Chapter 14, I asked you and your partner to build a shared security floor. You talked about threat models, agreed on communication channels, set disappearing messages, verified Safety Numbers. That was security for two people.
Three is different. Not three times harder — categorically different. The economist Jack Hirshleifer described this as the weakest-link problem: your group’s security is defined by its least-secure member. Not the average. The minimum. If two of you are on Signal with strong passcodes and disappearing messages, and one person is texting from a phone with a four-digit PIN and notification previews visible on their lock screen, your group’s security is that phone.
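If you like seeing a claim as arithmetic, here is the weakest-link principle as a toy sketch. The names and “security scores” below are invented for illustration; no real metric reduces a person’s practices to a single number, and nothing in Hirshleifer’s paper looks like this code.

```python
# Toy illustration of the weakest-link principle: group security is the
# minimum across members, not the average. Names and scores are invented.
scores = {
    "ana":  0.90,  # Signal, strong passcode, disappearing messages on
    "ben":  0.85,  # Signal, strong passcode
    "caro": 0.20,  # SMS, four-digit PIN, notification previews visible
}

group_security = min(scores.values())                  # 0.20, the weakest link
average_security = sum(scores.values()) / len(scores)  # ~0.65, reassuring and irrelevant

print(f"group: {group_security:.2f}  average: {average_security:.2f}")
```

The average looks fine. The minimum is what an adversary actually meets.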
This isn’t a reason to blame anyone. It’s a reason to build what I’m going to call security culture — and I want you to hear that phrase carefully, because it doesn’t mean what you might assume.
Security culture isn’t a checklist of tools. It’s not paranoia and it’s not a set of rules handed down from someone who knows better. Security culture is a set of agreements a group makes about how they protect each other. The emphasis is on each other. When you configure disappearing messages on the group chat, you’re not protecting yourself — you’re protecting everyone in the room. When you use a strong passcode, you’re making a decision about other people’s exposure. When someone gently says “hey, your notification previews are showing our messages,” that’s not surveillance. That’s care.
Security as care. I planted that phrase a few chapters ago. This is where it matters.
In Dune, the Fremen don’t survive Arrakis through superior weapons or technology. They survive through sietch discipline — an entire culture of practices built around protecting the community. Water discipline isn’t paranoia about scarcity. It’s love expressed as conservation: every drop you save is a drop that sustains someone else. The stillsuits, the thumper decoys, the arrhythmic walk that crosses open desert without calling a sandworm — none of this is imposed by a central authority. It’s learned, agreed upon, and maintained because every member of the sietch understands that one person’s carelessness threatens everyone’s survival.
That’s security culture. Not a set of rules enforced from above. A set of agreements maintained from within, because the group understands that protection is mutual.
Elinor Ostrom won the Nobel Prize in Economics for studying how groups manage shared resources — forests, fisheries, irrigation systems. Her central finding applies directly to what you’re building: groups that make their own rules follow them better than groups that receive rules from the outside. The security practices I’ve been recommending throughout these chapters work. But they work best when the group adopts them by agreement, not because I said so.
This is why your next meeting includes building a group security floor — together, as a decision, not as an assignment from a book you found on the internet.
Here’s what to establish. Think of it as four conversations, and have them at your next meeting. Write the agreements down afterward.
Communication platform. If all three of you aren’t on Signal yet, fix that at this meeting. Sit together and set it up. This is what the EFF’s Security Education Companion calls a “setup party” — you walk through the configuration as a group so no one is left figuring it out alone. Enable disappearing messages for the group chat — one week default. Verify Safety Numbers with each person using the QR code, in person. Not because checking numbers on screen doesn’t work, but because the in-person verification builds a habit of physical trust alongside digital trust.
Information boundaries. What stays in the group? What can be shared with people outside? This conversation is more important than it sounds, because most groups never have it — and the absence of the conversation creates ambiguity that only becomes visible when something goes wrong. Write down what you agree on. “What’s discussed in meetings stays in the group unless we specifically agree otherwise” is a reasonable starting point. Adjust it to your reality.
Breach protocol. What happens when someone’s phone is lost, or a message gets forwarded to the wrong person, or someone forgets to use the secure channel? The answer isn’t punishment. The answer is borrowed from incident response in software engineering, and it has three steps: (a) Acknowledge what happened — no blame. The person who made the mistake is the first one who needs to feel safe saying so. (b) Identify what made the mistake easy. Was the insecure channel still active? Was the group configuration unclear? Was someone rushing? (c) Adjust the group’s practices. Fix the system, not the person. This is called blameless breach response, and it’s the only approach that works over time, because the alternative — blame — teaches people to hide mistakes rather than report them.
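If it helps to see the shape of a blameless record, here is a small sketch. The structure and field names are mine, invented for illustration, not taken from any incident-response standard.

```python
from dataclasses import dataclass, field

@dataclass
class BreachReport:
    """A blameless breach record: capture the three steps, nothing else.
    The class and field names are illustrative, not a standard."""
    what_happened: str                                     # (a) acknowledge, no blame
    what_made_it_easy: list = field(default_factory=list)  # (b) the system, not the person
    practice_changes: list = field(default_factory=list)   # (c) how the group adjusts
    # Deliberately absent: any "whose fault" field.

# A hypothetical entry:
report = BreachReport(
    what_happened="Meeting notes forwarded over SMS instead of Signal",
    what_made_it_easy=["old SMS thread was still active",
                       "agreement never named SMS explicitly"],
    practice_changes=["delete the legacy SMS thread together",
                      "add 'Signal only' to the operating document"],
)
```

Notice what the record refuses to hold: a name to blame. That refusal is what keeps people reporting.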
Security champion. Assign one person for the first month. This isn’t the most technically skilled person — it’s the most empathetic person, the one who can say “I noticed you’re still using the old group chat — want me to help you switch?” without making anyone feel stupid. The champion stays current on relevant threats, sends gentle reminders when practices slip, and models the behavior they’re asking for. The role rotates monthly. No one owns it permanently.
Write these agreements down alongside your ground rules from the first meeting. This is becoming your group’s operating document.
Let me show you what happens without this, and what happens with it.
In 2004, the FBI paid a young woman to infiltrate environmental activist circles. She went by “Anna.” Over the next two years, Anna befriended Eric McDavid and two others. She provided money, transportation, housing — a cabin in Dutch Flat, California. She brought bomb-making information. She encouraged the group toward illegal activity. When the group wavered, she pushed them to follow through. When McDavid showed romantic interest, she reportedly used it.
In January 2006, the three were arrested. McDavid was convicted of conspiracy and sentenced to nearly twenty years in prison. He served nine years before prosecutors admitted they’d withheld thousands of pages of evidence — including love letters and records showing Anna had been exempted from a polygraph test. His conviction was overturned in 2015. Anna was paid over $65,000 for her work.
I don’t tell you this to make you afraid of informants. I tell you because the thing that made McDavid’s group vulnerable wasn’t a lack of technical sophistication. It was the absence of any shared agreement about how the group operated. There were no norms about what information was sensitive. No protocol for assessing someone who shows up with unusual resources. No way for a member to raise concerns about another member’s behavior without it feeling like an attack. Anna exploited a group that had no security culture — and in the absence of culture, one person with an agenda could steer the entire group.
Now look at what care looks like in practice.
During the Hong Kong pro-democracy protests of 2019–2020, a movement sustaining hundreds of protest events across more than a year developed security practices that weren’t mandated by any leadership. They emerged from collective agreement — because people cared about each other’s survival.
Protesters abandoned their Octopus transit cards and bought single-journey tickets with cash. They taped unused tickets to kiosks for others who couldn’t afford them. They configured protest phones collectively — factory-reset devices with only encrypted communication apps, no personal data. When police approached, someone would shout “it’s raining!” — a signal for umbrellas to go up, obscuring faces from surveillance cameras.
None of this was imposed. There was no security manual, no central authority. These were agreements a community made because they understood that one person’s carelessness could endanger everyone, and one person’s care could protect the whole group. The practices spread through Signal groups, through Telegram channels, through conversations in the tear gas.
McDavid’s group had no security culture and was destroyed by a single person with an agenda. Hong Kong’s movement had deep security culture and sustained mass collective action under one of the most sophisticated surveillance states on earth.
The difference wasn’t the tools. The tools were available in both cases. The difference was the agreements.
After your next meeting — the one where you establish your security floor — write in your field journal: What did it feel like to have this conversation? Was it harder or easier than you expected? Did anything surprise you about what your group members were worried about?
The security conversation is a trust conversation in disguise. The things people worry about reveal what they’re protecting. Paying attention to that is how security becomes care rather than compliance.
Summary
Your group’s security is only as strong as its least-secure member. Security culture isn’t a checklist — it’s a set of agreements the group makes about how to protect each other. Establish four things at your next meeting: a shared communication platform, information boundaries, a breach protocol, and a rotating security champion. Groups that make their own rules follow them better than groups that receive rules from outside.
Action Items
- Hold a meeting focused on building your group security floor. Have four conversations: communication platform, information boundaries, breach protocol, security champion.
- If anyone isn’t on Signal yet, set it up together at this meeting (“setup party” model). Enable disappearing messages (one week default), verify Safety Numbers in person via QR code.
- Write your information boundaries: what stays in the group, what can be shared outside. Start with “what’s discussed in meetings stays in the group unless we specifically agree otherwise.”
- Agree on a blameless breach protocol: (a) acknowledge — no blame, (b) identify what made the mistake easy, (c) adjust the group’s practices.
- Assign a security champion for the first month. Choose for empathy, not technical skill. The role rotates monthly.
- Add all security agreements to your operating document alongside ground rules from Chapter 18.
- Field journal prompt: What did the security conversation feel like? Was it harder or easier than expected? What surprised you about what your group members worry about?
Case Studies & Citations
- Eric McDavid entrapment (2004–2015) — FBI paid informant “Anna” infiltrated environmental activist circles, befriended McDavid and two others, provided resources and bomb-making information, and encouraged escalation toward illegal activity. McDavid was convicted of conspiracy and sentenced to nearly 20 years. Served 9 years before conviction was overturned after prosecutors admitted withholding thousands of pages of evidence. Anna was paid over $65,000. Applied here: the group’s vulnerability wasn’t technical — it was the absence of any shared agreements about information sensitivity, member assessment, or how to raise concerns. Sources: The Intercept (Mark Schapiro), Democracy Now, Sacramento Bee, Will Potter (Green Is the New Red).
- Hong Kong pro-democracy protests (2019–2020) — Leaderless movement sustained hundreds of protest events across more than a year under heavy state surveillance. Protesters developed collective security practices without central authority: cash transit tickets (with extras taped to kiosks for others), collectively configured protest phones (factory-reset, encrypted apps only), and coordinated signals (“it’s raining!” for raising umbrellas against surveillance cameras). Applied here: security culture as mutual care, not imposed compliance. Sources: Kong Tsung-gan (Medium, ongoing documentation), Natasha Lomas (TechCrunch, 2019), multiple contemporaneous reporting.
- Jack Hirshleifer, weakest-link model (1983) — Public Choice, 41(3). Public goods model showing that collective security is determined by the minimum contribution, not the average. Originally applied to military defense and public safety; subsequently adopted in information security contexts. Applied here: your group’s security equals your least-secure member’s practices.
- Elinor Ostrom, commons governance — Nobel Prize in Economics (2009). Studied how communities manage shared resources (forests, fisheries, irrigation) without top-down regulation. Central finding: groups that design their own rules follow them better than groups that receive externally imposed rules. Applied here: your group’s security agreements should be written by the group, not adopted from a book. Primary work: Governing the Commons (Cambridge, 1990).
Templates, Tools & Artifacts
- Group security floor template — Four agreements to write together: (1) Communication platform: all group conversations on Signal, disappearing messages one week, Safety Numbers verified in person. (2) Information boundaries: what stays in the group, what can be shared outside. (3) Breach protocol: acknowledge (no blame) → identify what made the mistake easy → adjust practices. (4) Security champion: one person, rotating monthly, chosen for empathy.
- Blameless breach response steps — When a security practice is violated: (a) The person who made the mistake reports it without fear of blame. (b) The group identifies the structural factor that made the mistake easy — not the person, the system. (c) The group adjusts its practices to prevent recurrence. Borrowed from software engineering incident response.
- Security champion role description — Stays current on relevant threats. Sends gentle reminders when practices slip. Models the behavior they’re asking for. Says “want me to help?” not “you need to fix this.” Rotates monthly. Not the most technical person — the most empathetic.
Key Terms
- Security culture — A set of agreements a group makes about how they protect each other. Not a checklist of tools or a set of rules imposed from outside. The emphasis is on mutual protection: your security practices are care for the people around you.
- Weakest-link problem — The principle that a group’s security is defined by its least-secure member, not the average. One person with notification previews on and a four-digit PIN defines the group’s exposure, regardless of what everyone else does.
- Blameless breach response — A protocol for handling security mistakes that focuses on fixing the system rather than punishing the person. The goal is to make reporting mistakes feel safe, because the alternative — blame — teaches people to hide problems rather than surface them.
- Security champion — A rotating group role. The champion stays current on threats, sends reminders, and helps members with setup or configuration. Chosen for empathy, not technical expertise. Rotates monthly to prevent informal hierarchy.