For many years I led the technical arm of the Sunlight Foundation, which was, at the time, one of D.C.’s most prominent advocates for government transparency. On the tech side of the house, the transparency mission mostly meant bringing already-open records into the digital age: getting public data out of PDFs and into CSVs and APIs. But we worked closely with colleagues who focused on more philosophical aspects of government transparency, from FOIA policy to putting cameras in the Supreme Court to how the PCLOB prioritized its work. We lived and breathed this stuff. One of our software developers took up the hobby of sketching the un-photographable door to the secret FISA court! For fun!
I’ve spent a lot of time thinking about technology and government transparency. Enough, frankly, to lose some of my faith in it: I now think cameras in the court would be a disaster, and am convinced by arguments about Secret Congress. I am willing to go farther still! Although I now work in the private sector, I do so as part of a legal team, and know all too well how annoying it is to opt for a phone call instead of a Slack message–just in case–when working with colleagues to figure out the truth of a foreign compliance requirement or a patent troll’s claim.
But the wave of anti-transparency hot takes I’ve seen in response to the Trump administration’s flatly dumbfounding leaks about an airstrike in Yemen goes too far. This includes keen-sighted commentators like Ben Thompson and Dean Ball, whose perspectives I reliably find valuable. But they’ve got this one wrong.
Thompson opens by quoting his earlier but still-useful overview of the privacy differences between Signal, WhatsApp, and iMessage, noting that Signal is top-tier. Ball is pithier:
Signal is secure and private. We should be fine with policymakers deliberating high-stakes, sensitive things using it.
https://x.com/deanwball/status/1904249766084546962
Having established that Signal is Secure, both then portray the offense committed as one against transparency: that this private chat was not subject to legally required archival processes, and that keeping records from eventual public inspection in this way is the crux of objections to what these officials did–but that this is an objection that might reasonably be interrogated. “[T]he violation was not about security, but rather transparency,” says Thompson, going on to note that “It’s worth considering […] if the push for transparency has gone too far.” Ball is, again, more emphatic: “[C]andidly, America has far too much ‘transparency.'”
This is completely wrong. The problem here is security, not transparency. With apologies to future historians: I don’t care whether discussions of their airstrike are (maybe) made public in twenty-five years. I care about reckless operational practices that harm the America of today. Signal does not offer adequate security for this use case. Employing it in this manner was reckless and, at the very least, imperiled the tactical effectiveness of the U.S. military. If it is used in similar contexts–and we should assume it is–it could risk American lives.
In my decades consuming cybersecurity news, papers, and presentations, I have noticed two things. First, nobody will give you a straight answer about anything without encouraging you to first contemplate your “threat model” (the chakra of the infosec world). Second, the most interesting news, from a geek’s perspective, is confined to a world that often feels quite theoretical: a paper about stealing encryption keys by touching a laptop–an attack that only works in a lab; slick websites documenting arcane but powerful exploits that were patched before you ever heard about them. The real-world cases, where people actually get hurt, are much less interesting. Most victims are brought low by phishing or other forms of social engineering. Boring!
When Thompson and Ball say that Signal is secure, they are talking about the theoretical world. Signal’s algorithms have been inspected by experts and deemed pristine. Great. In fairness, the kinds of people in the “Houthi PC small group” are subject to threats from what cybersecurity people invariably call “state actors”: an acknowledgement of the fact that elite, algorithm-level weaknesses are so valuable and difficult to discover that only hacking groups with the discipline, resources, and needs of a nation-state find their cost/benefit attractive. If you merely hope to extort some Bitcoin, human weaknesses are much easier to find and harvest than technological ones. As a result, the world of elite exploits remains mostly invisible to us. We catch glimpses of it leaking out via the activities of American law enforcement and Middle Eastern tyrants: two constituencies that are hapless, reckless, and well-financed enough to sometimes leak these precious secrets into plain view.
It’s great that Signal is–as far as we can know–secure against these kinds of attacks. But: what is your threat model? Is it just this high-tech, highly interesting, highly sophisticated class of exploit? Or must we also concern ourselves with the dreary practicalities of whether information could be stolen and used by an adversary? If the latter, then it might still be a problem that a chat participant could be using Signal Desktop–something we can’t know or see, and something that could easily be done on a personal laptop of unknown security posture–significantly weakening the conversation’s security. It might be a problem that (despite recent advances) most people still use phone numbers for identity on Signal, don’t know how to use safety numbers (or don’t bother), and are vulnerable to SIM swaps. It might be a problem that we don’t know whether the mobile devices running the app had biometrics enabled, adequate PINs, short lock-screen timeouts, secure backups, strong cloud passwords, or protocols for lost devices. It might–sorry, no, it clearly is a problem that Signal made it possible to add Jeffrey Goldberg(!) to the chat without adequate confirmation or notification UI, or integration with clearance systems, or any capability for auditing by security professionals working in a support capacity.
(I’m not going to bother talking about SCIFs.)
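To make the distinction concrete, here is a toy sketch in Python. Every name in it is a hypothetical stand-in for the endpoint conditions above, not anything drawn from Signal’s actual code or APIs. The point is simply that a conversation’s effective security is the minimum across every participant’s device, habits, and identity, not the elegance of the encryption running between them.

```python
# Hypothetical illustration only: the protocol's guarantees hold
# only if every endpoint satisfies its assumptions.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    desktop_on_unmanaged_laptop: bool
    safety_number_verified: bool
    registration_lock_enabled: bool       # mitigates SIM-swap hijacking
    screen_lock_with_strong_pin: bool
    identity_confirmed_out_of_band: bool  # e.g. not an accidentally added journalist

def meets_state_actor_threat_model(e: Endpoint) -> bool:
    """One weak endpoint compromises the whole conversation,
    no matter how strong the wire protocol is."""
    return (
        not e.desktop_on_unmanaged_laptop
        and e.safety_number_verified
        and e.registration_lock_enabled
        and e.screen_lock_with_strong_pin
        and e.identity_confirmed_out_of_band
    )

def conversation_is_adequate(participants: list[Endpoint]) -> bool:
    # End-to-end encryption protects the channel; it says nothing about
    # who is on the ends or what condition their devices are in.
    return all(meets_state_actor_threat_model(p) for p in participants)
```

However strong the algorithm, conversation_is_adequate comes back False the moment a single participant falls short–which is exactly the kind of failure at issue here.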
Taken in isolation, this failure is an inexcusable lapse by people granted our country’s deepest levels of trust. But even more concerning is the context it implies: that insecure systems are being used inexpertly and widely, by an administration with a–let’s be generous and say “untested and iconoclastic”–perspective on whether the preceding eight decades’ worth of conventional wisdom about America’s allies and antagonists was completely wrong.
It is not enough to say “well, human error is a shame, but I still think it’s a really good app.” It is a really good app! But the criterion for judging the scale of this fuckup is not whether the administration shares your excellent taste in algorithms. It’s whether America’s enemies knew, or could have known, the where, when, and what of our secret military operations. Good security protocols concern themselves with outcomes. And while human error may be ineradicable, following all those annoying rules really can minimize our chances of succumbing to it.
Discussing secrets this way is reckless. And transparency has nothing to do with it.