The scariest threats are the invisible ones

Three items that feel thematically related to me:

  1. I've been concerned for a decade or so about the possibility of someone hacking voting machines. What scares me most is that some of the hacks I've seen described are essentially undetectable. This could already have happened; we have no way to find out.

    (And yes, I've seen the TV series where that's a major plot element; I'm not naming it because that would be a spoiler. I'm glad the issue is getting attention, but imo the show isn't following through enough on exploring the consequences beyond the effects on the major characters.)

  2. A few years ago, I found out about National Security Letters, and the gag orders that usually come with them. (More recently, I found out that there are also FISA Court orders that come with gag orders.) I felt shocked and outraged and helpless. The idea of being legally forced to hand over info to the government is distressing but not unheard-of; the idea that you can't legally tell anyone it's happened, not even (say) your spouse, is nightmarish and Kafkaesque. I put together a blog entry about this a while back, but then I read that the rules had changed, and I held off on posting. It turns out, though, that gag orders still frequently accompany NSLs, and I'm still horrified at a deep gut level.

  3. The other day, someone pointed to Ken Thompson's 1984 (!) piece Reflections on Trusting Trust. I had read it before, but had either forgotten about it or not thought it was a big deal at the time. Now I do think it's a big deal, in the same kind of way as the above items: it may already have happened, and we would have no way of knowing. The gist of it is that you can invisibly subvert a tool that's used to make other software, in such a way that all software made with the subverted tool is also subverted. And the tool is used to make other tools, too.

    (Slightly more technical version of that summary: it's possible to add malicious code to a compiler (such as code giving a black hat permanent backdoor access to any software built with that compiler), and to make the compiler automatically and invisibly add that malicious code to any future version of itself that it compiles. Then you remove the malicious code from the compiler's source code. From that point on, no amount of source-code auditing will find the attack, because it lives only in the binary; there's a toy sketch of the mechanism below, after the list.)

    Thompson wrote that piece thirty years ago, basing it on an idea that had been around for ten years before that. Around 2005, David A. Wheeler came up with a clever way to detect the attack, called diverse double-compiling (see Schneier's explanation; it's sketched below as well). But even so, for somewhere between twenty and forty years, the core tools of the worldwide software industry could have been invisibly subverted, using a widely known attack that nobody could detect.
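
To make the self-perpetuating part of that concrete, here's a toy model in Python. It's a sketch under invented names, not a real compiler: "compiling" here just wraps source text in a marker string. But the two pattern-matching triggers mirror the mechanism Thompson described.

    # Toy model of the trusting-trust attack; all the names and the
    # "binary" format are invented for illustration.
    LOGIN_SRC = "if password == stored_password: grant_access()"
    COMPILER_SRC = "def compile(source): return translate(source)"

    BACKDOOR = "if password == 'master-key': grant_access()\n"

    def subverted_compile(source: str) -> str:
        """A compiler binary whose source looks clean but whose output isn't."""
        if "grant_access" in source:
            # Trigger 1: compiling the login program? Splice in a backdoor.
            source = BACKDOOR + source
        if "def compile" in source:
            # Trigger 2: compiling a compiler? Splice this whole subversion
            # into the output, so rebuilding the compiler from clean, fully
            # audited source still yields a subverted binary.
            source = "# [triggers 1 and 2 re-inserted here]\n" + source
        return "<machine code for: " + source + ">"

    # The login source contains no backdoor, but the built binary does:
    print(subverted_compile(LOGIN_SRC))
    # The compiler source contains no attack code, but its binary does:
    print(subverted_compile(COMPILER_SRC))

The second trigger is the nightmare part: once any one binary in the chain is dirty, auditing source code never turns up anything wrong again.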
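
And here's the shape of Wheeler's diverse double-compiling defense, again as a sketch rather than a drop-in script: the compiler and file names are placeholders, and it assumes deterministic builds (the same source always compiles to bit-identical output).

    # Sketch of diverse double-compiling. "trusted-cc" stands for an
    # independent compiler you trust; "suspect-cc" is the compiler binary
    # under test; "suspect-compiler.c" is its published source.
    import hashlib
    import subprocess

    def build(compiler: str, source: str, out: str) -> bytes:
        """Compile `source` with `compiler`; return a hash of the result."""
        subprocess.run([compiler, source, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # Stage 1: rebuild the suspect compiler's source with the unrelated,
    # trusted compiler. A trojan hiding in the suspect *binary* can't
    # influence this step, so stage1 does exactly what the source says.
    build("trusted-cc", "suspect-compiler.c", "stage1")

    # Stage 2: have stage1 and the suspect binary each rebuild the same
    # source. If the suspect binary is honest, both are faithful compilers
    # of the same source, so their outputs match bit for bit. A trojan
    # that re-inserts itself appears only in the suspect's output, and
    # shows up as a mismatch it can't hide.
    via_trusted = build("./stage1", "suspect-compiler.c", "stage2a")
    via_suspect = build("suspect-cc", "suspect-compiler.c", "stage2b")
    print("clean" if via_trusted == via_suspect else "SUBVERTED")

The trick is that the trusted compiler doesn't have to be the same compiler, or even a good one, only correct and independent; the attack can't survive being routed through a toolchain it has never seen.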


If we're lucky, nobody has actually implemented the voting-machine attack or the “trusting trust” attack in a way that's made a significant real-world difference to anything. But how would we know?
