The Intercept recently published a long but interesting article about (among other things) CIA-connected researchers' attempts to subvert Xcode, the development environment (and bundled compiler toolchain) used to create most iOS and Mac applications. If you can subvert a compiler and then get software developers to use your version, then you can make those developers' apps do just about anything you want, including things like giving you access to people's phones and private data.
It's not clear how the researchers would get developers to use the subverted version; it's entirely possible that this is just a description of a theoretical possibility, not an actual attack that's been carried out. I imagine that Apple's compiler team is very careful to prevent a situation where (for example) one of their engineers is working for the CIA and inserts malicious code into the compiler.
But the general possibility of a subverted compiler is very real, and if it were to happen, it could be largely undetectable; see Ken Thompson's classic discussion (written in 1984, based on an idea from 1974) of how to subvert a compiler undetectably, “Reflections on Trusting Trust.”
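To make Thompson's idea concrete, here's a toy sketch in Python (all names and targets here are hypothetical; Thompson's actual example targeted the Unix C compiler and the login program). The subverted "compiler" compiles most programs faithfully but recognizes two special targets:

```python
# Toy model of a Thompson-style subverted compiler. A "compiler"
# here is just a function from source text to output text; the
# target names are made up for illustration.

LOGIN_BACKDOOR = 'if user == "attacker": grant_access()'

def subverted_compile(source: str) -> str:
    if "def check_password" in source:
        # Target 1: quietly add a backdoor to the login program.
        # The login *source* stays clean; only the binary is dirty.
        return source + "\n" + LOGIN_BACKDOOR
    if "def compile_source" in source:
        # Target 2: when compiling the compiler itself, re-insert
        # this whole subversion. Even if the compiler's source is
        # audited, found clean, and rebuilt, the new binary is
        # still subverted -- that's what makes the attack so hard
        # to detect.
        return source + "\n# [self-replicating subversion inserted]"
    return source  # every other program compiles faithfully
```

The second branch is the crucial trick: once the subversion lives only in the binary, inspecting source code never finds it.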
Bruce Schneier wrote in 2006 about a then-recently-discovered way to detect “Trusting Trust”-style compiler attacks: David A. Wheeler's “diverse double-compiling” technique. (Note that most of the comments on that article misunderstand the procedure, so the discussion there may be less clear than Schneier's own summary.) I'm hoping that companies that create compilers use a detection system along these lines.
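The core of the detection procedure can be sketched in a few lines (again a toy model with made-up representations: “binaries” are strings, and a deterministic compiler's output depends only on the source, which is what lets the final comparison be exact):

```python
# Toy model of diverse double-compiling. You suspect compiler A's
# installed binary; you also have an independent, trusted compiler T
# of completely different provenance.

import hashlib

CLEAN_COMPILER_SRC = "def compile_source(src): ..."  # audited source of A

def obj(source: str) -> str:
    """Deterministic 'object code' for a source text (stand-in for
    the bit-identical output of a deterministic compiler)."""
    return "BIN:" + hashlib.sha256(source.encode()).hexdigest()[:12]

def run_compiler(binary: str, source: str) -> str:
    """Toy 'execution' of a compiler binary. A binary tagged '+evil'
    propagates itself whenever it compiles the compiler's source."""
    if "+evil" in binary and source == CLEAN_COMPILER_SRC:
        return obj(source) + "+evil"  # Thompson-style self-replication
    return obj(source)

trusted_T = obj("source of unrelated compiler T")  # independent toolchain

def ddc_test(suspect_binary: str) -> bool:
    """Diverse double-compiling: compile A's (clean) source with T,
    then use that stage-1 binary to compile A's source again. For an
    honest suspect, the stage-2 result is bit-identical to it."""
    stage1 = run_compiler(trusted_T, CLEAN_COMPILER_SRC)
    stage2 = run_compiler(stage1, CLEAN_COMPILER_SRC)
    return stage2 == suspect_binary
```

A subverted binary (`obj(CLEAN_COMPILER_SRC) + "+evil"`) fails the comparison, while an honest one passes, because the backdoor exists only in the suspect binary and cannot survive a rebuild that starts from the independent compiler T. The guarantee rests on T not being subverted in the same coordinated way, which is why T's independent provenance matters.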
But even so, attacks on compilers still strike me as a pretty scary scenario. As I wrote a couple of years ago, the whole class of threats that are completely invisible and largely undetectable is a troubling one.