On 13 January, the Guardian newspaper published a story on a supposed WhatsApp “backdoor” allowing “snooping on encrypted messages”. The article was based on a video demonstration by UC Berkeley graduate student Tobias Boelter of how WhatsApp handles a change in a user’s encryption key, as happens, for example, when a user switches to a new phone.
The Guardian is still hawking this as an “exclusive” story, noting how a number of privacy advocates they contacted reacted with grave concern to the claim of a “backdoor”, going so far as to describe it as a “huge threat to freedom of speech”. Interestingly, both the Guardian and Boelter have since tried to walk back their claims of a “backdoor”. WhatsApp responded swiftly to the publication of the article, sending the Guardian a denial that they had created, or ever would create, a backdoor usable by governments; after including this statement, the Guardian editors acknowledged the amendment in a note at the bottom of the article. What they have, to this date, not acknowledged is that they simultaneously changed half a dozen occurrences of the word “backdoor” to “vulnerability”, including in the headline. Boelter, for his part, professes to “find the ‘Backdoor vs. Vulnerability’ discussion uninteresting”—something to which he might profitably have alerted his two-day-younger self, whose video on YouTube still bears a title with exactly those words.
Now, there are two aspects of wider import in this story, which I will discuss in turn below. One is that the behaviour of the WhatsApp app is neither a backdoor nor much of a vulnerability. Reporting has, so far as I am aware, neglected to examine exactly how this behaviour could conceivably be exploited in any meaningfully practical way. Any serious discussion about surveillance and communications security should, however, include this aspect. The other is the recklessly uncritical behaviour of the Guardian journalists, who not only ran with a story they, by all appearances, barely understood, in order to publicise a headline claim, but later changed that claim surreptitiously—behaviour that is, in fact, highly unethical.
First, a look at how WhatsApp handles the encryption of its chats. WhatsApp uses Public Key encryption; to better appreciate the general principle of this kind of encryption, we’ll look at a form of this approach that makes use of a fundamental property of numbers. (See Addendum #5, though.) The principle is easy enough to understand: every positive whole number either a) is a prime number (ie, divisible only by itself and 1) or b) is the product of a unique set of prime numbers, its prime factors. Add to this the fact that to factor a given large number (ie, find its unique set of prime factors), you would (and I’m simplifying) basically have to try every possible combination of factors—which, for Very Large numbers, would take all the computers in the world thousands of years. Think about this for a minute and imagine two Large prime numbers (A and B) and their Very Large product (P). You immediately know that only A and B produce P (the prime factors of any non-prime number form a unique set); you also know that it’s practically impossible to find A and B if you only know P (prime factorisation is hard).
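To make the asymmetry concrete, here is a toy sketch (illustrative only: real cryptographic primes are hundreds of digits long, and `trial_factor` is a made-up brute-force helper, not a real attack):

```python
# Toy illustration of the one-way property: multiplying two primes is
# instant, while recovering them from their product by brute force is not.

def trial_factor(n):
    """Return the smallest prime factor of n by brute-force trial division."""
    candidate = 2
    while candidate * candidate <= n:
        if n % candidate == 0:
            return candidate
        candidate += 1
    return n  # n is itself prime

# The easy direction: two primes A and B, and their product P.
A, B = 2_147_483_647, 2_305_843_009_213_693_951  # both are Mersenne primes
P = A * B  # computed instantly

# The hard direction only finishes here because the inputs are tiny; for
# the P above, trial division would already need over two billion steps,
# and at real key sizes it becomes utterly infeasible.
assert trial_factor(101 * 103) == 101
```

The gap between the two directions grows explosively with the size of the primes, which is the whole point: publishing P gives nothing usable away.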
Now, in any Public Key–encrypted conversation, everybody has their own private key (which you can think of as a pair of numbers A and B) as well as everybody else’s public key (a Very Large P). Encrypting something with somebody’s public key ensures that only they can decrypt it (because the factors are unique), and you can safely publish that public key because it’s practically impossible to work back from it to your private key. The first obvious problem is where to store those keys. As easy as it should be for anybody to get your public key, it should be impossible for them to get your private key. In the case of a messaging app that handles key management, such as WhatsApp, you are entrusting the app with your private key—which they promise only ever to store locally on your device—while simultaneously expecting the app to automatically distribute your public key to any and all of your contacts, in order for everybody to be able to message you with as little hassle as possible.
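As a sketch of how a key pair built on this principle works, here is a textbook-RSA-style toy (my own illustrative example, with tiny primes and none of the padding a real system needs; WhatsApp’s actual protocol differs, see Addendum #5):

```python
# Toy, textbook-RSA-style key pair (illustrative only -- tiny primes,
# no padding, not secure). The public key contains the product n of two
# secret primes; the private exponent can only be derived by whoever
# knows those primes.

p, q = 61, 53                 # the secret primes ("A and B")
n = p * q                     # the public modulus ("P"), safe to publish
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent; needs p and q to compute

def encrypt(m, public=(e, n)):
    """Anyone can do this with the published key."""
    exp, mod = public
    return pow(m, exp, mod)

def decrypt(c, private=(d, n)):
    """Only the holder of the private key can do this."""
    exp, mod = private
    return pow(c, exp, mod)

message = 42
ciphertext = encrypt(message)
assert ciphertext != message
assert decrypt(ciphertext) == message
```

Note that `decrypt` depends on `d`, which in turn depends on knowing `p` and `q`: exactly the factorisation that is infeasible to recover from `n` alone at realistic sizes.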
What happens, then, if you get a new phone or reinstall WhatsApp for any other reason? It’ll generate a new pair of public and private keys and will distribute the new public key to all your contacts. In such a system of keys being automatically managed by an app, there arises the question of whether and how to alert users to such a change in encryption keys. Most of these key changes will be innocuous: people get new phones all the time. But it is, of course, possible that some nefarious agent (let’s call him Smith) has invaded the system, is masquerading as your good friend Thomas Anderson, and (via a WhatsApp server) just sent your app a public key that superficially looks like Thomas’s but in fact belongs to Agent Smith. If you were not alerted to the key change, you might keep sending Mr Anderson messages in the belief that only he can read them.
To prevent this, your app might show you an alert whenever one of your contacts’ encryption keys has changed. (WhatsApp can do this, but the setting is off by default.) That would enhance security, of course, but there would still be the problem of what to do with messages you tried to send to Mr Anderson while his phone was off and Agent Smith’s phone hadn’t yet come online to announce the key change to the WhatsApp server. Your app can do one of two things when it is alerted of a key change by the server: it can hold the messages back and ask you for explicit confirmation before resending them, which is what the Signal messaging app does; or it can resend the messages immediately (after showing you an alert, if you have alerts switched on), which is how WhatsApp behaves. This last behaviour means prioritising hassle-free and undelayed message delivery over safeguarding against a potential threat. (For criticism of that behaviour see below, Addendum #2.)
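Schematically, the two policies might be contrasted like this (a hypothetical, heavily simplified model of client behaviour, not actual Signal or WhatsApp code; `on_key_change`, `pending`, and `notify` are invented for illustration):

```python
# Hypothetical, simplified client-side handling of a key-change event
# (not actual WhatsApp or Signal code). 'pending' holds messages that
# were queued while the recipient was offline.

def on_key_change(pending, new_key, policy, notify=None):
    """Return the messages to (re)send now, given a key-change policy."""
    if notify:
        notify("Contact's security key has changed.")  # opt-in in WhatsApp
    if policy == "blocking":        # Signal-style: wait for the user
        return []                   # hold messages until user approves
    elif policy == "non-blocking":  # WhatsApp-style: deliver without delay
        return [(msg, new_key) for msg in pending]  # re-encrypt and resend
    raise ValueError(f"unknown policy: {policy}")

queued = ["hi Thomas", "are you safe?"]
assert on_key_change(queued, "key2", "blocking") == []
assert on_key_change(queued, "key2", "non-blocking") == [
    ("hi Thomas", "key2"), ("are you safe?", "key2"),
]
```

The trade-off in the text is visible in the two branches: “blocking” privileges safety at the cost of delayed delivery, “non-blocking” the reverse.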
Enter Mr Boelter. In his video, he accurately demonstrates WhatsApp’s behaviour of resending undelivered messages after a key change, which is called “non-blocking”. This is the behaviour WhatsApp is designed to exhibit, a fact that is neither new nor has ever been a secret. Mr Boelter shows this behaviour by physically switching the SIM card from one phone into another—as you would when you get a new phone. But in his video, which per its title purports to show a “backdoor/vulnerability”, he keeps telling us that this behaviour is something that, for example, a government might be able to exploit. The idea of it being a backdoor, however, whose practical feasibility neither Mr Boelter nor indeed the Guardian so much as mentions in passing, is entirely fanciful—or, in the rather more colourful language employed by Frederic Jacobs, a former developer for Signal: it is “major-league fuckwittage”.
Why would it be completely impractical for, say, a government Agent to exploit this behaviour? If that Agent were dealing with security-aware users who have their app set to show key change notifications, “they would risk getting caught by users who verify keys”, explains Moxie Marlinspike, co-inventor of Signal’s encryption protocol, which is also used by WhatsApp. “Any attempt to intercept messages in transit by the server is detectable by the sender”, says Marlinspike, which would be a very good prima facie reason not to use this route for actual snooping. But there are even more reasons to be sceptical of Boelter’s claims.
If we were to assume, as Boelter seems to be doing in his video, that our nefarious Agent would have to have physical access to a target’s phone in order to steal the target’s SIM card, it seems odd that Boelter wouldn’t mention that this scenario would depend on the user not noticing that his phone’s SIM card was missing—or indeed the phone itself. But suppose, for the argument’s sake, that our Agent was capable of spoofing SIM cards, as governments in fact are; then it wouldn’t be necessary to have physical access to the user’s phone. That, however, just leads to the next problem Boelter (and, yes, the Guardian) conveniently neglected to mention. Any time a user tries to communicate with the WhatsApp server using a particular phone number (whether encoded in their genuine SIM card or spoofed by an Agent), the server compares the key on the user’s device to the key stored on the server. If they don’t match, the user is again asked to verify their phone number and a new key pair is generated. This severely limits the time window during which an attack as envisioned by Boelter could happen. But let’s suppose, again for the sake of the argument, that an Agent is capable of keeping the recipient offline for an extended amount of time. The Agent would still have to ensure that exactly at this time the sender tries to send sensitive material, before the Agent connects to the server with his spoofed SIM card and receives the messages the sender’s phone automatically resends. Of course, the Agent would also have to make sure not to connect to the server too soon, for then the sender might not yet have tried to send anything.
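The re-verification step might be modelled like this (a hypothetical sketch of the server-side check just described; the function and data names are invented, and this is not WhatsApp’s actual implementation):

```python
# Hypothetical sketch of the server-side check described above (not
# WhatsApp's actual code): when a device connects, the key it presents
# is compared with the key the server has on file for that number.

registered = {"+15551234": "key-anderson"}   # server's key store

def on_connect(number, device_key):
    """Return the action the server takes for a connecting device."""
    if registered.get(number) == device_key:
        return "deliver-messages"            # keys match: normal session
    # Mismatch: the number must be re-verified (e.g. via an SMS code) and
    # a fresh key pair registered -- shrinking the attacker's time window.
    return "require-reverification"

assert on_connect("+15551234", "key-anderson") == "deliver-messages"
assert on_connect("+15551234", "key-smith") == "require-reverification"
```

The point of the sketch is the second branch: a spoofed device cannot simply slip into an existing session, which is what narrows the attack window discussed above.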
In any of these cases, regardless of the considerable lengths an Agent would have to go to in order to even have a chance of exploiting WhatsApp’s non-blocking behaviour, there would be no guarantee that the Agent would be able to skim off even a single message. Even worse: In any of these cases, even when the Agent did successfully skim off some messages, any security-aware user would almost instantly be able to tell, by comparing keys, that their conversation had been tampered with. Given these constraints, any putative surveillance operation trying to exploit WhatsApp’s non-blocking behaviour would have to be a one-off. Neither would it ever again work against a security-aware user nor would the operation stay secret; the world would know in no time that this kind of thing was going on. As such, there is simply no practical scenario in which WhatsApp’s behaviour could be systematically exploited, let alone be used as a permanent backdoor.
Of course, it is possible to imagine scenarios where a nefarious agent, for example, is able to spoof delivery notifications displayed on a sender’s phone, so that they keep sending messages our Agent is able to intercept—this is something Boelter suggests might be possible. But even putting to one side the problem that comparing keys would still reliably expose this scheme, this would assume that an Agent can not just eavesdrop on data transmissions—either criminally or with a technically legal warrant—but that he has the capability to inject code or other instructions into the server at will. However technically possible such a scenario is, it has nothing to do with criticism of any particular implementation of any given security protocol. If your security is breached that deeply, then by definition nothing can be secure. But in that case, the Agent could conceivably just as well instruct a client app, in contravention of the security protocol ostensibly adhered to, to send them a user’s private key. (For a caveat, see Addendum #4.) That way, there wouldn’t even be any immediate way to detect the interference. Put simply: If breaches that deep are feasible, then the most moronic Agent in the world wouldn’t try to use the messy exploit of WhatsApp’s non-blocking behaviour—or indeed need to.
Finally, there remains the question: Why didn’t the Guardian’s author, or indeed any Guardian editor, think to check up on any of the problems with Boelter’s claim that are apparent to anybody with a passing acquaintance with cryptography and a critical mindset? Did they try and contact the makers of the Signal protocol? As Marlinspike says in his blog post, the Guardian not only didn’t contact the Signal people, they didn’t even reply when contacted by them.
What the Guardian did do was to take Boelter’s claims at face value and publicise them without any critical checking. And in order to generate the desired attention, they then confronted some privacy activists with the uncritically repeated claims of a “backdoor”, and some of those activists dutifully provided them with quotable outrage about how this was endangering people’s lives. This the Guardian then marketed as an “exclusive” story. Of course, there is nothing exclusive about this except the manufactured outrage of the people these “journalists” duped into colluding with them, but nobody at the Guardian seems to be overly troubled by that.
And as the icing on the cake, there is the “amendment” to the article, in which the Guardian acknowledge that WhatsApp had made an official statement saying that there was no backdoor. Solely on the basis of this denial by an interested party (see below, Addendum #3), the article was changed so that all occurrences of “backdoor” were replaced by “vulnerability” and a paragraph containing the official WhatsApp statement was inserted. The amendment only acknowledges the insertion. As of this moment, the Guardian are still hiding from their readers the fact that the article used to make the claim, including in its headline, that there was a backdoor in WhatsApp. Millions of people still believe this claim, not least, one will have to admit, because scores of other “journalists” mindlessly repeated the Guardian’s claim just as the latter did Boelter’s. To have made the claim in the first place was utterly irresponsible in its lack of critical attitude; not to issue a formal correction for it but to change it surreptitiously is highly unethical. And the combination of serving as an unwitting stooge for claims the “journalists” really only relayed, not unlike stenographers, and showing zero accountability in handling their mistake should serve as a warning to the whole profession.
1. Bruce Schneier adds a valuable aspect to the discussion: “How serious this is depends on your threat model. If you are worried about the US government—or any other government that can pressure Facebook—snooping on your messages, then this is a small vulnerability. If not, then it’s nothing to worry about.”
2. It would be entirely fair to pressure WhatsApp/Facebook to enable key change notifications by default, or to question the wisdom of even giving people the option to switch them off. It would be just as fair to ask WA/FB to change the app’s behaviour to blocking, as in Signal. But the UX considerations cited by WA/FB and Moxie are also fair and deserve to be taken seriously. Making users more aware of security issues deserves careful thought; so does getting and retaining those users.
3. It’s as if at the Guardian they had never heard about how US newspapers were complicit in covering up torture, by ceasing to call it torture because the term had become “contentious”—ie, the government had issued a denial—and that journalists, instead of implicitly aspiring to a naive and incoherent concept of objectivity (which journalism professor Jay Rosen has dubbed The View from Nowhere), have a duty to pursue the kind of objectivity that comes from testing ideas against relevant facts and coming to an independent conclusion.
4. Barring an exploitable bug in the client app, this would usually necessitate rolling out an updated app version. The example here is mainly meant to illustrate the fact that an attacker who already has deep access to a provider’s systems has infinitely more appealing attack routes at their disposal than WhatsApp’s key change handling behaviour.
5. WhatsApp in fact uses elliptic-curve cryptography, whereas our prime factorisation example would be characteristic of the now almost obsolete RSA encryption system. The general principle, however, is the same.