In Response to Guardian’s Irresponsible Reporting on WhatsApp: A Plea for Responsible and Contextualized Reporting on User Security

Dear Guardian Editors,

You recently published a story with the alarming headline “WhatsApp backdoor allows snooping on encrypted messages,” and described the behavior in question as a “security loophole.”

Unfortunately, your story was the equivalent of putting “VACCINES KILL PEOPLE” in a blaring headline over a poorly contextualized piece. While it is true that in a few cases, vaccines kill people through rare and unfortunate side effects, they also save millions of lives.

You would have no problem understanding why “Vaccines Kill People” would be a problematic headline for a story, especially given the context of anti-vaccination movements. But your series of stories on WhatsApp does the same disservice and poses a similar public health threat against secure communications.

The behavior described in your article is not a backdoor in WhatsApp. This is the overwhelming consensus of the cryptography and security community. It is also the collective opinion of the cryptography professionals whose names appear below. The behavior you highlight is a measured tradeoff that poses a remote threat in return for real benefits that help keep users secure, as we will discuss in a moment.

Your story has been reported widely around the world. For example, it was picked up by the Turkish media, including what remains of its dissident press. The story was carried in Turkey’s largest opposition newspaper, using your phrasing and paired with a statement by the head of Turkey’s internet administrative body–which oversees all the censorship and surveillance decisions–who quickly jumped to frame WhatsApp as unsafe. The message heard by activists, journalists and ordinary people around the world was clear: WhatsApp has a backdoor, it’s insecure, don’t use it.

Since the publication of this story, we’ve observed and heard from worried activists, journalists and ordinary people who use WhatsApp, who tell us that people are switching to SMS and Facebook Messenger, among other options–many services that are strictly less secure than WhatsApp.

You might argue that your article recommended switching to Signal, so why aren’t all the users switching to Signal? But information security does not happen in a vacuum. The trade-offs that you falsely characterized as a backdoor make WhatsApp more reliable and usable, characteristics that ultimately keep people within the safety of the app.

Here’s the crux of the matter: Signal and WhatsApp–which share the same protocol designed by the same team at Open Whisper Systems–both have to decide what to do when the recipient of an undelivered message changes phones or SIM cards. How should the app deal with the fact that there is now a new phone and SIM card, hence new keys? There is no avoiding this question. Every app must make a choice; there is no simple answer.

WhatsApp sends along the undelivered message, and if you have notifications for this event turned on, it informs you after it sends the message that the recipient has a new phone.

Signal does the opposite: it blocks the message from being sent until after you confirm that you are okay with the fact that the recipient has a new phone.
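To make the trade-off concrete, here is a minimal, purely illustrative sketch of the two policies described above. This is not WhatsApp’s or Signal’s actual code; the `Recipient`, `deliver_pending`, and policy names are invented for illustration, and the real apps implement this inside the Signal protocol’s key-verification machinery.

```python
from dataclasses import dataclass, field

@dataclass
class Recipient:
    key: str  # public key currently on record for this contact
    pending: list = field(default_factory=list)  # messages queued while undelivered

def deliver_pending(recipient: Recipient, new_key: str, policy: str, confirm) -> list:
    """Flush queued messages after learning the recipient's key.

    policy="notify_after"   -- send first, then notify the sender (WhatsApp-style)
    policy="block_until_ok" -- warn first, send only if the sender confirms (Signal-style)
    Returns the list of messages actually sent.
    """
    if recipient.key == new_key:  # no key change: just deliver
        sent, recipient.pending = recipient.pending, []
        return sent
    if policy == "notify_after":
        sent, recipient.pending = recipient.pending, []
        recipient.key = new_key
        print("Note: this contact's security code changed")  # shown only if enabled
        return sent
    if policy == "block_until_ok":
        print("Warning: this contact's security code changed")
        if confirm():  # sender explicitly accepts the new key
            sent, recipient.pending = recipient.pending, []
            recipient.key = new_key
            return sent
        return []  # nothing sent; messages stay queued
    raise ValueError(policy)
```

Under the first policy the queued messages always go out, maximizing reliability; under the second, delivery stalls until a human acts, maximizing caution at the cost of dropped or delayed messages.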

One can debate these trade-offs, but each is appropriate for the user base that WhatsApp and Signal serve. (EFF has useful guides for how to choose and make communication apps more secure; see here and here – keeping in mind that very few people are the kind of “high risk” users that EFF refers to–and such users have very different considerations than the general public).

WhatsApp’s behavior increases reliability for the user. This is a real concern, as ordinary people consistently switch away from unreliable but secure apps to more reliable and insecure apps. The imagined attack on WhatsApp, on which your reporting is based, is a remote scenario requiring an adversary capable of many difficult feats. Even then, the threat would involve only those few undelivered messages, if they exist at all, between the time the recipient changes their phone and the sender receives a warning.

In the full scheme of things, this is a small and unlikely threat. The preconditions of the attack (which is not a backdoor) would in practice mean that the attacker had many other ways of getting at their target.

You might ask: What’s the harm? Why doesn’t WhatsApp just use better settings? Why don’t people just switch to Signal? If your reporters had taken the time to do the research, these questions could be answered.

Signal is well-designed. Many in the security community use and consistently recommend it. However, the very thing that makes Signal a good recommendation for people at high risk–that it drops messages at any sign of a hiccup–prevents a large number of ordinary people from adopting it. Our community has used Signal for a long time and has been trying to convert people to it, but its inevitable delivery failures (some by design, to keep users safer, and some due to bandwidth or other issues) mean that we often cannot convince people to use it despite a great deal of effort–even people who have a lot at stake.

The reason people, including journalists and activists, use WhatsApp over Signal isn’t because people are flaky, but because in the real world, reliability, usability and a large user base are key to security. Activists and journalists communicate a lot with ordinary people, and need to be certain that their messages are communicated as reliably as possible, using the same system as their recipient will use–hence the advantage of WhatsApp with its huge user base.

WhatsApp’s behavior around key exchange when phones or SIM cards are changed is an acceptable trade-off if the priority is message reliability. People do not have a free choice in what apps to use; they gravitate towards ones with the largest user base (the ones the people they want to connect to are using) and to ones that are seamless to use. Causing unnecessary and unwarranted concern about WhatsApp is likely to make many users give up on the idea of using secure apps altogether. Again, think of causing alarmist doubts over vaccines in general because of a very rare threat of side effects to a few.

It might seem the answer is to add warnings, since more information is always better. But giving people an overload of warnings they do not understand does not enhance security; instead, people stop using such apps and switch to reliable, warning-free but unsafe options like SMS. Or they simply click “yes” to whatever dialogue pops up, making warnings even less useful. This phenomenon–called “alarm fatigue,” “warning overload” or “dialogue fatigue”–is well researched: indiscriminate floods of warnings actually decrease safety and security. The effect is familiar from cockpit design and health care devices as well as information security. Google itself conducts scientific research on effective and minimalistic security communications.

There is a reasonable debate about which alarms should be on by default and how they should be deployed, but it is a perfectly defensible security consideration to say that an app with a billion-plus user base cannot turn on all possible warnings by default.

WhatsApp effectively protects people against mass surveillance. Individually targeted attacks by powerful adversaries willing to put effort into compromising a single person are a different kind of threat. If that is the threat model in mind, then merely recommending Signal is irresponsible. Your reckless, uncontextualized piece posits a mythical Snowden-type character, with a powerful, massively resourced adversary, for whom WhatsApp would not be a good choice. From that it concludes that WhatsApp is unsafe for a billion people for whom it is, at the moment, among the best options for secure communication.

To further complicate things, switching to Signal may not be advisable in some settings, because it marks you as an activist. There are many threat models under which WhatsApp is the safest option, and there are reports of people around the world being jailed merely for having installed an encryption app. It’s fine to recommend Signal and to broaden its user base. It’s not fine to fearmonger and scare people away from WhatsApp (which runs the same protocol as Signal) because of a minor and defensible difference in the kind of warnings it gives and the blocking behavior of a few undelivered messages when someone changes phones or SIM cards.

If you had contacted independent security researchers, many of whom, including the EFF, have written pieces calling your story irresponsible, they could have explained the issue to you and suggested how to report it responsibly. Your story notably lacks quotes, responses, or explanations by security experts in the field. Instead, it hinges on the claims of a single well-meaning graduate student. It’s important to recognize the tendency of many security researchers, especially inexperienced ones, to overestimate the practical impact of the vulnerabilities they find, and to overlook how security needs play out in the real world, the massive body of research on actual user behavior in the face of friction and warnings, and the need for independent verification and context.

The proper way to report this story would have been to say: “Here’s a difficult attack that could allow a sophisticated, resourceful adversary willing to invest a good deal of effort, some components of which have never been demonstrated in practice, to read a few messages that had been sent but have not yet been read after events like the intended recipient changing phones or SIM cards. This issue is only of concern during that case: messages sent but not yet read. If you are concerned, you should change this setting on WhatsApp. Beware that even if you change this setting, you will only get the warning after the delayed message is sent. If this is an unacceptable risk to you, switch to Signal if its smaller user base is workable for you and does not pose other threats to you. In fact, if even this is a big enough risk for you, you should contact the EFF or a similar organization directly as we obviously cannot give you advice via a general newspaper article, but just wanted to give everyone a general heads up.”

To recap:

1: The WhatsApp behavior described is not a backdoor, but a defensible user-interface trade-off. A debate on this trade-off is fine, but calling this a “loophole” or a “backdoor” is not productive or accurate.

2: The threat is remote and quite limited in scope, applicability (requiring a server or phone number compromise) and stealthiness (users who have the setting enabled still see a warning–even if after the fact). Because warnings exist, such attacks would almost certainly be quickly detected by security-aware users, further limiting this method.

3: Telling people to switch away from WhatsApp is very concretely endangering people. Signal is not an option for many people. These concerns are concrete, and my alarm is from observing what’s actually been happening since the publication of this story and years of experience in these areas.

4: You never should have reported on such a crucial issue without interviewing a wide range of experts. The vaccine metaphor is apt: you effectively ran a “vaccines can kill you” story without interviewing doctors, and your defense seems to be, “but vaccines do kill people [through extremely rare side effects].”

Unfortunately the damage is done, and it is profound. People’s lives and safety are at stake.

I recommend retracting the story, issuing an apology, and publicizing the fact that the attack is very hard, the threat is tiny (and only to a few messages caught in transit, if they exist at all), and warning mechanisms (even if after-the-fact) exist, so that even if this unlikely threat occurred, people would know about it.

I also strongly recommend taking steps to ensure that reporters do not report on safety-critical technical subjects like this again without consulting experts.

Like many others, I have great respect for the Guardian, which is why the harm from this is so real: People believe that you perform due diligence on matters critical to their lives and safety.

Considering the stakes, security reporting must be measured and well-researched. My unfortunate prediction is that the harm from your story will be real and widespread, while corrections and rebuttals will likely receive minimal coverage.

Yours in deep disappointment,

Zeynep Tufekci

Associate professor & Writer

Author of forthcoming book: Twitter and Tear Gas: The Power and Fragility of Networked Protests

Signing In Support of Better Reporting of Security:

(Affiliations are for identification and qualification purposes only and do not represent nor bind the institutions named).

Matthew Green (Professor of cryptography at Johns Hopkins)

Bruce Schneier (Cryptographer, Fellow, Berkman Center for Internet and Society at Harvard University)

Isis Lovecruft (Core Developer, The Tor Project)

Matt Blaze (Cryptographer, Computer Science Professor, University of Pennsylvania)

Thaddeus T. ‘Grugq’ (Security Researcher)

Eva Galperin (Director of Cybersecurity, Electronic Frontier Foundation)

Emin Gün Sirer (Professor of Computer Science, Cornell University)

Steven Bellovin (Percy K. and Vida L.W. Hudson Professor of Computer Science at Columbia University)

Jackie Stokes (Global director of incident response at Intel Security)

Tony Arcieri (Security Engineer, Chain)

Avi Rubin (Professor of Computer Science at Johns Hopkins University)

Deirdre Connolly (Software Security Engineer)

Laurens (lvh) Van Houtven (Cryptographer, founder at Latacora)

Philipp Jovanovic (Cryptographer)

Zaki Manian (Cofounder, Skuchain)

Chris Palmer (Security Engineer, Google)

Aaron (azet) Zauner (Security Researcher, Independent)

Filippo Valsorda (Cryptography Engineer, Cloudflare)

Thomas H. Ptacek (Latacora)

Scott Arciszewski (CDO, Paragon Initiative Enterprises)

George Tankersley (Cryptography Engineer, Cloudflare)

David Adrian (Security Researcher, University of Michigan)

Heather Mahalik (Forensic security researcher)

Tom Ritter (Security Engineer, Mozilla)

Nicholas Weaver (Security researcher, International Computer Science Institute and the University of California at Berkeley)

Stephen Checkoway (Assistant Professor, University of Illinois at Chicago)

Cynthia Taylor (Clinical Assistant Professor, University of Illinois at Chicago)

Katie Moussouris (Luta Security)

Christina Morillo (Information Security Professional, Previous: Morgan Stanley)

Nick Sullivan (Cryptography Engineer, Cloudflare)

Sarah Clarke (Author Infospectives blog and Privacy Architect at Privasee Ltd)

Joseph Lorenzo Hall (Chief Technologist, Center for Democracy and Technology)

Katherine McKinley (Staff Security Engineer, Mozilla)

Kenneth White (Director, Open Crypto Audit Project)

Jonathan Zdziarski (Forensic scientist, security researcher and O’Reilly author)

J.C. Jones (Cryptographic Engineering, Mozilla)

Gillian (Gus) Andrews (Security usability researcher)

David Cash (Assistant Professor of Computer Science at Rutgers University)

Yan Zhu (Technology Fellow at EFF / Security Engineer at Brave)

Stefano Tessaro (Cryptographer, Assistant Professor at University of California Santa Barbara)

Ryan Hurst (Public Trust Services, Product Management, Google)

William Budington (Security Engineer, Electronic Frontier Foundation)

Alec Muffett (Security Researcher)

Nicolas Christin (Associate Research Professor, School of Computer Science, and Engineering & Public Policy, Carnegie Mellon University)

Chris Kanich (Assistant Professor, University of Illinois at Chicago)

Diego F. Aranha (University of Campinas)

Bart Preneel (Professor, KU Leuven, COSIC)

Peter Honeyman (Research Professor of Computer Science and Engineering, University of Michigan)

Collin Mulliner (Mobile Security Researcher, co-author Android Hacker’s Handbook)

Will Strafach (Mobile Security Analyst and Former iOS Jailbreaker)

Eric Mill (18F)

Richo Healey (Senior Security Engineer, Stripe)

Tim Taubert ‏ (Security Engineer, Mozilla)

Stephan Somogyi (Security and Privacy Product Management, Google)

Joshua Kroll (Computer Scientist, security & encryption, Cloudflare)

Robert Jorgensen (Cybersecurity Program Director/Assistant Professor, Utah Valley University)

Stefano Zanero, Associate Professor, Politecnico di Milano

Brendan Dolan-Gavitt (Assistant Professor, Computer Science and Engineering, NYU School of Engineering)

Jon Callas (Cryptographer, co-founder of PGP Corp, Silent Circle)

James Nettesheim (Security Engineer, Google)

Morgan Marquis-Boire (Director of Security @FirstLookMedia)

Geoffrey King (Civil liberties attorney, University of California at Berkeley)

Riana Pfefferkorn (Cryptography Fellow, Center for Internet and Society, Stanford Law School)

Tom Lowenthal (Staff Technologist, Committee to Protect Journalists)

Martin Shelton (Security User Researcher, soon at Google)

Cory Scott (Chief Information Security Officer, LinkedIn)

Paulo Barreto (University of Washington Tacoma, University of São Paulo)

Justin Troutman (Cryptographer, PM, Freedom of the Press Foundation)

Garrett Robinson (CTO, Freedom of the Press Foundation)

Christian Ternus (Security architect; Akamai)

Daira Hopwood (Security and cryptographic engineer, Zcash Company)