It is the “S” in HTTPS that matters, and it doesn’t break the web.
Sir Tim Berners-Lee published some notes on design issues around securing the web.
http://www.w3.org/DesignIssues/Security-NotTheS.html
This is a response to part 1 of those notes. Like Sir Tim’s note, it is a personal view only.
Sir Tim is concerned that moving the web to https: URLs will break the web, and especially that applications which take links-as-data and use them as links-to-data will be broken by measures like strict insecure-content blocking. He argues that the https: URL scheme was a design mistake that “confuses information about security levels with the identity of the resource”.
I disagree. https: is a different scheme and it implies a different meaning for resources. When you take something that is currently available at an http: URL and make it available at an https: URL, it’s no longer the same thing. It is a new thing, something that makes new promises to the user and to other resources that depend on it. If that new thing is broken because it has dependencies that don’t keep those promises, it’s not the case that the security-conscious browser vendors broke “back compatibility with old content”. What happened is that you didn’t finish the job of making your new thing.
Now, security engineers all over the place are trying their best to make that job easier. That someone as savvy as Sir Tim can entertain the misapprehension that http: and https: resources are semantically identical shows just how good a job they have been doing, going back 20 years. But it’s actually difficult to make security happen reliably. There’s a reason lots of smart people work so hard at it and still fail all the time. It’s magical thinking to expect that we can just make everything better without doing careful end-to-end engineering for every part of a system, and without understanding the abstractions we need and the promises they must keep.
If we weaken or make uncertain the guarantees of HTTPS for the millions of resources which have done the work, finished the job, and are keeping their promises for billions of users, in order that we might declare half-measures against incompetently-imagined adversaries as “secure”, well, that, I would argue, would be breaking the web.
TLS and HTTPS are not about “creating a separate space…in which only good things happen.” They are about abstracting away the security of the network so we can worry about other things, the same way that TCP/IP abstracts away things like packet routing, loss and reassembly. HTTPS makes it so you don’t have to worry about being spied on, phished, ad-injected, misdirected or exploited, whether you’ve got a loopback cable to your server, you’re surfing from a coffee shop, your home router has malware, or a powerful government agency has you in its sights.
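To make that abstraction concrete, here is a minimal sketch in Go (https://example.com/ is a placeholder URL I chose for illustration, not anything from Sir Tim’s note): from the application’s point of view, fetching an https: resource looks exactly like fetching an http: one, because the handshake, certificate validation and encryption all happen below the API, the same way TCP hides retransmission and reassembly below the socket.

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The transport's security is abstracted away: this call is
        // identical to an http: fetch except for the scheme. The TLS
        // handshake, certificate validation and encryption all happen
        // below this line, just as TCP handles retransmission and
        // reassembly below the socket API.
        resp, err := http.Get("https://example.com/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Printf("read %d bytes over an authenticated, encrypted channel\n", len(body))
    }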
But to succeed in fulfilling the promise of that abstraction, https: resources can’t depend on resources that aren’t secure. If recent revelations of massive spying have shown us anything, it’s that our adversaries are incredibly capable. They will exploit any weakness, and they can do so at scale. That’s why security is difficult. A tire with only one hole in it is still a flat tire. To illustrate, in a non-security way, why an incomplete abstraction fails: how useful would TCP be if only some dropped or re-ordered packets were retransmitted and reassembled correctly, and you couldn’t always tell when, so it wasn’t really reliable? Would you use that? Should it even still be called TCP? An abstraction like this must be reliable to be useful, especially in an adversarial context.
Sir Tim laments that we couldn’t have “one web” with one scheme and a smooth upgrade of HTTP to be more secure, comparing it to the upgrade of, e.g., IPv4 to IPv6. But we did have exactly that: it just happened inside the https: scheme. We’ve evolved much better security (five major protocol revisions from SSLv2 to TLS 1.2, plus things like SPDY and now QUIC Crypto, all over the same https: scheme) and it’s been pretty much perfectly smooth for the link structure of the web. The critical point about this evolution, however, is that it has always kept the promises of the network security abstraction while making them incrementally stronger. What we can’t smooth over is the distinction between network security and absent or unreliable network security for the applications and users that rely on these promises, because the semantic distinction between those resources and how they are used is real (unlike IP versions, which carry no semantic distinction for users).
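A small Go sketch of that point, under the same placeholder-URL assumption as before: the https: URL a client requests never changes, while the protocol that fulfills it keeps being upgraded. The tls.VersionTLS12 floor here is my choice for the example, not something from the original note.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // The scheme in the URL is the stable part of the contract; the
        // protocol underneath it has been revised repeatedly (SSLv2,
        // SSLv3, TLS 1.0/1.1/1.2). Here we insist on a modern revision
        // without touching a single link.
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    MinVersion: tls.VersionTLS12, // refuse obsolete revisions
                },
            },
        }
        resp, err := client.Get("https://example.com/") // same URL as ever
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Printf("negotiated protocol version: %#x\n", resp.TLS.Version)
    }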
We do have avenues available to make the web more secure by default. We can redirect from insecure to secure schemes. We can build clients that optimistically attempt secure connections, and we can let servers give clients hints about when to do that. We can even optimistically encrypt HTTP, without explicitly promising users (or the other resources that rely on it) that anything meaningful has happened, and hope that it raises costs for our adversaries enough that they give up.
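The first two of those avenues are easy to demonstrate. Below is a sketch, with assumptions I am making for illustration (cert.pem and key.pem stand in for a real certificate and key): port 80 does nothing but redirect to the https: equivalent, and the TLS listener sends a Strict-Transport-Security header, the standard hint that tells clients to attempt secure connections first from then on.

    package main

    import "net/http"

    func main() {
        // Avenue 1: redirect every insecure request to its https: equivalent.
        go http.ListenAndServe(":80", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                target := "https://" + r.Host + r.URL.RequestURI()
                http.Redirect(w, r, target, http.StatusMovedPermanently)
            }))

        // Avenue 2: serve the real content over TLS, and hint to clients
        // (via HSTS) that they should go straight to the secure scheme
        // in the future.
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Strict-Transport-Security",
                "max-age=31536000; includeSubDomains")
            w.Write([]byte("hello over a secure channel\n"))
        })
        // cert.pem and key.pem are placeholders for a real certificate/key pair.
        http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux)
    }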
We can silently raise the bar for resources that make no promises. But we cannot undo the promises security-critical applications have been making to their users, and which those users depend on, and we shouldn’t claim they are “breaking the web” or a threat to it. There’s a reason why so many of the most important information services users interact with on the modern web are HTTPS only. Those promises don’t break the web; they enable the trust that makes the web possible.
-Brad Hill