Recent SPDY news comes from some big brands: Twitter, Mozilla, Amazon, Apache, Google.
- Mar, 2012
Twitter now supports SPDY
- Feb, 2012
A draft of the SPDY specification is submitted to the IETF httpbis working group for consideration in the HTTP/2.0 effort.
- Feb, 2012
Firefox 11 now supports SPDY
- Jan, 2012
Google publishes a SPDY Tech Talk and announces that SPDY over SSL is now faster than HTTP without SSL for Google sites.
- Dec, 2011
Apache mod_spdy enters beta
- Nov, 2011
Amazon launches Kindle Silk browser with SPDY support
Looking forward to seeing what comes next!
3 thoughts on “SPDY Momentum Fueled by Juggernauts”
I like the idea of SPDY gateways deployed at the ISP but doesn’t that mean it will need to act as a man-in-the-middle of the SSL session? Ideas on how to work around this?
Hi, Peter –
No, it doesn’t mean this at all. Security has been a second-class citizen for HTTP since its beginnings. All of the caching problems which people sometimes attribute to SPDY are really just problems that weren’t solved with SSL years ago. But – good news – they could have been.
What happened was that we allowed the world to become dependent on “transparent proxies”. These are proxies which sit between you and the net and can do caching or filtering or other tricks to look at your network traffic. Of course, these transparent proxies only worked because the web is fundamentally insecure. Today, these transparent proxies have proliferated like crazy. But Google, Twitter, Facebook, and others are already committed to moving to 100% SSL, all the time, with or without SPDY. As these sites move to SSL, the transparent proxies won’t be able to see the content. Companies attempting to prevent data leakage through these encrypted sites *already* have to come up with a solution, and the solutions are ugly:
a) disallow your users from accessing the SSL-enabled site
b) require your users to get special permission (white list) for accessing the SSL-enabled site
c) allow the access, but lose the ability to filter it/inspect it/cache it/etc
d) use SSL MITM – a VERY expensive process that requires all clients to load a special CA root cert on their desktops, phones, and other network devices.
But there is a better solution! The problem is the “transparent” proxies. I presented this idea to the IETF yesterday, and we could implement this today. Browsers need to enable SSL connectivity to a trusted, explicit proxy. The browser will do SSL to the proxy, which will terminate the SSL and then do SSL to the origin server. The proxy can then do all the filtering/caching/value-add that the old transparent proxy did. But this new type of proxy is *way* better than the transparent proxy:
a) The user “opts in” to the proxy (this can be made seamless so users don’t have to know and it is easy to config/administer)
b) The security is better – you now have SSL both to the proxy and out of the proxy. No more eavesdropping from others.
c) It’s super easy to configure – no need to distribute fake SSL certificates which cause strange browser alerts to pop up.
d) You can honor HSTS for banking transactions so they are still end-to-end secure
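As a rough sketch of what the “opt in” could look like in practice: browsers that support the HTTPS proxy type in proxy auto-config (PAC) files (Chrome does; Firefox added it later) can be pointed at an explicit proxy over SSL. The hostnames below are hypothetical, and a real deployment would push the PAC URL via normal browser administration tools.

```javascript
// Sketch of a PAC (proxy auto-config) file for an SSL-connected explicit proxy.
// proxy.example.com and .internal.example.com are hypothetical names.
// The "HTTPS" directive (unlike plain "PROXY") tells the browser to speak SSL
// *to the proxy itself*, so the browser-to-proxy hop is encrypted too.
function FindProxyForURL(url, host) {
  // Keep local/internal hosts direct; everything else goes through the proxy.
  if (host === "localhost" || host.endsWith(".internal.example.com")) {
    return "DIRECT";
  }
  // Fall back to a direct connection if the proxy is unreachable.
  return "HTTPS proxy.example.com:443; DIRECT";
}
```

The proxy terminates the client's SSL session with its own real certificate, then opens a separate SSL connection to the origin, so no fake CA roots ever need to be distributed to clients.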
A year later I caught your reply to my question. I’m a big fan of your trusted, explicit proxy idea but it looks like the TLS working group is less keen on it. What are your current thoughts on how we can push this idea forward? Are you still an advocate?