Google Will Rue The Day It Invited the Gov’t to Net Neutrality

A few years ago, Google started poking the government to act on Net Neutrality.  Google’s motives are well intentioned, but practically, the move is foolish.  We’re inviting a beast into our industry that is more devastating than any beast we’ve yet imagined.  Eventually, Google will come to oppose the very legislation that it helped create.

The main problem with Net Neutrality is that we don’t need it.  The market works.  There are many choices for network access today, and you can access your favorite sites from Amazon to Wikileaks from almost anywhere in America.  We have access to the internet at home, in libraries, in schools and at work.  Who is not getting access again?

For individuals, the real debate is that some people want more bandwidth in more remote areas, and they want someone else to pay for it.  Steve Wozniak, the eccentric co-founder of Apple, was very clear about this.  He wants to live on a remote hill, pay $29/mo, and have everyone else be required to pay to run the cables to his secluded hideaway for fast internet access.  Steve’s argument is not new.  Many people have made the same argument far more elegantly.  They claim it “costs too much” for the high speed links, that dialup is unreasonably slow, that “there is only one provider in my area”, etc.  None of those arguments hold.  These very same people still have access through wireless, through dialup, at work, at school, at the library, and at about a half million Starbucks or McDonalds across the planet.  And their access grows every single day!  They just want it cheaper.

Finally, the most important part of net neutrality is ensuring that content is available to everyone.  (No, this doesn’t mean you get to watch “Family Guy” or your favorite TV show for free.)  Most of us hold at least some fear that eventually a big company (Comcast, AT&T, Verizon, or Google) will screw the little guy by using its monopoly to restrict content and maximize profits.  This fear is reasonable, because censorship on a grand scale would be a horrible thing for all of us.  But it’s not happening, and there is no evidence of it happening any time soon.  Further, if it ever did happen, customers can and would revolt.  Competition provides us everything we need.

But our fears of corporations are grossly misplaced.  There is someone far more scary, with vastly greater power, that we should fear – the US government.  There is simply no company that can wreak devastation at the scale of the US government.  Whose rules are more scary – Comcast’s rules (sorry to pick on you, Comcast!), which would only apply to those that pay Comcast money?  Or Uncle Sam’s rules?  And every 4 years we elect a new set of politicians.  Even if we trust the politicians today, what happens when we get into a war, or have a 9/11-type event, and suddenly a “temporary” halt to terrorist communications is required?  (Did we forget about the TSA already?)  Who’s the terrorist?  Is Wikileaks a terrorist?  Is Wikipedia?  What if you have a science blog about particle physics?  Can you be shut down too?  The government is what you should fear.  Not a piddly little Microsoft, Google, or Comcast.

Ok, but why will Google rue this?

With continued prodding from Google and others, legislation will be passed, and today was a starting point.  Whatever they pass will be  costly to companies and will cause that cost burden to be passed on to customers like you and me.  Further, it will put America at a disadvantage in our global marketplace.  All to solve a problem that doesn’t exist.

The first problem they’ll create is that of cost.  Every law has enforcement, and that means we pay people to define the specific rules, monitor compliance with those rules, and enforce punishments for those that do not obey.  This will cost billions of dollars and be spread into the margins of every content provider and ISP in America.  Of course, those companies will pass the cost onto their customers.  This means the price of DSL, Cable, AOL, and Netflix will all rise.  (I still think costs are heading down overall, but they could decrease faster without net-neutrality)

Second, it will snowball from “fair access” into content filters (aka censorship).  Initially, it might include banning certain forms of pornography.  It might even seem like something you agree with.  But with each new regulation, our freedoms are diminished.  Then, we might enter into a particular international conflict “requiring” a ban on certain types of communications to keep us safe.  With the content filters in place, the variety and types of information we can publish and read diminishes, and it is all out of our control.   You can’t switch providers to escape the unfairness of it all.

Finally, remember that America is in a global marketplace.  If our legislators legislate too much, Internet companies will simply move out of the country, taking the jobs, the profits, and the tax revenues with them.  This has already happened – gambling is alive and well on the internet – it just runs out of Costa Rica, Antigua, and other disreputable places – leaving consumers at risk while simultaneously sticking America with a bill to ensure that gambling doesn’t happen here.  How silly!  Now the government will need to block outside access, or credit card payments to certain areas, in order to keep Americans safe from information.

Google’s mission is “to organize the world’s information and make it universally accessible and useful.”  But with our own Government censors and the massive costs created to enforce “net neutrality”, Google will find this mission impossible to accomplish.  And that is when Google will rue the day…


Note: This article solely represents the views of a far-too-opinionated software engineer, and does not represent the views of his employer in any way.

Performance and the TLS Record Size

Today I ran into a problem with TLS (SSL) record sizes making the performance of my site sluggish.  The server was doing a good job of sending large messages down to the client, and I am using a late-model version of the OpenSSL library – so why was this happening?

HTTP and TLS both seem like streaming protocols.  But with HTTP, the smallest “chunk” you can send is a single byte.  With TLS, the smallest chunk you can send is a TLS record.  As a TLS record arrives at the client, it cannot be passed to the application layer until the full record is received and the checksum is verified.  So, if you send large SSL records, all of the packets that make up that record must be received before any of the data can be used by the browser.

In my case, the HTTP-to-SPDY proxy in front of my webserver was reading chunks of 40KB from the HTTP server and then passing each entire chunk to a single SSL_Write() call over SPDY (which uses SSL for now).  This meant that the client couldn’t use any of the 40KB until all of the 40KB was received.  And since 40KB of data will often incur round trips, this is a very bad thing.

It turns out this problem surfaces more with time-to-first-paint than with overall page-load-time (PLT), because it has to do with the browser seeing data incrementally rather than in a big batch.  But it still can impact PLT because it can cause multi-hundred-millisecond delays before discovering sub-resources.

The solution is easy – on your server, don’t call SSL_Write() with big chunks.  Chop the data down to something smallish – 1500-3000 bytes.  Here is a graph comparing the time-to-first-paint for my site with just this change: it shaved over 100ms off the time-to-first-paint.


Gettys on Bufferbloat

Jim Gettys has a nice tale of what he calls ‘bufferbloat’.  Instinctively, it seems like bigger buffers should result in less packet loss – as long as you can buffer the data, the other side doesn’t have to retransmit, right?  But that is not the way TCP works.  The sender will retransmit if it isn’t acknowledged quickly enough, and if you clog the buffers, it’s going to take a long time before the endpoint can acknowledge the data.

One interesting anecdote to me (and it isn’t really a conclusion) is that the world’s love affair with Windows XP (which has an ancient TCP stack) may actually be helping the internet at large, even though the Vista TCP stack is measurably a better stack:

The most commonly used system on the Internet today remains Windows XP, which does not implement window scaling and will never have more than 64KB in flight at once. But the bufferbloat will become much more obvious and common as more users switch to other operating systems and/or later versions of Windows, any of which can saturate a broadband link with but a single TCP connection.

Gettys did conclude that this is a problem for video downloads, which is something everyone is doing these days.  He’s not wrong, but real video services may not be as subject to this as it seems.  Video services live and die by bandwidth costs, so to keep those costs down they avoid simply blasting the whole video over the wire – instead they dribble it out deliberately, at the application layer.  If they depended on TCP for throttling, he’d be right, but I don’t think many large-scale video services work this way.  Need more data! 🙂

Anyway, a great read.

Free SSL Certificates

Adam Langley slammed me today for using a self-signed cert on this site, pointing out that there is no reason not to have a real certificate, especially when you can get one for free.

As usual, he is right, of course.  So I got myself a signed certificate from StartSSL.

Here are the step-by-step instructions.  You can do it too:

Chrome Speeding up SSL with SSL FalseStart

The latest releases of Chrome now enable a feature called SSL False Start.  False Start is a client-side change which makes your SSL connections faster.  As of this writing, Chrome is the only browser implementing it.  Here is what it does.

In order to establish a secure connection, SSL uses a special handshake where the client and server exchange basic information to set up the secure connection.  The very last messages exchanged have traditionally been implemented such that the client says “done”, waits for the server, and then the server says “done”.  However, this waiting-for-done is unnecessary, and SSL researchers discovered that we can remove one round trip from the process and allow the client to start sending data immediately after its own “done”.

To visualize this, let’s look at some packet traces of the handshake sequence, comparing two browsers:


Chrome (with SSL FalseStart)

83ms  SEND Client Hello
175ms RECV Server Hello
           Server Hello Done
176ms SEND Client Key Exchange
           Change Cipher Spec
           Enc Handshake Msg
           HTTP Request
274ms RECV Enc Handshake Msg
           Change Cipher Spec
           Enc Handshake Msg
275ms RECV HTTP Response

Browser w/o FalseStart

84ms  SEND Client Hello
173ms RECV Server Hello
           Server Hello Done
176ms SEND Client Key Exchange
           Change Cipher Spec
           Enc Handshake Msg
269ms RECV Enc Handshake Msg
           Change Cipher Spec
           Enc Handshake Msg
269ms SEND HTTP Request
524ms RECV HTTP Response

These two traces are almost identical – the difference is subtle.  Notice that Chrome sent the HTTP Request at time 176ms, in the same batch as its final handshake messages, which was a little more than one round trip faster than the other browser could send it.

(Note – it is unclear why the HTTP response for the non-FalseStart browser was ~250ms late; the savings here is, in theory, just one round trip, or 83ms.  There is always variance on the net, and I’ll attribute the rest to bad luck.)

Multiplicative Effect on Web Pages
Today, almost all web pages combine data from multiple sites.  For SSL sites, this means the handshake must be repeated with each server referenced by the page.  In our tests, we often see 2-3 “critical path” SSL connections while loading a web page.  If your round-trip time is 83ms, as in this example, that’s up to 249ms of savings – just for getting started with your page.  I hope to do a more thorough report on the effect of FalseStart on overall PLT in the future.

For more information on the topic, check out Adam Langley’s post on how Chrome deals with the very few sites that can’t handle FalseStart.