Certificate Validation Example: Facebook

Most people know the concepts of SSL, but not the gory details.  By using Facebook as a walkthrough example, I’m going to discuss how it works from the browser’s viewpoint, and how it impacts latency to your site.  BTW, this is not intended as a criticism of Facebook – they’re doing all the right things to make sure your data is encrypted and authenticated and fast.  The failures highlighted here are failures of a system that wasn’t designed for speed.

Fetching the Certificate
When you first connect to a SSL site, the client and server use the server’s public key to exchange a secret which will be used to encrypt the session.  So the first thing the client needs to do is to get the server’s public key.  The public key is sent as part of the SSL Server Hello message.   When we look at the Server Hello Message from Facebook, we see that it sent us a Certificate which was 4325 bytes in size.  This means that before your HTTP request even gets off your computer, the server had to send 4KB of data to the client.  That’s a pretty big bundle, considering that the entire Facebook login page is only 8.8KB.  Now, if a public key is generally only 1024 or 2048 bits, with elliptic curve keys being much smaller than that, how did Facebook’s certificate mushroom from 256 to 4325 bytes?  Clearly there is a lot of overhead.  More on this later.
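If you want to poke at this yourself, here is a quick sketch using Python's standard ssl module.  Note the caveat: this helper only retrieves the leaf certificate, while the browser receives the entire chain during the handshake, so the real number is even bigger.

```python
# A minimal sketch (Python 3, stdlib only): fetch a server's certificate and
# measure its size on the wire.  Caveat: this only returns the leaf
# certificate, not the full chain the browser receives.
import ssl

host = "www.facebook.com"             # any HTTPS host works
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)   # DER is the encoding sent on the wire
print(host, "leaf certificate:", len(der), "bytes")
```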

Trusting the Certificate
Once the browser has the server’s certificate, it needs to validate that the certificate is authentic.  After all, did we really get Facebook’s key? Maybe someone is trying to trick us.  To deal with this, public keys are always transferred as part of a certificate, and the certificate is signed by a source, which needs to be trusted.  Your operating system shipped with a list of known and trusted signers (certificate authority roots).  The browser will verify that the Facebook certificate was signed by one of these known, trusted signers.  There are dozens of trusted parties already known to your browser.  Do you trust them all? Well, you don’t really get a choice.  More on this later.

But very few, if any, certificates are actually signed by these root CAs.  Because the root CAs are so important to the overall system, they’re usually kept offline to minimize the chances of hackery.  Instead, these CAs periodically delegate authority to intermediate CAs, which then validate Facebook’s certificate.  The browser doesn’t care who signs the certificate, as long as the chain of certificates ultimately flows to a trusted root CA.

And now we can see why Facebook’s Certificate is so large.  It’s actually not just one Certificate – it is 3 certificates rolled into one bundle:

The browser must verify each link of the chain in order to authenticate that this is really Facebook.com.
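Conceptually, the chain walk looks something like the sketch below (using the third-party cryptography package and assuming RSA signatures, which were typical at the time).  A real browser does more, of course: expiry dates, hostnames, key usage, and anchoring the last certificate in the OS trust store.

```python
# Conceptual sketch of the chain walk, not a complete path-validation
# implementation.  chain_der is the list of DER certificates from the
# handshake, leaf first; assumes RSA / PKCS#1 v1.5 signatures.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(chain_der):
    certs = [x509.load_der_x509_certificate(der) for der in chain_der]
    for child, parent in zip(certs, certs[1:]):
        # Each certificate must name its parent as the issuer...
        assert child.issuer == parent.subject
        # ...and the parent's public key must verify the child's signature.
        parent.public_key().verify(
            child.signature,
            child.tbs_certificate_bytes,
            padding.PKCS1v15(),
            child.signature_hash_algorithm,
        )
    # Still required (omitted here): anchoring the last certificate to a
    # trusted root in the OS store, plus expiry, hostname, and usage checks.
```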

Facebook, being as large as they are, would be well served by finding a way to reduce the size of this certificate and by removing one level from their chain.  They should talk to DigiCert about this immediately.

Verifying The Certificate
With the Facebook Certificate in hand, the browser can almost verify the site is really Facebook.  There is one catch – the designers of Certificates put in an emergency safety valve.  What happens if someone gets a fraudulent certificate (like what happened last month with Comodo) or steals your private key?  There are two mechanisms built into the browser to deal with this.

Most people are familiar with the concept of the “Certificate Revocation List” (CRL).  Inside the certificate, the signer puts a link to where the CRL for this certificate would be found.  If this certificate were ever compromised, the signer could add the serial number for this certificate to the list, and then the browser would refuse to accept the certificate. CRLs can be cached by the operating system, for a duration specified by the CA.
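As a rough sketch of what that check involves (using the third-party cryptography package; cert is a parsed certificate object): pull the CRL URL out of the certificate, download the list, and look for this certificate’s serial number.

```python
# Hedged sketch of a CRL lookup.  A real client also verifies the CRL's own
# signature and honors its nextUpdate field for caching.
from urllib.request import urlopen
from cryptography import x509

def is_revoked_via_crl(cert):
    dps = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    url = dps.value[0].full_name[0].value     # first distribution point URL
    crl = x509.load_der_x509_crl(urlopen(url).read())
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None
```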

The second type of check is to use the Online Certificate Status Protocol (OCSP).  With OCSP, instead of the browser having to download a potentially very large list (CRL), the browser simply checks this one certificate to see if it has been revoked.  Of course it must do this for each certificate in the chain.  Like with CRLs, these are cacheable, for durations specified in the OCSP response.
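Here is a hedged sketch of a single OCSP lookup (cert and issuer are parsed certificate objects; the responder URL comes from the certificate’s Authority Information Access extension):

```python
# Sketch of one OCSP lookup.  A real client also validates the responder's
# signature and caches the result until the date given in the response.
from urllib.request import Request, urlopen
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes, serialization

def ocsp_status(cert, issuer):
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    url = next(d.access_location.value for d in aia.value
               if d.access_method == AuthorityInformationAccessOID.OCSP)
    request = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build()
               .public_bytes(serialization.Encoding.DER))
    http_req = Request(url, data=request,
                       headers={"Content-Type": "application/ocsp-request"})
    response = ocsp.load_der_ocsp_response(urlopen(http_req).read())
    return response.certificate_status       # GOOD, REVOKED, or UNKNOWN
```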

In the Facebook.com example, the DigiCert certificates specify an OCSP server.  So as soon as the browser received the Server Hello message, it took a timeout with Facebook and instead issued a series of OCSP requests to verify the certificates haven’t been revoked.

In my trace this process was quick: with a 17ms RTT and 4 round trips (DNS, TCP, OCSP Request 1, OCSP Request 2), it took 116ms.  That’s a pretty fast case.  Most users have 100+ms RTTs and would have experienced approximately a ½ second delay.  And again, this all happens before we’ve transmitted a single byte of actual Facebook content.  And by the way, the two OCSP responses were 417 bytes and 1100 bytes, respectively.

Oh but the CDN!
All major sites today employ Content Delivery Networks to speed the site, and Facebook is no exception.  For Facebook, the CDN site is “static.ak.facebook.com”, and it is hosted through Akamai. Unfortunately, the browser has no way of knowing that static.ak.facebook.com is related to facebook.com, and so it must repeat the exact same certificate verification process that we walked through before.

For Facebook’s CDN, the Certificate is 1717 bytes, comprised of 2 certificates:

Unlike the certificate for facebook.com, these certificates specify a CRL instead of an OCSP server.  By manually fetching the CRL from the Facebook certificate, I can see that the CRL is small – only 886 bytes. But I didn’t see the browser fetch it in my trace.  Why not?  Because the CRL in this case specifies an expiration date of July 12, 2011, so my browser already had it cached.  Further, my browser won’t re-check this CRL until July, 4 months from now.  This is interesting, for reasons I’ll discuss later.

Oh but the Browser Bug!
But for poor Facebook, there is a browser bug (present in all major browsers, including IE, FF, and Chrome) which is horribly sad.  The main content from Facebook comes from www.facebook.com, but as soon as that page is fetched, it references 6 items from static.ak.facebook.com.  The browser, being so smart, will open 6 parallel SSL connections to the static.ak.facebook.com domain. Unfortunately, each connection will resend the same SSL certificate (1717 bytes).  That means that we’ll be sending over 10KB of data to the browser for redundant certificate information.

The reason this is a bug is that, when the browser doesn’t have certificate information cached for facebook.com, it should have completed the first handshake first (downloading the certificate information once), and then used the faster SSL session resumption for each of the other 5 connections.
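To make that concrete, here is a sketch of the behavior I’d want, using Python’s stdlib ssl module (the hostname is just the example from this post): complete one handshake, then hand its session to the other connections so the certificate chain only crosses the wire once.  Whether the abbreviated handshake actually happens still depends on the server.

```python
# Sketch: one full handshake, then session reuse for the remaining
# connections.  With TLS session IDs/tickets this avoids re-sending the
# certificate chain on every connection.
import socket
import ssl

host = "static.ak.facebook.com"       # example host from this post
ctx = ssl.create_default_context()

def connect(session=None):
    raw = socket.create_connection((host, 443))
    return ctx.wrap_socket(raw, server_hostname=host, session=session)

first = connect()                                     # full handshake
others = [connect(first.session) for _ in range(5)]   # abbreviated handshakes
print([s.session_reused for s in others])
```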

Putting It All Together
So, for Facebook, the overall impact of SSL on a first-time user is pretty large.  On the first connection, we’ve got:

  • 2 round trips for the SSL handshake
  • 4325 bytes of Certificate information
  • 4 round trips of OCSP validation
  • 1500 bytes of OCSP response data

Then, for the CDN connections we’ve got:

  • 2 round trips for the SSL handshake
  • 10302 bytes of Certificate information (1717 duplicated 6 times)

The one blessing is that SSL is designed with a fast path to re-establish connectivity.  So subsequent page loads from Facebook do get to cut out most of this work, at least until tomorrow, when the browser has probably forgotten most of it and has to start over again.

Making it Better

OCSP & CRLs are broken
In the above example, if the static.ak.facebook.com keys are ever compromised, browsers around the planet will not notice for 4 months.  In my opinion, that is too long.  For the OCSP checks, we usually cache the result for ~7 days.  Having users exposed to a compromised site for 7 days is also a long time.  And when Comodo was hacked a month ago, the browser vendors elected to immediately patch every browser user on the planet rather than wait for the OCSP caches to expire in a week.  Clearly the industry believes revocation checking is broken when it is easier to patch every browser than to rely on the built-in infrastructure.

But it is worse than that.  What does a browser do if the OCSP check fails?  Of course, it proceeds, usually without even letting the user know that it has done so (heck, users wouldn’t know what to do about this anyway)!  Adam Langley points this out in great detail, but the browsers really don’t have an option.  Imagine if DigiCert were down for an hour and, because of that, users couldn’t access Facebook.  It’s far more likely that DigiCert had downtime than that the certificate has been revoked.

But why are we delaying our users so radically to run checks whose results we ignore when they fail anyway?  Having a single point of failure for revocation checking leaves the browsers with no better option.

Certificates are Too Wordy
I feel really sorry for Facebook with its 4KB certificate.  I wish I could say theirs was somehow larger than average.  They are so diligent about keeping their site efficient and small, and then they get screwed by the Certificate.  Keep in mind that their public key is only 2048 bits.  We could transmit that with 256 bytes of data.  Surely we can find ways to use fewer intermediate signers and also reduce the size of these certificates?
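If you want to see how much of the bundle is overhead, compare the encoded public key with the certificate that wraps it.  A rough sketch (cryptography package again; pem is a placeholder for any PEM-encoded certificate):

```python
# Rough sketch: size of the bare public key vs. the certificate carrying it.
# pem is a placeholder: the bytes of any PEM-encoded certificate.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

cert = x509.load_pem_x509_certificate(pem)
key_der = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)
cert_der = cert.public_bytes(serialization.Encoding.DER)
print("public key:", len(key_der), "bytes; certificate:", len(cert_der), "bytes")
```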

Certificate Authorities are Difficult to Trust
Verisign and others might claim that most of this overhead is necessary to provide integrity and all the features of SSL.  But is the integrity that we get really that much better than a leaner PGP-like system?  The browser today has dozens of root trust points, with those delegating trust authority to hundreds more.  China’s government is trusted by browsers today to sign certificates for google.com, or even facebook.com.  Do we trust them all?

A PGP model could reduce the size of the Certificates, provide decentralization so that we could enforce revocation lists, and eliminate worries about trusting China, the Iranian government, the US government, or any dubious entities that have signature authority today.

Better Browser Implementations
I mentioned above the flaw where the browser will simultaneously open multiple connections to a single site when it knows it doesn’t have the server’s certificate, and thus redundantly download potentially large certs.  All browsers need to be smarter.
Although I expressed my grievances against the OCSP model above, it is used today.  If browsers continue to use OCSP, they need to fully implement OCSP caching on the client, they need to support OCSP stapling, and they need to help push OCSP multi-stapling forward.

SSL Handshake Round Trips
The round trips in the handshake are tragic.  Fortunately, we can remove one, and Chrome users get this for free thanks to SSL False Start.  False Start is a relatively new, client-side only change.  We’ve measured that it is effective at removing one round trip from the handshake, and that it can reduce page load times by more than 5%.

Hopefully I got all that right, if you read this far, you deserve a medal.

Chrome vs IE9 JavaScript

Here are some results of benchmarking on my home computer.  It shows that Chrome is still much faster than IE9 on all major JavaScript benchmarks.  The IE 32-bit version is a little better, but Win7 64-bit is outselling Win7 32-bit by a 3:1 margin, so the 64-bit browser is what many users will experience.

[Benchmark result charts comparing Chrome and IE9]

My system is an Intel Core 2 Duo E6550 @2.33GHz with 4GB of RAM running Win7 Build 7600.  The version of IE tested was 9.0.8080.16413 64-bit and the version of Chrome was 10.0.648.204.

How To Seriously Balance the Budget

Our legislators all claim they want to spend less.  But every time they attempt a plan for fiscal responsibility, they get sidetracked on who-wants-to-cut-what.

If they are serious about our finances, they have to stop debating which programs to cut (Planned Parenthood, Medicare, the Military, etc.), and instead focus on the budget itself.  That means across-the-board cuts, blind to the programs: simply trim everything equally.

Here is my simple proposal to balance the budget over 8 years (2013 to 2020):

  1. Federal Spending in 2010 was ~$3.5T with revenues of ~$2.1T.  To close that gap over 2013-2020, we need to cut $1.4T from annual spending.  We will accomplish this by reducing the cap by $175B each year, cumulatively.
  2. An annual spending cap is defined as:
                           ($3.5T – $175B * (Year – 2012))
    1. 2013 = $3.325T
    2. 2014 = $3.150T
    3. 2015 = $2.975T
    4. 2016 = $2.800T
    5. 2017 = $2.625T
    6. 2018 = $2.450T
    7. 2019 = $2.275T
    8. 2020 = $2.100T
    9. > 2020 : The cap is set to the prior year’s revenue.
  3. Each year, the budget dictates the proportion of money spent for each line item.  If the total budget exceeds the values from (2), spending per line item will be reduced proportionally such that total spending equals the value of line (2).
  4. These spending cuts are mandatory, and override any previously guaranteed benefits to any programs.  All federal programs will need to adjust to the new spending caps.
  5. If, in any year, there is a surplus in revenues, the spending caps outlined in (2) will remain, and the surplus will be used to reduce the overall debt.
  6. After the year 2020, annual spending will be capped at the total revenue of the previous year, and the budget balancing process of line (3) will be applied with the new spending cap.

This solution removes the politically charged plans where our legislators always get tripped up.  This is liberating for all legislators, as they can now focus on getting the job done and balancing the budget in 8 years.  No problem, right?

The Era of Browser Preconnect

I was playing around on WebPageTest today – trying out its new IE9 test feature – and I noticed something new that IE9 does: preconnect.

What is preconnect?  Preconnect is making a connection to a site before you have a request to use that connection for.  The browser may have an inkling that it will need the connection, but if you don’t have a request in hand yet, it is a speculative connection, and therefore a preconnect.

IE9 isn’t the first to use preconnect, of course.  Chrome has been doing preconnect since ~Chrome 7.  So it is nice to see other browsers validating our work.  But IE9 is the first browser I know of which appears to preconnect right out of the gate, without any data about a site.  Chrome, on the other hand, will only preconnect based on data it has learned by observing network activity through repeat visits to a site.  As such, Chrome usually issues the same number of connects and the same network traffic, just with less delay.

Observations

Here is the trace where I first noticed this behavior on WebPageTest.  Notice that WebPageTest did not record any connect-time delay on the second request to dev.chromium.org.  How can this be?  Because the socket was already connected.

[WebPageTest trace screenshot]

To understand this better, I then opened up WireShark and watched the packets.  The packet trace clearly shows that IE9 simply opens 2 connections, back to back, for every domain the browser connects to.  This isn’t a horrible assumption for the browser to make – since many sites will indeed require more than one connection per domain already.

Some Wastefulness

But it also wasn’t hard to notice cases where it connects wastefully.  On belshe.com, for instance, there is a single link to a YouTube video requiring only one resource.  IE9 opens two connections to YouTube anyway (WebPageTest doesn’t show the unused connection in its waterfall, by the way, but it is in the packet trace!).  One connection loads the image, the other connection is wasted.  YouTube diligently kept that connection open for 4 minutes too!  There are also a couple of 408 error responses from Akamai – it appears that the Akamai server will send a graceful 408 error response on an idle connection after some period of time.

But is this a problem?

As long as the level of accidental connects is minimal, probably not.  And much of the time, 2 connections are useful.  It would be great to hear from the IE9 team about their exact algorithm, and to see whether they have data on how many extra resources it uses.

WebPageTest already offers some clues.  For belshe.com, for example, I can see that IE8 uses 20 connections, while IE9 now uses 23 connections to render the page.  That ~15% overhead is probably not the end of the world.

What about SSL?

I love SSL, so of course this got me wondering about what IE9 does for preconnecting https sites too.  Sure enough, IE9 happily preconnects SSL too.  [Sadly, it even forces the server to do two full SSL handshakes, wastefully generating 2 session IDs.  This is a bit more annoying, because it means the main site was just put through double the number of PKI operations.  Fortunately, PKI operations are relatively cheap these days.  I’d complain more, but, tragically, Chrome is not much better yet.  Did I mention that SSL is the unoptimized frontier?]

What Would Brian Boitano Chrome Do?

As I mentioned, Chrome has been doing preconnect for some time.  But Chrome doesn’t preconnect right out of the gate.  We were so worried about this over-connecting business that we added gloms of more complicated code (ahem, highly sophisticated artificial intelligence) before turning it on at all 🙂

Specifically, Chrome learns the network topology as you use it.  It learns that when you connect to www.cnn.com, you need 33 resources from i2.cdn.turner.com, 71 resources from i.cdn.turner.com, 5 resources from s0.2mdn.net, etc.  Over time, if these patterns remain true, Chrome will use that data to initiate connections as soon as you start a page load.  Because it is based on data, we hope and expect that it will connect incorrectly much less often.  In fact, it should be making the same number of connections, just a little earlier than it otherwise would.  But all of this is an area that is under active research and development.  (By the way, if you want to see how Chrome works, check out the ultra-chic-but-uber-geek “about:dns” page in your Chrome browser.)
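The idea is roughly the toy model below.  The hostnames and structure are purely illustrative, not Chrome’s actual data structures or algorithm.

```python
# Toy model of learned preconnect: remember which hosts a page needed on
# past visits, then open sockets to them as soon as the next navigation
# begins.  Hostnames here are illustrative only.
import socket
from collections import defaultdict

learned_hosts = defaultdict(set)      # origin -> subresource hosts seen before

def record_fetch(origin, host):
    learned_hosts[origin].add(host)

def preconnect(origin, port=443):
    # Speculative connects; the sockets go to the connection pool and are
    # used only if a real request shows up for that host.
    return [socket.create_connection((host, port))
            for host in learned_hosts[origin]]

# Learned on previous visits:
record_fetch("www.cnn.com", "i2.cdn.turner.com")
record_fetch("www.cnn.com", "s0.2mdn.net")
# On the next navigation to www.cnn.com, connect before any request exists:
idle_sockets = preconnect("www.cnn.com")
```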

So does all this fancy stuff make my Internet faster?

Fortunately, we have pretty good evidence that it does.  We’re monitoring this all the time, and I’d say this work is still in its infancy.  But here is some data from Chrome’s testing in 2010.

Our Chrome networking test lab has a fleet of client machines (running Linux), a simulated network using dummynet (see Web-Page-Replay for more information), and some fast, in-memory servers.  We record content from the top-35 websites, and can play it back repeatedly with high fidelity.  Then we can change the network configuration and browser configuration and see how it all works out. 

In my test, I picked 4 different network configurations.  I then varied the RTT on each connection from 0 to 200ms.

Here is a graph of page load time (PLT) improvements in this test.

[Graph: page load time improvement from preconnect vs. RTT]

Overall, we’re pretty happy with this result.  When the RTT is zero, preconnect doesn’t really help, because the connections are basically free (from a latency perspective).  But for connections with RTTs greater than ~50ms, we see a solid 7-9% improvement across the board.  (typical RTTs are 80-120ms)

The Larger Question

While this is great for performance now, I am worried about the wastefulness of HTTP on the Internet.  We used to only have one domain per site, now we have 7.  We used to have only 2 connections per domain, but now we have 6.  And on top of that, now we’re preconnecting when we don’t even need to?

With its proliferation of TCP connections, HTTP has been systematically sidestepping all of TCP’s slow start and congestion controls.  Sooner or later, will the internet break?  Throwing inefficiency at the problem can’t go on forever.

So one last note – a blatant plug for SPDY

The downside of preconnect is a big part of why we’re working on SPDY.  HTTP has been nothing short of a heroic protocol and made the Internet as we know it possible.  But as we look to the next generation of rich media sites with low latencies, it is clear that today’s HTTP can’t perform at that level.

SPDY hopes to solve much of HTTP’s connection problems while also providing better performance and better security. 

With that, I guess I need to get back to work…

Buying Into The Carpool Lane

Scott Adams nailed me today:

[Dilbert comic strip]

Living in the Bay Area, I’m anything except “rich”.  But I guess according to most standards, I’m in the top few percent.  And I can’t count how many times I have said this to my friends:  “The only reason I got a hybrid was to buy my way into the carpool lane.”

And yeah, it is totally worth it!!!   But how much did our government make off my desire to drive faster than all the commoners?  $8.  Honda got the rest.  What a bunch of suckers.

H.264 – Who Holds the Patents?

H.264 is in the news because Google Chrome won’t support it natively and instead will use WebM, an allegedly open and free video alternative.

Who gets paid when you license H.264?  It’s managed by MPEG-LA.  They have a 70-page list of patents which allegedly contribute to H.264.  If you hear a complaint about Google, Mozilla, and Opera’s stance on not supporting H.264, consult this list to see if the complainer has a conflict of interest.

H.264 Patent Holders

Apple Inc.
Cisco Systems Canada IP Holdings Company†
The Trustees of Columbia University in the City of New York
DAEWOO Electronics Corporation
Dolby Laboratories Licensing Corporation
Electronics and Telecommunications Research Institute
France Télécom, société anonyme
Fraunhofer‐Gesellschaft zur Foerderung der angewandten Forschung e.V.
Fujitsu Limited
Hewlett‐Packard Company
Hitachi, Ltd.
Koninklijke Philips Electronics N.V.
LG Electronics Inc.
Microsoft Corporation
Mitsubishi Electric Corporation
Nippon Telegraph and Telephone Corporation
NTT DOCOMO, INC.
Panasonic Corporation†
Polycom, Inc.
Robert Bosch GmbH
Samsung Electronics Co., Ltd.
Sedna Patent Services, LLC
Sharp Corporation
Siemens AG
Sony Corporation
Tandberg Telecom AS
Telefonaktiebolaget LM Ericsson
Toshiba Corporation
Victor Company of Japan, Limited

Sigh. Lawyers Are The Problem

Every day I enter my workplace, which proudly displays this sign.  But I see the same sign pretty much everywhere.  If you wish to put one at your own house, you can buy it at Amazon here.

But why do Californians pay to put the same worthless sign on every building?  How much does this useless warning cost us?  Has one of these signs ever helped anyone?  Has it deterred anyone from entering the building?  I see pregnant women walk right past these signs all the time.  Surely it is exaggerated?

Doing some research, we find that this sign stems from Proposition 65, passed in 1986.

25249.6. Required Warning Before Exposure To Chemicals Known to Cause Cancer Or Reproductive Toxicity. No person in the course of doing business shall knowingly and intentionally expose any individual to a chemical known to the state to cause cancer or reproductive toxicity without first giving clear and reasonable warning to such individual, except as provided in Section 25249.10.

So why do businesses post the signs?  Because if you don’t, and some squirrelly lawyer finds that there is a bad chemical in your building, they can sue you for $2500 per day!

Of course, the law intended to make sure that building owners disclose real hazards.  But, in practice this has just become a cover-your-ass sign.  Building owners don’t know what toilet cleaner might be considered hazardous now or in the future (the official list of hazardous chemicals is maintained by the Attorney General and contains over 850 chemicals), so to protect themselves from suits, they just post the disclosure.  So now we’ve plastered the same sign on every single building, regardless of whether or not there is any real risk.  And by putting it on every building, we’ve completely neutered any chance that the sign could ever have value.

But lawyers are making money on it.  Read here.  Or read about how they hire bounty hunters here.

I used to only despise lawyers collectively, but not individually.  These days, however, I blame the individuals too.  If you aren’t part of the solution, you’re part of the problem.  And lawyers are not working to make our lives better.  If you are a practicing lawyer, and I ever meet you, know that in my core, I think you’re a bad person. You’re a cockroach and I want to step on you and watch your guts ooze on the sidewalk under my shoe.  Holding a law degree is shameful.

I Wish I Had 12 Fingers

On each limb, we have five fingers and toes,
Which are the source of my recent aging woes.
They make us humans think in base 10,
But if I had twelve, I’d be young again!

You see, they say age 40 is the border of old,
Regardless of whether you feel young or bold.
Cross that line, and you’re halfway finished,
The vitality of youth will soon diminish.
But in base twelve, I’m only thirty four,
With 8 more years and long before,
I hit that old and disheartening age,
Of back pain and memory loss that burdens the sage.

But I guess I’m lucky and should not curse,
For my situation could be much worse,
Eight fingers to some, might seem quite nifty,
But if I had only eight, I’d now be fifty!

Google Will Rue The Day It Invited the Gov’t to Net Neutrality

A few years ago, Google started poking the government to act on Net Neutrality.  The motive behind Google’s goal is well intentioned.  But practically, it is foolish.  We’re inviting a beast into our industry that is more devastating than any beast we’ve yet imagined.  Eventually, Google will come to oppose the very legislation that it helped create.

The main problem with Net Neutrality is that we don’t need it.  The market works.  There are many choices for network access today, and you can access your favorite sites from Amazon to Wikileaks from almost anywhere in America.  We have access to the internet at home, in libraries, in schools and at work.  Who is not getting access again?

For individuals, the real debate is that some people want more bandwidth to more remote areas, and they want someone else to pay for it.  Steve Wozniak, the eccentric co-founder of Apple, was very clear about this.  He wants to live on a remote hill, pay $29/mo, and have everyone else be required to pay to run the cables to his secluded hideaway for fast internet access.  Steve’s argument is not new.  Many people have made the same argument far more elegantly.  They claim it “costs too much” for the high-speed links, that dialup is unreasonably slow, or that “there is only one provider in my area”, etc.  None of those arguments hold.  These very same people still have access through wireless, through dialup, at work, at school, at the library, and at about a half million Starbucks or McDonalds across the planet.  And their access grows every single day!  They just want it cheaper.

Finally, the most important part of net neutrality is ensuring that content is available to everyone.  (No, this doesn’t mean you should get to watch your “Family Guy” or your favorite TV show for free)  Most of us hold at least some fear that eventually a big company (Comcast, AT&T, or Verizon, or Google) will screw the little guy by using their monopoly to restrict content and maximize profits.  This fear is reasonable,  because censorship on a grand scale would be a horrible thing for all of us.  But it’s not happening, and there is no evidence of it happening any time soon.  Further, if it ever did happen, customers can and would revolt.  Competition provides us everything we need.

But our fears of corporations are grossly misplaced.  There is someone far more scary, with vastly greater power, that we should fear – the US government.  There is simply no company that can wreak devastation at the scale of the US government.  Whose rules are more scary – Comcast’s rules (sorry to pick on you, Comcast!), which would only apply to those that pay Comcast money, or Uncle Sam’s rules?  And every 4 years we elect a new set of politicians.  Even if we trust the politicians today, what happens when we get into a war, or have a 9/11-type event, and suddenly a “temporary” halt to terrorist communications is required?  (Did we forget about the TSA already?)  Who’s the terrorist?  Is Wikileaks a terrorist?  Is Wikipedia?  What if you have a science blog about particle physics?  Can you be shut down too?  The government is what you should fear.  Not a piddly little Microsoft, Google, or Comcast.

Ok, but why will Google rue this?

With continued prodding from Google and others, legislation will be passed, and today was a starting point.  Whatever they pass will be  costly to companies and will cause that cost burden to be passed on to customers like you and me.  Further, it will put America at a disadvantage in our global marketplace.  All to solve a problem that doesn’t exist.

The first problem they’ll create is that of cost.  Every law has enforcement, and that means we pay people to define the specific rules, monitor compliance with those rules, and enforce punishments for those that do not obey.  This will cost billions of dollars and be spread into the margins of every content provider and ISP in America.  Of course, those companies will pass the cost onto their customers.  This means the price of DSL, Cable, AOL, and Netflix will all rise.  (I still think costs are heading down overall, but they could decrease faster without net-neutrality)

Second, it will snowball from “fair access” into content filters (aka censorship).  Initially, it might include banning certain forms of pornography.  It might even seem like something you agree with.  But with each new regulation, our freedoms are diminished.  Then, we might enter into a particular international conflict “requiring” a ban on certain types of communications to keep us safe.  With the content filters in place, the variety and types of information we can publish and read diminishes, and it is all out of our control.   You can’t switch providers to escape the unfairness of it all.

Finally, remember that America is in a global marketplace.  If our legislators legislate too much, Internet companies will simply move out of the country, taking the jobs, the profits, and the tax revenues with them.  This has already happened – gambling is alive and well on the internet – it just runs out of Costa Rica, Antigua, and other disreputable places – leaving consumers at risk while simultaneously sticking America with a bill to ensure that gambling doesn’t happen here.  How silly!  Now the government will need to block outside access, or credit card payments to certain areas, in order to keep Americans safe from information.

Google’s mission is “to organize the world’s information and make it universally accessible and useful.”  But with our own Government censors and the massive costs created to enforce “net neutrality”, Google will find this mission impossible to accomplish.  And that is when Google will rue the day…

 

Note: This article solely represents the views of a far-too-opinionated software engineer, and does not represent the views of his employer in any way.