The Reserve Currency of the World Will Be Digital Currency

This week, PayPal announced it is leaving the Libra alliance. While that may sound like a vote against Libra, it’s more likely a vote against the dollar. PayPal was just approved as China’s first western payment processor, and it’s likely that PayPal is simply betting that China’s digital currency is more likely to win than Facebook’s.

Today, the US dollar is the world’s dominant reserve currency, representing 62% of all reserves, while the RMB accounts for a mere 2%. Despite China having the globe’s second-largest economy by GDP, the RMB only became a reserve currency three years ago, in October 2016, and it has grown from 0% to 2% in that short time.

Meanwhile, American politicians are fighting digital currency. Democrats, such as Representative Maxine Waters, and Republicans, such as President Donald Trump, both see Facebook’s Libra as a threat to the US dollar. Trump goes so far as to say that all digital currencies are an affront to the USD. At the same time, China’s leadership is ready to embrace digital currency and is accelerating projects to create a government-backed digital currency. Who’s right here? America sees digital as a threat, while China sees digital as its opportunity to break out.

Frankly, digital currencies are unstoppable. We have a global economy, but we don’t have a global currency. Digital currencies, which can move globally with zero friction and zero counterparty risk, are the first opportunity we’ve had to build a truly global currency. While the US should use its position as the issuer of the dominant reserve currency to propel digital currencies forward, it instead sees them as a threat and is attempting to block them. But the US has no jurisdiction globally, and its competitors would love nothing more than to displace the dollar. As such, China rightly recognizes that digital currency is just what it has been waiting for: a global distribution vehicle for the RMB.

If the US can’t get its act together soon, it will lose. Cryptocurrency is the future. As Marc Andreessen once said, “software is eating the world.”

Why I Would Not Participate in an MPC Wallet

The personal liabilities associated with multi-party computation (MPC) based wallets are so great that I don’t see how I (or anyone) could ever participate in an MPC wallet.

No Accountability

The core problem with MPC is an architectural one. While MPC does create a mechanism whereby multiple people can each hold independent parts of a key, eliminating single points of failure, it fails to offer any accountability as to who participated in signing a transaction.

Imagine you create a 4-of-7 MPC wallet with 7 people participating, and 4 required to authorize a transaction. What if, unbeknownst to you, 4 of the other people holding key parts in the MPC wallet decide to steal the money? Because MPC does not offer signature accountability, no one can be certain who participated in the transaction.  As such, even though you had nothing to do with the crime, you’re now a suspect, and it may take months or years to clear your good name.

Co-Signers Make MPC Even Worse

Vendors offering MPC services and co-signing dismiss this vulnerability and claim, “don’t worry, we keep track of who participated and will log all accesses to the signing process.” In other words, even though no one can determine who participated in the transaction from the signature itself, the vendors claim that they know the answer from their application logs. If you think about this carefully, you’ll realize it makes the vulnerability even more severe.

With the vendor as a co-signer, imagine the same attack scenario as above, where 4 of the other participants on the wallet collude to steal the money. In this case, however, one of the perpetrators is a rogue employee at the MPC vendor itself. In this scenario, you have no assurance that the MPC vendor isn’t modifying its application logs and data. Not only are you already a suspect, but the rogue employee or the vendor can now frame you for the crime. How would you defend yourself in this scenario? They hold all the cards: the data, the logs, and the technology. Unless you’re a cryptography expert, it will be extremely difficult to defend against them.

Conclusion

MPC vendors forget that accountability is a critical part of security, trust, and safety in a multi-user system. Participants in MPC wallets need to be very careful that they can fully trust all of their co-participants. This may not seem like a large risk if your wallet balances are small, but these vendors are encouraging MPC for protecting billions of dollars of assets.

Multi-signature systems, by contrast, offer all of the benefits that MPC systems offer, but without any ambiguity about accountability. With a multi-signature system, everyone can publicly see on the blockchain, beyond a shadow of a doubt, that you did not participate in a transaction.
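
To illustrate the difference, here is a minimal sketch of signature accountability, using the third-party Python `ecdsa` package and plain ECDSA rather than any particular blockchain’s transaction format; the participant names and the 2-of-3 policy are purely illustrative.

```python
# pip install ecdsa -- an illustrative sketch, not a real multisig implementation.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# Three co-signers, each holding an independent key (a hypothetical 2-of-3 policy).
signers = {name: SigningKey.generate(curve=SECP256k1) for name in ("alice", "bob", "carol")}
pubkeys = {name: sk.get_verifying_key() for name, sk in signers.items()}

tx = b"send 10 BTC to address X"

# Only Alice and Bob sign; Carol does not participate.
signatures = [signers["alice"].sign(tx), signers["bob"].sign(tx)]

# Anyone can attribute each signature to exactly one public key.
for sig in signatures:
    for name, vk in pubkeys.items():
        try:
            vk.verify(sig, tx)
            print(f"signature verifies against {name}'s key")
        except BadSignatureError:
            pass

# The output names alice and bob, and just as importantly shows that carol did
# not sign. A threshold (MPC) signature would instead verify against a single
# shared public key, revealing nothing about which participants took part.
```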

I don’t see why anyone participating in the security of assets would even consider using MPC without multi-signature. The personal risk for users of an MPC system is massive, and it is simply beyond tolerance as asset values go up.

Proprietary Cryptography

One of the best things about the growth of Bitcoin is how it has propelled research and development in cryptography. What was once a relatively sleepy field of computer science has now become one of the most popular areas of study. There is no doubt that this additional research will yield great advances in the coming decades. But cryptography is unlike other computer science disciplines in that there is no margin for error, especially if that cryptography is being used to secure money or digital assets. Unfortunately, the growth of Bitcoin has also fueled a new wave of rushed cryptography. Rushed cryptography is brand-new cryptographic technology that hasn’t had sufficient peer review or testing, yet is being promoted as the new panacea for all your hacking woes.

The creators of rushed cryptography always know that they rushed it. They know they haven’t done sufficient testing or peer review. Testing takes months to years, and peer review takes years to decades. Eager to launch products with their new technology, and armed with a little hubris and a little ambition, rushed cryptographers put their new algorithms to use prematurely. While they make bold claims and brag about the awesomeness of their creation, internally the rushed cryptographer is actually full of fear: fear that someone will find a bug, a hole, or a problem before they do. To prevent this from happening, they fall back on the oldest trick in the book: they make it proprietary.

What is proprietary cryptography? Nobody knows except the creator, the same one who is now trying to sell you his product. The creator says they tested it. They hired PhDs, experts, and mathematicians to attest that they did a great job. They hired security auditors and code reviewers. But did they? How can you know? How can you possibly use this to secure assets worth millions?

OWASP (the Open Web Application Security Project) has this to say about proprietary cryptography: “Proprietary encryption algorithms are not to be trusted as they typically rely on ‘security through obscurity’ and not sound mathematics. These algorithms should be avoided if possible.”

Remember Schneier’s Law: “Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.”

It is an exciting time for cryptography, computer science, and digital assets. But one of the best things about Bitcoin is that it relies on stable, steady, well-known algorithms. This conservative approach to development helps the system, builds trust, and keeps it secure. To those who are rushing new cryptography: don’t forget peer review and open-source implementations. This is money!

Encrypt It All

I’ve had several of my non-technical friends ask me about Apple’s recent plan to encrypt everything on your smart phone. Of course, Google has been moving to this plan for quite some time already. But the recent announcement has led to the government’s formal claim that this is a threat to national security. What is the truth?

Court Orders are not Required
The first problem facing Apple, Google, and others is that even in America, many of the agencies requesting private data do so without a court order of any kind. Most American citizens think that the US would never do this – but in fact it happens almost every day. As you can see from the Google Transparency Report, there are nearly twice as many requests without court orders as requests with court orders. If you were Google, how would you decide when a request needs to be obeyed and when it should be rejected? Is there any policy that Google, or any company, could possibly apply that would be correct?

Unfortunately, government agencies believe they can simply make these requests and expect the company to comply. If the company does not comply, it is threatened by that agency, sometimes with threats of jail time under laws that appear to undermine our Constitution. How does the company know whether a request should be obeyed when there is no court order? How can a company remain objective when government agencies are not required to follow due process?

Too Many Governments
The other sad fact is that there are simply too many governments and too many laws. Tech companies are global and need to respect global law. Unfortunately, this means fielding requests from all over the world. How much legal time should a company expect to spend answering requests from agencies around the world? How do you know if a request is authentic? What if you received a request for private data, and complied, but it turned out to be a fraudulent request? Attempting to comply could lead to more damage than not trying to comply. Unfortunately, our governments do nothing to help keep us, the citizens, safe from fraudulent requests. They simply demand information as though they are entitled to it.

The Only Solution Is Not To Know
The result is that all companies will eventually opt for the only solution that makes sense: encrypt everything. In the wise words of Sergeant Schultz, “I know nothing” is the only way to deal with these requests. If you don’t hold the keys to your customers’ information, you can’t possibly give it to anyone. Unfortunately, since government agencies make too many subjective and illegal requests for information, companies must defend themselves by simply dropping all knowledge. This is the approach we take at BitGo. We will comply with all court-ordered requests, but we know absolutely nothing, so don’t bother asking.
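
As an illustration of that architecture (a minimal sketch using the Python `cryptography` package, not a description of BitGo’s actual implementation): if the key is generated and held only on the customer’s device, the service stores ciphertext it cannot read and therefore has nothing to hand over.

```python
# pip install cryptography -- a minimal sketch of client-side encryption.
from cryptography.fernet import Fernet

# The key is generated on, and never leaves, the customer's device.
customer_key = Fernet.generate_key()
ciphertext = Fernet(customer_key).encrypt(b"customer's private data")

# The service stores only the ciphertext. Without the key, it cannot comply
# with a data request even if it wanted to.
stored_on_server = ciphertext

# Only the customer, holding the key, can recover the plaintext.
assert Fernet(customer_key).decrypt(stored_on_server) == b"customer's private data"
```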

There is a silver lining to this for users, which is that their data will be more private and more secure. But it should be a warning to all of us, even here in America. When your government can spy on you illegally, you do not live in “the land of the free”.

Imagine A World With No Door Handles

It’s Saturday morning. You’ve been dying to get that latest iPhone/iMac/iWatch/iFad. So you head down to the Apple Store. When you arrive, the doors are closed. You try to push the door, but the door won’t open. It looks like people are inside, but you can’t figure out how to get in.

Suddenly, Steve Jobs walks up and asks, “Hi, may I help you?”

“Whoa! Steve Jobs! What an honor to meet you! I was just trying to get into your store, but there are no handles on the doors.”

Steve says, “Oh, of course. We decided to simplify the design of the door, giving it a sleek, elegant new look. Those old fashioned handles were just plain ugly. How do you like our new doors?”

“I guess they look okay. But I can’t get in.”

“Of course you can, you just have to tell the door to open, like this… Open, door.” Calmly, he waits a moment, but nothing happens.

Surprised, Steve tries again, “Open door.” A little louder – “Open door!” “Door, o-pen!” “O-pen Do-or.”

“Hmm…,” Steve shrugs, “the door seems to be having trouble right now. Oooo-PEN DOOR!” And at last, the doors open.

“Ah, there, you see! Had a small glitch, but isn’t that amazing?”

—–

Unfortunately, this silly story is indicative of a dangerous pattern we’re seeing in software and hardware today: the rise of Design before Function. A great aesthetic is wonderful, and there are some instances where it can be more important than minor function. But in general, a great product trumps a great design any day of the week. From Craigslist to eBay to Amazon to Google, the products that people love are the products that work, more than the products that are ‘designed’.

This morning, I tried to add a new keyboard to my iMac. What could be simpler, right? Well, it turns out the Mac simply can’t do it. The Apple “Genius” wants me to lug my entire computer into the Apple store to figure it out. You’ve got to be kidding me, right? In the quest for an elegant look, Apple removed the basic controls for setting up your computer. Now I have a $2000 brick that needs to be taken to the store. This problem was created solely by design. They were more interested in removing buttons from the back of the computer than they were in making sure basic tasks could be done easily. Ah, but Macs are so easy to use, right?

Custom Protocols Everywhere

I’m here at the IETF discussing HTTP/2.0 (formerly SPDY).

One of the most interesting developments to me is that the rate of protocol evolution is about to step onto a rocket ship. In the past, we’ve been dependent on standards, browsers, and servers all simultaneously moving to new protocols. It takes about 15 years, in a good case! But mobile apps are cracking this open. With native code, instead of relying on the browser’s network stack, you just include your own client library (imagine a Facebook client library which speaks a custom protocol optimized just for Facebook). Since these apps are not doing general-purpose web surfing, they’re only connecting to their own servers. If you own the client, and you own the server, who needs a standard?

This is going to lead to massive innovation in protocols. Companies can invent new paradigms and algorithms, and then either keep these to themselves or kick them back to the IETF for later. From what I hear, SPDY sounds like a promising starting point for these apps, and it will lead to splintering. But you don’t have to wait for Google, Microsoft, and Firefox to agree on protocols anymore! As Will Chan puts it, it’s like HTML for HTTP – everyone can change it in practically real time.

The one last challenge is getting your protocol to work across the internet, where proxies, intermediaries, carriers, and other middlemen are always trying to trip you up. But fortunately, we have an answer for that too – tunnel it all through SSL, which you should be doing as a best practice anyway.
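
Here is a minimal sketch of the idea in Python; the host, port, and length-prefixed framing are hypothetical placeholders for whatever custom protocol you and your server agree on, and the point is simply that middleboxes only ever see TLS.

```python
import socket
import ssl
import struct

HOST, PORT = "api.example.com", 443  # hypothetical endpoint that you also control

def send_frame(tls_sock, payload: bytes) -> None:
    # A made-up framing: 4-byte big-endian length prefix, then the payload.
    tls_sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_frame(tls_sock) -> bytes:
    # Sketch only: a real client would loop until all expected bytes arrive.
    header = tls_sock.recv(4)
    (length,) = struct.unpack("!I", header)
    return tls_sock.recv(length)

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        # Intermediaries see only an ordinary TLS stream; the protocol inside
        # is ours to change whenever we ship new client and server builds.
        send_frame(tls, b"HELLO custom-protocol/1")
        print(recv_frame(tls))
```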

Visualizing SPDY vs HTTP

When we were developing SPDY, we wanted to know if SPDY was always faster, or just sometimes faster, than HTTP. The following chart is what convinced us that SPDY is indeed almost always faster. I didn’t publish this at the time because I didn’t like looking at web performance as a single number. “Is it faster?” includes many variables, such as the network simulated (bandwidth and latency), the packet loss rate, the content chosen, and the measurement (total page load time (PLT), first PLT, second PLT, or time-to-first-render). You’d really want a whole stack of these charts, rather than just a single combination.

What I like about this chart is how easy it is to compare two protocols. If the two protocols were identical in performance, all points would fall on the red midline. If one protocol is slower, the points fall closer to that protocol’s axis.

[Figure: SPDY vs HTTP page load time (PLT)]
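
For anyone who wants to build this kind of comparison chart, here is a minimal matplotlib sketch. The data points are synthetic placeholders generated at random, not the measurements behind the chart above; only the structure (one point per page, with a red midline marking identical performance) matches.

```python
# A sketch of the comparison chart with synthetic data, not real measurements.
import random
import matplotlib.pyplot as plt

random.seed(1)
# Hypothetical (HTTP PLT, SPDY PLT) pairs in milliseconds, one point per page.
http_plt = [random.uniform(500, 6000) for _ in range(300)]
spdy_plt = [t * random.uniform(0.5, 1.1) for t in http_plt]

plt.scatter(http_plt, spdy_plt, s=8)
limit = max(max(http_plt), max(spdy_plt))
plt.plot([0, limit], [0, limit], "r-", label="identical performance")  # the midline
plt.xlabel("HTTP page load time (ms)")
plt.ylabel("SPDY page load time (ms)")
plt.legend()
plt.show()
# Points below the red midline are pages that loaded faster over SPDY than HTTP.
```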

Notes about this test:

  • Used a static copy of the Alexa Top-300 websites (excluded porn sites), full content, unchanged
  • Simulated 2Mbps download, 386Kbps upload, 100ms RTT, 0% packet loss
  • No SSL
  • The average speedup was ~40% on this test
  • This test predates CWND changes in the kernel
  • The server was Linux; the client was Chrome on Windows 7

Followup to “Not as SPDY as You Thought”


In the last couple of weeks many people have asked me to comment on guypo’s benchmark blog post, “Not as SPDY as You Thought”.  Guy shared the post with me before he posted it.  Overall, I disagree with his title, but I don’t disagree with his results much, so I haven’t felt pressed to comment.  He tested something that nobody else has tested, and after reviewing his methodology, it’s mostly fine. Some suggestions have been made for improvement, which he was very open to, and we’ll likely see additional test results coming soon.  But his results are not contrary to Google’s or my own results; they’re just a different test.

The reason his results aren’t contradictory is that Guy’s test doesn’t test full SPDY page loads. Guy tested partial SPDY page loads, not full ones. More specifically, he tested this case: if you upgrade your primary domain to SPDY, but few of your other domains, your CDN, etc., how does SPDY perform? This is a perfectly valid case to test, especially when sites may take an incremental approach to upgrading. And I’m not surprised at all that if you only upgrade half of your page to SPDY, the results are not as good as if you upgrade all of it.

In the report, Guy breaks out domains into “1st party domains” and “3rd party domains”. He argues that since you don’t have control over the 3rd party content servers, that content may not get SPDY-ized, and therefore his test is valid. OK, that’s a good point. But how do we define “3rd party”? I consider “1st party” to be any content which you, as the site owner, have the ability to change directly. So if you load your main content from www.google.com and your images from images.google.com, those are both 1st party domains. Unfortunately, Guy’s classifier did not classify them this way.
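
To make the distinction concrete, here is a rough sketch (not Guy’s actual classifier) of how one might label a resource host as 1st or 3rd party. The owner-controlled domain sets are hypothetical examples and would have to be maintained per site; the registrable-domain logic is deliberately naive.

```python
from urllib.parse import urlsplit

# Hypothetical examples of registrable domains a site owner controls beyond the
# page's own domain (e.g., eBay also controls ebaystatic.com and ebayimg.com).
OWNER_CONTROLLED = {
    "www.ebay.com": {"ebay.com", "ebaystatic.com", "ebayimg.com", "ebayrtm.com"},
    "www.cnn.com": {"cnn.com", "turner.com"},
}

def registrable_domain(host: str) -> str:
    # Naive eTLD+1: the last two labels. A real classifier should use the
    # Public Suffix List (e.g., the tldextract package) to handle .co.uk etc.
    return ".".join(host.split(".")[-2:])

def is_first_party(page_url: str, resource_url: str) -> bool:
    page_host = urlsplit(page_url).hostname
    resource_host = urlsplit(resource_url).hostname
    owned = OWNER_CONTROLLED.get(page_host, set()) | {registrable_domain(page_host)}
    return registrable_domain(resource_host) in owned

print(is_first_party("https://www.ebay.com/", "https://ir.ebaystatic.com/x.js"))  # True
print(is_first_party("https://www.ebay.com/", "https://ad.doubleclick.net/ad"))   # False
```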

To understand what I mean, let’s take a look at the domains used on a few sample pages and how his test loaded resources from them. I simply picked three pages from his test results. Every page tested is different, but the patterns below are common to many of the top websites.

Domains Used

www.cnn.com: www.cnn.com, icompass.insighexpressai.com, z.cdn.turner.com, i.cdn.turner.com, www.facebook.com, ad.insightexpressai.com, s-static.ak.fbcdn.com, svcs.cnn.com, gdyn.cnn.com, s-external.ak.fbcdn.com

www.ebay.com: www.ebay.com, ir.ebaystatic.com, i.ebayimg.com, q.ebaystatic.com, p.ebaystatic.com, thumbs4.ebaystatic.com, rover.ebay.com, srx.main.ebayrtm.com, rtm.ebaystatic.com, ad.doubleclick.net, pics.ebaystatic.com, s0.2mdn.net

www.yahoo.com: www.yahoo.com, l.yimg.com, us.bc.yahoo.com, v4test.yahoo.com, v4test2.yahoo.com, v4test3.yahoo.com, dstest.yahoo.com, dstest2.yahoo.com, dstest3yahoo.com, ad.doubleclick.net

                                                   www.cnn.com   www.ebay.com   www.yahoo.com
SPDY domains                                                 1              1               1
Non-SPDY domains                                             9             11               9
Resources fetched over SPDY                                 40             20              48
Resources fetched over HTTP                                 46             37              26
“1st party” resources that could have
been SPDY but were NOT in Guy’s test                        31             34              24

I hope you can now see why I don’t discredit Guy’s test results.  On these pages, 25-50% of the 1st party controlled resources which could have been loaded over SPDY weren’t loaded over SPDY at all. If you only partially use SPDY, you only get partial results. This is okay to me.

Nobody should think I’m discrediting Guy’s work here.  He’s done a great job with great vigor, and it takes an incredible amount of time to do these tests.  He’s planning to do more tests, and I’m very thankful that he is doing this and that Akamai is letting him do so.

In the next wave of tests, I expect we’ll see that SPDY benefits are increased.  Keep in mind that your average site isn’t going to see the 2x speed boost.   The overall benefit of SPDY is conditional on many factors, and websites today have not yet been tuned for SPDY.  Most sites will see benefits in the 5-20% range (like Google did).   A few will see 50% better.  A few will see worse.  Everyone will benefit from new optimization possibilities, less complex websites, and a more network and mobile friendly protocol. More testing like Guy’s is the key to a better HTTP/2.0.

The Web only Works Thanks to Reload… (and why the mobile web fails)

When you build a mobile app that uses the network, it is instantly clear that your app needs to be robust against all sorts of network failures:

  • network completely down
  • network transitioning from WiFi to 3G
  • network insanely slow (EDGE!)
  • network timeouts – is 5s long enough to wait? 10s? 30?
  • network radio warmup is slow
  • what happens if your app is terminated before finishing a critical network request?
  • etc…

Dealing with these is hard, but not impossible. Applications retry at various levels, trading off battery life and user-perceived performance, all the time. After enough work, you can make the app functional.
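
Here is a minimal sketch of such a retry-with-backoff loop; the URL, timeout, and retry count are arbitrary placeholders rather than recommendations, and a real app would also tune these against battery and latency budgets.

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, attempts: int = 3, timeout: float = 5.0) -> bytes:
    """Fetch a URL, retrying on network failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError, OSError):
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

data = fetch_with_retries("https://example.com/api/resource")  # placeholder URL
```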

But if you try to write an app using HTML5, how do you do this?

You can’t.

The web simply isn’t designed for partial network failures on a web page. Web pages are composed of hundreds of subresources from multiple sites these days. What happens when CSS file #5 out of 32 resources fails to load? What happens when you can’t connect to i.amazon.com even though you already loaded the main content from www.amazon.com? Does your application even know? Generally not. You can trap for some sorts of errors, but the browser will NOT automatically retry any of these failures for you. Most likely you’ll be left with a web page which renders incorrectly, hangs forever, or throws JavaScript errors on the whole page because a critical set of code just never got loaded.

Of course, these problems can happen on your desktop, too. But they generally don’t happen as often. And when they do occur, every user easily becomes his own network administrator thanks to the web browser’s handy-dandy “reload” button. How many times have you drummed your fingers for a few seconds before reloading a page? Probably a lot! But on mobile, network errors occur *all* the time. Do mobile apps have ‘reload’ buttons? Generally not – users are becoming quite accustomed to apps which can handle their own errors gracefully.

Sadly, I think this is one more nail in the coffin for HTML5 on mobile. Browsers need to be completely overhauled to properly deal with network errors and retries before HTML5 can be a serious contender to native applications.