P2SH Safe Addresses

I wrote this whitepaper a while ago, and have now fully implemented it as part of BitGo. If you own bitcoin, it is worth checking out!


P2SH Safe Address

mike belshe

This paper describes a mechanism for using Bitcoin’s P2SH functionality to build a stronger, more secure web wallet.

Bitcoin addresses (where your bitcoins are stored) are secured today using public key cryptography and the Elliptic Curve Digital Signature Algorithm (ECDSA). This offers very strong security. But the secret keys used within ECDSA are lengthy 256-bit numbers which humans can’t remember, and the security of your bitcoin hinges on how safely you can protect this key from others. To help protect and manage keys, users employ bitcoin wallets. There are many wallets available to choose from, and each offers its own benefits for ease of use, security, and features. But wallets can be divided into two basic categories:

  • Client-side Wallets
    These wallets, such as the original Satoshi Client, run using software installed locally on the user’s computer.
  • Web Wallets
    These wallets are hosted on a web site and require no custom software installation from the user.

Client-side wallets

The advantage of a client-side wallet is that your bitcoin keys are entirely your own. No intermediaries are required to help you transact. The disadvantage of the client-side wallet is that the security is entirely your own. In effect, you are the guard of your own bank. As such you need to:

  • prevent malware and viruses from stealing your keys
  • maintain and update proper backups of your keys
  • enforce physical security of the computer(s) containing the keys (e.g. locked with an encrypted hard disk)

Accessing your bitcoins from multiple computers can be difficult, as it requires you to transfer the keys safely between those computers. Further, because most users take extra precautions with the passwords protecting their bitcoin, forgetting or losing unusually ‘strong’ passwords becomes a real threat of loss.

Web Wallets

Web Wallets have the advantage that they are accessible through the web, from anywhere. The web site hosting your wallet needs to be a trusted party, as they often require direct access to your keys, or they may hold your keys while you don’t have them at all. Assuming that the website does a good job managing the security of your keys, this can be an advantage, as you don’t need to do it yourself.
But the disadvantages are obvious. A web site holding many keys for millions of users is a very obvious target for attackers. If the web site is hacked, you will lose your bitcoin. Similarly, if the website is shut down due to failure to meet regulatory compliance, you will lose your bitcoin as well.

Pay To Script Hash (a.k.a. P2SH)

P2SH is a new type of bitcoin address which was introduced as part of Bitcoin Improvement Proposal 16 (BIP 16) in early 2012. P2SH addresses can be secured by more complex algorithms than traditional bitcoin addresses. In this paper, we evaluate using a 2-of-3 signature address, which we’ll call a “2-of-3 address”.
Unlike traditional bitcoin addresses, which are secured with a single ECDSA key, 2-of-3 addresses are secured with three ECDSA keys. Depositing funds into the 2-of-3 address is the same as depositing funds into a standard bitcoin address. However, withdrawing funds from the 2-of-3 address requires at least 2 of the 3 keys to sign.
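The redeem script behind such an address can be sketched in a few lines. This is an illustration only: the three public keys below are placeholder bytes, and the final HASH160/Base58Check step that produces the actual address is noted in a comment rather than implemented.

```python
# Sketch of the redeem script behind a 2-of-3 P2SH address.
# The three "public keys" here are placeholders; a real wallet uses
# compressed ECDSA public keys (33 bytes, 0x02/0x03 prefix).
OP_CHECKMULTISIG = 0xAE

def multisig_redeem_script(pubkeys: list, m: int = 2) -> bytes:
    """Build an m-of-n OP_CHECKMULTISIG redeem script."""
    assert 1 <= m <= len(pubkeys) <= 15
    script = bytes([0x50 + m])                 # OP_m (OP_2 = 0x52)
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk        # push each public key
    script += bytes([0x50 + len(pubkeys)])     # OP_n (OP_3 = 0x53)
    script += bytes([OP_CHECKMULTISIG])
    return script

# Three placeholder 33-byte "compressed public keys":
keys = [b"\x02" + bytes(32), b"\x03" + bytes(32), b"\x02" + b"\x01" * 32]
script = multisig_redeem_script(keys)
# The P2SH address itself is Base58Check(0x05 || HASH160(script)),
# where HASH160 = RIPEMD-160(SHA-256(script)) -- omitted here.
```

Funds sent to the hash of this script can only be spent by a transaction carrying the script plus any 2 of the 3 signatures.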

Using a 2-of-3 address offers several advantages:

  • you can give a trusted party a single key for final approval on transactions without enabling them to initiate transactions on your funds alone.
  • you can lose a key but not lose access to your funds.
  • you can share a key with multiple, trusted parties who individually cannot access your funds, but can if they work together.

A Proposal for a 2-of-3 Address Web Wallet Implementation

In this section, we propose an implementation of a web wallet using the 2-of-3 address. It provides the following features:

  • Safety
    • The service cannot initiate a transaction by itself
    • Stealing the user’s online password is not sufficient to steal funds
    • Stealing the user’s online private key is not sufficient to steal funds
    • Malware on the user’s computer cannot steal funds
  • Convenience
    • The user can access his funds from any computer
    • The user does not need to remember his private key and can access funds with a password and two-factor authentication.
  • Recovery
    • The user can recover funds even if the service is shut down for regulatory reasons
    • The user can lose his website password and not lose his funds
    • The user can lose his private key and not lose his funds
  • Privacy
    • Privacy must be maintained for the user’s funds

This implementation will rely upon:

  • A service (e.g. a website) with all communications over TLS.
  • Coordination between a browser and that service
  • Use of 2-factor authentication
  • Use of strong passwords

2-of-3 Address Creation

The mechanics of creating the 2-of-3 address are very important. In this proposal, creation happens both on the user’s computer and on the website. Critically, the user will generate 2 keys while the server will generate one. Address creation is the only time when two or more of the keys are on the same computer concurrently.

The process starts with the user’s browser (or client-side key creator) generating 2 ECDSA keys:

  • The user’s key-pair
  • A backup key-pair

The backup key-pair will be printed out and stored completely offline. It is only for fund recovery. The backup public key will be stored with the service. The service never sees the backup private key and cannot use it to unlock funds.

The user’s private key will be encrypted on the user’s machine with a strong password of the user’s choice. The encrypted private key and the public key will be stored in the service. Because the private key is encrypted with a password the service has never seen, the service cannot use this key to unlock funds.
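A minimal sketch of that client-side encryption step, assuming PBKDF2 for key stretching. The XOR keystream here is for illustration only; a real wallet would use an authenticated cipher (e.g. AES-GCM) or a scheme like BIP38, and the function name and iteration count are my own choices, not taken from any specific implementation.

```python
import hashlib, os

def xor_crypt(data: bytes, password: str, salt: bytes) -> bytes:
    """Derive a keystream from the password (PBKDF2-HMAC-SHA256)
    and XOR it over the data. The operation is symmetric: applying
    it twice with the same password and salt decrypts.
    NOTE: illustration only -- real code needs authenticated
    encryption so tampering is detectable."""
    stream = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 200_000, dklen=len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)
private_key = os.urandom(32)      # stand-in for the user's ECDSA key
encrypted = xor_crypt(private_key, "correct horse battery", salt)
```

Because only the encrypted blob (plus the salt) is uploaded, the service stores the key without ever being able to use it.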

The server will then create a 3rd key. Its private key will be encrypted with a strong password known only to the service and stored on the server. The server will use the 2 public keys from the user, as well as its own public key, to create the 2-of-3 address. The server will notify the user of the server’s public key, as this will be critical for recovering funds from the address if the service ever goes down. The user will print out a copy of all 3 public keys and store them securely.

With this system, we now have an address where the user has 1 key, the service has 1 key, and the 3rd key has been saved for later use.

Withdrawing Funds from the 2-of-3 Address

To withdraw funds from the 2-of-3 address, the following steps will need to take place.

First, the user will login or authenticate to the service, and inform the service that she will make a withdrawal. The service will require the user to further authenticate with a 2-factor authentication challenge to a smartphone or mobile device. Note: 2-factor authentication is required because even strong passwords can be stolen with a keylogger.
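The paper does not prescribe a particular 2-factor scheme; a common choice is a standard one-time-password algorithm such as HOTP (RFC 4226), sketched here with its published test secret.

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over the counter, dynamic
    truncation to a short numeric code the user types in."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret; counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))
```

TOTP (the time-based variant used by most smartphone authenticator apps) simply replaces the counter with the current 30-second interval.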

Upon validation of the 2-factor authentication, the service sends the user’s encrypted private key to the user’s browser. The browser will prompt the user for the user’s password to unlock the encrypted private key.

Executing within the user’s browser, the application creates the bitcoin transaction for the withdrawal, unlocks the encrypted private key, and signs the transaction with a single signature.

Finally, the signed transaction is then sent to the service. The service validates the transaction, and if suitable, applies the 2nd signature using its private key. Note that the service will likely implement transaction limits. If, for some reason, the user’s account was compromised, the service can refuse to sign large transactions unless further authentication or the backup key signature is presented.
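The service's co-signing decision described above might look like the following sketch. The function name, limit, and flow are my own assumptions for illustration, not any service's actual policy engine; a real service would also verify the user's signature and the transaction itself.

```python
# Hypothetical co-signing policy: 2FA always required; transactions
# over a daily limit additionally require the offline backup key.
DAILY_LIMIT_BTC = 5.0   # assumed per-day policy limit

def should_cosign(amount_btc: float, spent_today_btc: float,
                  two_factor_ok: bool, backup_key_sig: bool = False) -> bool:
    """Decide whether the service applies its 2nd signature."""
    if not two_factor_ok:                      # 2FA is non-negotiable
        return False
    if spent_today_btc + amount_btc > DAILY_LIMIT_BTC:
        # Over the limit: only proceed if the backup key has also
        # signed (the "further authentication" path).
        return backup_key_sig
    return True
```

Only when this check passes does the service add its signature, completing the 2-of-3 requirement and making the transaction broadcastable.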

Maintaining Privacy

To maintain maximal privacy, it is important not to re-use bitcoin addresses. However, re-generating keys from scratch with each transaction would undermine many of the backup benefits of this system. Users of standard bitcoin addresses already face this problem today and use a variety of deterministic wallet mechanisms to generate multiple keys from a single source.

The same techniques can be applied to the 2-of-3 address. After a key has signed, funds should be rotated to a new address derived from the next key in the deterministic sequence.

As a compromise solution, the 2-of-3 address offers one more option: only rotating the server’s key. Since the 2-of-3 key is generated from 3 keys, one of which is managed by the service, we can rotate the user’s funds to a new address by only rotating the server’s key. The resulting address cannot be correlated to the original 2-of-3 address. However, upon spending of the outputs, the public keys will again be revealed and a correlation could be made at that time. To maintain the ability for the user to extract funds without the service, the service will need to send the newly minted service public key to the user for safekeeping. This can be done via email. But again for maximal privacy, use of deterministic key rotation is recommended.
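The core idea of deterministic derivation can be sketched in a few lines. This is a deliberately simplified toy (plain HMAC chaining), not BIP32; real wallets use BIP32's hierarchical derivation, but the property being relied on is the same: one stored secret deterministically yields a whole sequence of keys.

```python
import hmac, hashlib

def derive_child(parent: bytes, index: int) -> bytes:
    """Toy deterministic derivation: child_i = HMAC-SHA256(parent, i).
    Illustrates the property the rotation scheme depends on --
    re-deriving the same index always yields the same key, while
    different indices yield unrelated-looking keys."""
    return hmac.new(parent, index.to_bytes(4, "big"), hashlib.sha256).digest()

seed = b"\x01" * 32           # stand-in for a backed-up master secret
k0, k1 = derive_child(seed, 0), derive_child(seed, 1)
```

Because derivation is deterministic, a single offline backup of the seed covers every rotated address, which is exactly why rotation does not forfeit the recovery guarantees above.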

Other Advantages

Using multi-signature wallets provides flexibility for the user to share keys with trusted family without exposing all funds. For example, a user may decide to give one key to his sister, and another to his lawyer, with instructions to get the bitcoin when the user dies. With a traditional bitcoin address, the lawyer and sister would both have full access to the user’s funds. With a 2-of-3 wallet, they would need to collude against the user. But overall, the 2-of-3 address offers a lot of flexibility.


Weaknesses

No security mechanism is perfect. One potential weakness of the 2-of-3 address is that 2 of the 3 keys are online in the user’s browser at address creation time. Malware that specifically targets an application using 2-of-3 wallets could lie in wait for an address to be created, steal the keys, and then extract the funds later. However, any wallet, client-side or server-side, suffers from this problem. With a 2-of-3 address, the exposure to malware is mostly limited to address creation time, whereas traditional addresses are exposed to this weakness any time you transact. Hardware wallets may be the best mitigation against this particular attack.

My Uncle

I love my Uncle. He’s always been there for me and he is the best Uncle I could ever want.

But my Uncle has a serious problem.

You see, my Uncle is a bit of a dreamer, and in recent years every business he’s started has ended in failure. He’s articulate and bright, so he gets hired to run huge, multi-billion dollar businesses across the country. But each time, despite his good intentions, he always drives the business bankrupt.

So while my Uncle used to be rich, today my Uncle is bankrupt. Miraculously, he has held onto his businesses and continues to try to make them work. But his monthly bills on them are 150% more than his income! Sadly, he’s already tapped out everyone in our family to finance his debt. Each of us that is working owes about $150k to help him get out of debt.

The worst part is that my Uncle has stopped listening to the family. Because of his position, a lot of outsiders want to manipulate my Uncle, and they do so over and over again. This is probably what hurts his businesses most.

Obviously, this has been extremely stressful for our family – so much so that recently we’ve started fighting. Half of us want to cut my Uncle off and tell him “enough is enough!” But we all love my Uncle and so the other half really wants to see him succeed in his next endeavor. Since he won’t listen to us, some of us have stopped giving him his allowance, and now he’s really angry. He says that we promised him an allowance and that we have to keep giving it. But times are hard on all of us, and he’s making it worse.

And of course, as is typical with my Uncle every four years or so, he has a new business venture he wants to start. The business sounds great and he makes a compelling pitch for it. If it works it might help a lot of people! But my Uncle’s track record is so poor. He’s got so much debt and other problems, that we just have to say no. This is going to be a tough lesson for my poor Uncle, but he’s risking the financial security of the entire family now. We simply can’t allow him to ruin all of us.

So, Uncle Sam – God Bless you. I love you and always will. I hope you recover, but for now I have to cut you off. Once you have your existing businesses in order, we can talk again. But until then, I can’t help you take on more challenges. You need to finish what you start before you start more.

Your devoted nephew, Mike.

The Feds Are About To Learn How Awesome Bitcoin Is

In the wake of the Silk Road bust this week, I think we’ll see some new thinking around Bitcoin. Contrary to popular belief, Bitcoin is not anonymous at all. It never was, and it never will be. It is probably the most public trading system ever created. This sounds scary, but unless you’re a criminal, it isn’t really.

For Dread Pirate Roberts, aka Ross Ulbricht, it seemed anonymous. Until someone tied his zeroes and ones back to him. Now we know who he is, and the FBI busted him. Further, with the details of DPR’s bitcoin accounts in their possession, the FBI now knows everyone he has ever transacted with. Ever. You can’t do that with cash! Every person that used Silk Road is a leaf in a tree called the Bitcoin blockchain.

The FBI will, if they have not already, trace these so-called anonymous Bitcoin accounts back to transactions at popular trading exchanges like Mt Gox, Coinbase, and Bitstamp. These exchanges, because they trade real money for bitcoin, know their users’ true identities. This will enable the government to get warrants for those exchanges to turn over those identities. I guarantee that the vast majority of Silk Road users did not have the technical savvy required to stay anonymous. They bought coins at these exchanges, then headed over to Silk Road. They shouldn’t have believed the anonymous hype – it’s just not true.

In other words, digital drug busts are an FBI agent’s wet dream. Every transaction ever done is public, and it’s just a matter of a half-dozen small search warrants to get to every user that ever used Silk Road. Even if it takes them 10 years to bring down the kingpin drug dealer, they will be able to retroactively identify all of his customers.

So unless the operators of the exchanges have the bravery that Ladar Levison of Lavabit has, these records will go to the FBI within a matter of days.

On one hand, this is terrible! It’s such an invasion of privacy. But on the other hand, is it? The law enforcement agents are doing exactly the same work they’ve always done, but this time it’s digital. We created these laws and we created these agencies. With digital currencies, they can do their job with incredible efficiency. The law-abiding citizen should be ecstatic – we could literally haul a million druggies to jail as soon as the paperwork is done.

I for one am glad that Silk Road is gone. It gave Bitcoin a bad name. I hope this bust goes far and wide and helps the government recognize that Bitcoin is an opportunity rather than an obstacle.

Imagine A World With No Door Handles

It’s Saturday morning. You’ve been dying to get that latest iPhone/iMac/iWatch/iFad. So you head down to the Apple Store. When you arrive, the doors are closed. You try to push the door, but the door won’t open. It looks like people are inside, but you can’t figure out how to get in.

Suddenly, Steve Jobs walks up and asks, “Hi, may I help you?”

“Whoa! Steve Jobs! What an honor to meet you! I was just trying to get into your store, but there are no handles on the doors.”

Steve says, “Oh, of course. We decided to simplify the design of the door, giving it a sleek, elegant new look. Those old fashioned handles were just plain ugly. How do you like our new doors?”

“I guess they look okay. But I can’t get in.”

“Of course you can, you just have to tell the door to open, like this… Open, door.” Calmly, he waits a moment, but nothing happens.

Surprised, Steve tries again, “Open door.” A little louder – “Open door!” “Door, o-pen!” “O-pen Do-or.”

“Hmm…,” Steve shrugs, “the door seems to be having trouble right now. Oooo-PEN DOOR!” And at last, the doors open.

“Ah, there, you see! Had a small glitch, but isn’t that amazing?”


Unfortunately this silly story is indicative of a dangerous pattern we’re seeing in software and hardware today — the rise of Design before Function. A great aesthetic is nice, and there are some instances where it can outweigh minor function. But in general, a great product trumps a great design any day of the week. From Craigslist to eBay to Amazon to Google – products that people love are products that work, more than products that are ‘designed’.

This morning, I tried to add a new keyboard to my iMac. What could be simpler, right? Well, it turns out the Mac simply can’t do it. The Apple “Genius” wants me to lug my entire computer into the Apple store to figure it out. You’ve got to be kidding me, right? In the quest for the elegant look, Apple removed the basic controls for setting up your computer. Now, I have a $2000 brick that needs to be taken to the store. This problem was created solely by design. They were more interested in removing buttons from the back of the computer than in making sure basic tasks could be easily done. Ah, but Macs are so easy to use, right?

Custom Protocols Everywhere

I’m here at the IETF discussing HTTP/2.0 (was SPDY).

One of the most interesting developments to me is that the rate of protocol evolution is about to step onto a rocket ship. In the past, we’ve been dependent on standards, browsers, and servers to all simultaneously move to new protocols. It takes about 15 years, in a good case! But mobile apps are cracking this open. With native code, instead of relying on the browser’s network stack, you just include your client library (imagine a facebook client library which speaks a custom protocol optimized just for facebook). Since these apps are not doing general purpose web surfing, they’re only connecting to their own servers. If you own the client, and you own the server, who needs a standard?

This is going to lead to massive innovation in protocols. Companies can invent new paradigms and algorithms, and then either keep these to themselves or kick them back to the IETF for later. From what I hear, SPDY sounds like a promising starting point for these apps, and it will lead to splintering. But you don’t have to wait for Google, Microsoft, and Firefox to agree on protocols anymore! As Will Chan puts it, it’s like HTML for HTTP – everyone can change it in practically real time.

The one last challenge is getting your protocol to work across the internet, where proxies, intermediaries, carriers, and other middlemen are always trying to trip you up. But fortunately, we have an answer for that too – tunnel it all through SSL, which you should be doing as a best practice anyway.
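A custom protocol of this sort usually starts with its own message framing, which then rides inside the TLS tunnel (in Python, by wrapping the socket with the standard `ssl` module). Here is a minimal length-prefixed framing sketch; the format is my own illustration, not SPDY's actual framing.

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def read_frames(buf: bytes):
    """Split a received byte stream back into whole messages,
    returning (messages, leftover_partial_bytes)."""
    msgs = []
    while len(buf) >= 4:
        (n,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + n:       # incomplete frame: wait for more
            break
        msgs.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return msgs, buf

wire = frame(b"hello") + frame(b"world")
msgs, rest = read_frames(wire)
```

Because the bytes on the wire are opaque TLS records, intermediaries cannot mangle or "helpfully" rewrite the framing, which is exactly why tunneling through SSL makes custom protocols deployable.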

Visualizing SPDY vs HTTP

When we were developing SPDY, we wanted to know if SPDY was always faster or just sometimes faster than HTTP. The following chart is what convinced us that SPDY is indeed almost always faster. I didn’t publish this at that time because I didn’t like looking at web performance as a single number. “Is it faster” includes many variables, such as the network simulated (bandwidth & latency), the packet loss rate, the content chosen, and the measurement (total PLT, first PLT, second PLT, or time-to-first-render). You’d really want a whole stack of these charts, rather than just a single combination.

What I like about this chart is how easy it is to compare two protocols. If the two protocols are identical in performance, all points would be on the midline (red). If one protocol is slower, then the points will fall closer to that axis.


Notes about this test:

  • Used a static copy of the Alexa Top-300 websites (excluded porn sites), full content, unchanged
  • Simulated 2Mbps download, 386Kbps upload, 100ms RTT, 0% packet loss
  • No SSL
  • The average speedup was ~40% on this test
  • This test predates CWND changes in the kernel
  • Server was linux, client was chrome on Windows 7

I will donate $1,000,000 to the US Govt This Year

The US debt is a big problem (currently ~$16T, or about $50K per person). I’m willing to do my share to help pay it off. To do this, I need the government to commit to a sustainable plan so that I’m not just throwing good money after bad.

Thus I make this promise:

If Washington can make spending match tax receipts, I will donate an extra $1,000,000 to the USA to help pay down the debt.

Followup to “Not as SPDY as You Thought”

In the last couple of weeks many people have asked me to comment on guypo’s benchmark blog post, “Not as SPDY as You Thought”.  Guy shared the post with me before he posted it.  Overall, I disagree with his title, but I don’t disagree with his results much, so I haven’t felt pressed to comment.  He tested something that nobody else has tested, and after reviewing his methodology, it’s mostly fine. Some suggestions have been made for improvement, which he was very open to, and we’ll likely see additional test results coming soon.  But his results are not contrary to Google’s or my own results; they’re just a different test.

The reason his results aren’t contradictory is because Guy’s test doesn’t test SPDY page loads.  Guy tested partial SPDY page loads, not full SPDY page loads.  More specifically, he tested this case:  if you upgrade your primary domain, but few of your other domains, your CDN, etc, how does SPDY perform?  This is a perfectly valid case to test – especially when sites may take an incremental approach to upgrading.  And I’m not surprised at all that if you only upgrade half your page to SPDY, that the results are not as good as if you upgrade all of your page to SPDY.

In the report, Guy breaks out domains into “1st party domains” and “3rd party domains”.  He argues that since you don’t have control over the 3rd party content servers, it may not get SPDY-ized, and therefore his test is valid.  OK – that’s a good point.  But how do we define “3rd party”?  I consider “1st party” to be any content which you, as the site owner, have control to change directly.  So if you load your main content from www.google.com and your images from images.google.com, those are both 1st party domains.  Unfortunately, Guy’s classifier did not classify them this way.

To understand what I mean, let’s take a look at the domains used on a few sample pages and how his test loaded resources from them.  I simply picked three from his test results.  Every page tested is different, but the patterns below are common to many of the top websites.

[Table omitted: for each of the three sample pages, the original post tabulated the domains used, the SPDY domains, the non-SPDY domains, the resources fetched over SPDY, the resources fetched over HTTP, and the “1st party” resources that could have been loaded over SPDY but were NOT in Guy’s test.]
I hope you can now see why I don’t discredit Guy’s test results.  On these pages, 25-50% of the 1st party controlled resources which could have been loaded over SPDY weren’t loaded over SPDY at all. If you only partially use SPDY, you only get partial results. This is okay to me.

Nobody should think I’m discrediting Guy’s work here.  He’s done a great job with great vigor, and it takes an incredible amount of time to do these tests.  He’s planning to do more tests, and I’m very thankful that he is doing this and that Akamai is letting him do so.

In the next wave of tests, I expect we’ll see that SPDY benefits are increased.  Keep in mind that your average site isn’t going to see the 2x speed boost.   The overall benefit of SPDY is conditional on many factors, and websites today have not yet been tuned for SPDY.  Most sites will see benefits in the 5-20% range (like Google did).   A few will see 50% better.  A few will see worse.  Everyone will benefit from new optimization possibilities, less complex websites, and a more network and mobile friendly protocol. More testing like Guy’s is the key to a better HTTP/2.0.

The Web only Works Thanks to Reload… (and why the mobile web fails)

When you build a mobile app that uses the network, it is instantly clear that your app needs to be robust against all sorts of network failures:

  • network completely down
  • network transitioning from WiFi to 3G
  • network insanely slow (EDGE!)
  • network timeouts – is 5s long enough to wait? 10s? 30s?
  • network radio warmup is slow
  • what happens if your app is terminated before finishing a critical network request?
  • etc…
Dealing with these is hard, but not impossible. Applications routinely retry at various levels, trading off battery life against user-perceived performance. After enough work, you can make the app functional.
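The kind of retry logic native apps implement might look like the following sketch, using exponential backoff with jitter. The function names and backoff parameters are my own illustration, not from any particular app.

```python
import random, time

def fetch_with_retry(fetch, attempts=4, base_delay=0.5):
    """Retry a flaky network call with exponential backoff + jitter.
    `fetch` is any callable that raises OSError on network failure."""
    for attempt in range(attempts):
        try:
            return fetch()
        except OSError:
            if attempt == attempts - 1:
                raise                      # out of retries: surface it
            # back off 0.5s, 1s, 2s, ... with +/-50% jitter
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Simulate a resource that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("network down")
    return "ok"
```

This is exactly the robustness a browser does not give a web page: each subresource either loads or silently doesn't, with no per-resource retry policy the page can control.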

But if you try to write an app using HTML5, how do you do this?

You can’t.

The web simply isn’t designed for partial network failures on a web page. Web pages are composed of hundreds of subresources from multiple sites these days. What happens when CSS #5 out of 32 resources fails to load? What happens when you can’t connect to i.amazon.com even though you already loaded the main content from www.amazon.com? Does your application even know? Generally not. You can trap for some sorts of errors; but the browser will NOT automatically retry any of these failures for you. Most likely you’ll be left with a web page which renders incorrectly, hangs forever, or throws javascript errors on the whole page because a critical set of code just never got loaded.

Of course, these problems can happen on your desktop, too. But they generally don’t happen as often. And when they do occur, every user easily becomes his own network administrator thanks to the web browser’s handy dandy “reload” button. How many times have you drummed your fingers for a few seconds before reloading a page? Probably a lot! But on mobile, network errors occur *all* the time. Do mobile apps have ‘reload’ buttons? Generally not – users are becoming quite accustomed to apps which can handle their own errors gracefully.

Sadly, I think this is one more nail in the coffin against HTML5 on mobile. Browsers need to be completely overhauled to properly deal with network errors and retries before HTML5 can be a serious contender to native applications.