<h1>Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit</h1>
<p>2020-03-11, by Kaihua Qin, Liyi Zhou, Benjamin Livshits, and Arthur Gervais</p>
<p><strong>TL;DR</strong> Flash loans are a recent blockchain smart contract construct that enables the issuance of loans that are only valid within one transaction and must be repaid by the end of that transaction. This concept has led to a number of interesting attack possibilities, some of which <a class="reference external" href="https://etherscan.io/tx/0xb5c8bd9430b6cc87a0e2fe110ece6bf527fa4f170a4bc8cd032f768fc5219838">have</a> <a class="reference external" href="https://etherscan.io/tx/0x762881b07feb63c436dee38edd4ff1f7a74c33091e534af56c9f7d49b5ecac15">been</a> exploited recently (February 2020). We analyze those two existing DeFi flash loan attack vectors with significant ROIs (beyond 500k%). We go on to formulate finding flash loan-based attack parameters as an optimization problem over the state of the underlying DeFi platforms and the Ethereum blockchain. Our results show how the two previously executed attacks could have been boosted to yield a profit of 829.5k USD (instead of 350k USD) and 1.1M USD (instead of 600k USD), respectively. For a light read, feel free to proceed; for all details, we refer to <a class="reference external" href="https://arxiv.org/pdf/2003.03810.pdf">our paper</a>.</p>
<div class="section" id="flash-loans-the-new-defi-kid-on-the-block">
<h2>Flash Loans - the New DeFi Kid on the Block</h2>
<p>In the traditional economy, when a lender grants a loan to a borrower, there is always a risk that the borrower does not repay its debt. This brings us to the following question:</p>
<blockquote>
<em>What if it were possible to offer credit, without the risk that the borrower does not pay back the debt?</em></blockquote>
<div class="figure align-left">
<img alt="Flash loan steps within one blockchain transaction." class="no-border" src="https://hackingdistributed.com/images/2020/flash-loan.png" style="width: 700px;" />
<p class="caption">A flash loan, taken from a liquidity pool, used, and repaid within one transaction.</p>
</div>
<p>Here, blockchain-based flash loans come into play. A flash loan is a loan that is only valid <em>within</em> one blockchain transaction. A flash loan fails if the borrower does not repay its debt <em>before the end</em> of the transaction that borrows the loan. This is possible because a blockchain transaction can be reverted during its execution if the repayment condition is not satisfied.</p>
<p>The assets for the flash loan are taken from a publicly funded smart contract pool. At the time of writing, some of the biggest flash loan pools are provided by <a class="reference external" href="http://aave.com/">Aave</a> and <a class="reference external" href="https://dydx.exchange/">dYdX</a>, each exceeding 20M USD in value. Aave charges an interest rate of 0.09%, while dYdX’s smart contract only demands the repayment plus 1 Wei in fees.</p>
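The revert-on-non-repayment mechanics can be sketched in a few lines of Python (a toy simulation for intuition only; <code>LendingPool</code> and the exception-based revert are illustrative assumptions, not any platform's actual contract code):

```python
class TransactionReverted(Exception):
    """Models an EVM revert: all state changes of the transaction are discarded."""

class LendingPool:
    """Toy flash loan pool. The 0.09% fee mirrors Aave's rate."""
    def __init__(self, funds, fee_rate=0.0009):
        self.funds = funds
        self.fee_rate = fee_rate

    def flash_loan(self, amount, borrower_logic):
        snapshot = self.funds              # state before the "transaction"
        self.funds -= amount               # issue the loan
        repaid = borrower_logic(amount)    # borrower does arbitrary work
        if repaid < amount * (1 + self.fee_rate):
            self.funds = snapshot          # revert: the loan never happened
            raise TransactionReverted("flash loan not repaid in full")
        self.funds += repaid               # loan + fee returned within the tx

pool = LendingPool(funds=1_000_000)

# A borrower that repays the loan plus 0.1% (covering the 0.09% fee)
# succeeds, and the pool keeps the surplus:
pool.flash_loan(10_000, lambda amt: amt * 1.001)

# A borrower that cannot repay reverts the whole transaction, leaving the
# pool balance untouched:
try:
    pool.flash_loan(10_000, lambda amt: 0)
except TransactionReverted:
    pass
```

The key point the sketch captures is that the lender takes no credit risk: either the pool ends the transaction with at least its original balance plus fees, or the transaction never happened.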
<p>To gauge flash loan usage, we collected flash loan data between the 8th of January 2020 and the 26th of February 2020 with a full archive Ethereum node, gathering all event logs from the Aave <a class="reference external" href="https://etherscan.io/address/0x398eC7346DcD622eDc5ae82352F02bE94C62d119">smart contract</a>. Note that Aave only went live in early January 2020.
We observe a total of 105 loans, and most flash loans interact with lending/exchange DeFi systems (e.g. Compound, Dai, MakerDAO, Uniswap). The flash loans' transaction costs (i.e. gas) appear significant (at times beyond 4M gas, compared to 21k gas for a regular Ether transfer). The full details can be found in Figure 5 of the <a class="reference external" href="https://arxiv.org/pdf/2003.03810.pdf">accompanying paper</a>.</p>
</div>
<div class="section" id="flash-loan-use-cases">
<h2>Flash Loan Use Cases</h2>
<p>But what are flash loans really good for? Based on community feedback and our own analysis, we identified four use cases for flash loans: arbitrage, wash trading, collateral swapping, and flash minting.</p>
<div class="section" id="arbitrage">
<h3>Arbitrage</h3>
<p>Given flash loans, a trader can perform arbitrage across different DEXes without the need to hold a monetary position or to be exposed to volatility risks. The trader can simply open a loan, perform an arbitrage trade, and pay back the loan plus interest. One may argue that flash loans render arbitrage risk-free; the risk of smart contract vulnerabilities, however, remains.</p>
<p><strong>Arbitrage Example</strong>
On the 18th of January 2020, a flash loan borrowed 3,137.41 DAI from Aave to make an <a class="reference external" href="https://etherscan.io/tx/0x4555a69b40fa465b60406c4d23e2eb98d8aee51def21faa28bb7d2b4a73ab1a9">arbitrage trade</a> on the AMM DEX Uniswap. To prepare the arbitrage, the DAI is converted to 3,137.41 SAI using MakerDAO's <a class="reference external" href="https://etherscan.io/address/0xc73e0383f3aff3215e6f04b0331d58cecf0ab849">migration contract</a>. The arbitrage converts the SAI to 18.16 ETH on the Uniswap SAI/ETH market, and then immediately converts the 18.16 ETH back to 3,148.39 DAI on the Uniswap DAI/ETH market. After the arbitrage, 3,148.38 DAI is transferred back to Aave to repay the loan plus fees. The transaction costs 0.02 ETH of gas. Note that even though the transaction sender gains 3.29 DAI from the arbitrage, the gas cost makes this particular transaction unprofitable.</p>
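A quick back-of-the-envelope check of the example's economics (the ETH price here is our assumption, roughly the mid-January 2020 market rate; the gain and gas figures are those of the transaction above):

```python
# Figures from the arbitrage transaction discussed above.
arbitrage_gain_dai = 3.29      # DAI gained after repaying the loan plus fees
gas_cost_eth = 0.02            # transaction fee paid in ETH

# Assumed conversion rate: ETH traded around ~170 USD (~170 DAI)
# in mid-January 2020.
eth_price_dai = 170.0

gas_cost_dai = gas_cost_eth * eth_price_dai       # about 3.40 DAI
net_result_dai = arbitrage_gain_dai - gas_cost_dai

# The gas cost slightly exceeds the arbitrage gain, so the trade loses money.
print(f"net result: {net_result_dai:.2f} DAI")
```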
</div>
<div class="section" id="wash-trading">
<h3>Wash Trading</h3>
<p>Another potential flash loan use case is wash trading.</p>
<p>The trading volume of an asset is a metric indicating its trading popularity. The most popular assets, therefore, are supposed to be traded the most; Bitcoin, for example, to date enjoys the highest trading volume (reported at up to 50B USD per day) of all cryptocurrencies.</p>
<p>Malicious exchanges or traders can mislead other traders by artificially inflating the trading volume of an asset to attract interest. According to the Blockchain Transparency Institute's <a class="reference external" href="https://www.bti.live/bti-september-2019-wash-trade-report/">Market Surveillance Report</a> from September 2019, 73 of the top 100 exchanges on Coinmarketcap were wash trading over 90% of their volume. Wash trading of securities appears illegal under U.S. law.</p>
<p>While wash trading on centralised exchanges may be performed at little to no cost, and possibly even without real assets, wash trading on a DEX requires wash traders to hold and use real assets. Flash loans can remove this “obstacle”, reducing the costs to the loan interest, trading fees, and (blockchain) transaction fees. A wash trading endeavour to increase the 24-hour volume of the ETH/DAI market of Uniswap by 50% would, for instance, cost about 1,298 USD (with a flash loan from dYdX).</p>
<p><strong>Wash trading example</strong>
On March 2nd, 2020, a <a class="reference external" href="https://etherscan.io/tx/0xf65b384ebe2b7bf1e7bd06adf0daac0413defeed42fd2cc72a75385a200e1544">flash loan</a> of 0.01 ETH borrowed from dYdX performed two back-and-forth trades (first converted 0.01 ETH to 122.1898 LOOM and then converted 122.1898 LOOM back to 0.0099 ETH) on the Uniswap ETH/LOOM market. The 24-hour trading volume of the ETH/LOOM market increased by 25.8% (from 17.71 USD to 22.28 USD) as a result of the two trades.</p>
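The reported inflation figure is easy to verify from the quoted volumes (a sanity check only, using the USD volumes stated above):

```python
# 24-hour ETH/LOOM volume on Uniswap before and after the two wash trades.
volume_before_usd = 17.71
volume_after_usd = 22.28

added_volume_usd = volume_after_usd - volume_before_usd  # volume from the two trades
increase_pct = added_volume_usd / volume_before_usd * 100

print(f"volume inflated by {increase_pct:.1f}%")  # matches the reported 25.8%
```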
<p>We elaborate on two other flash loan use cases, collateral swapping and flash minting, <a class="reference external" href="https://arxiv.org/pdf/2003.03810.pdf">in our paper</a>.</p>
</div>
</div>
<div class="section" id="flash-loan-post-mortem">
<h2>Flash Loan Post-Mortem</h2>
<p>In the following we observe two adversarial flash loan trades, one <a class="reference external" href="https://etherscan.io/tx/0xb5c8bd9430b6cc87a0e2fe110ece6bf527fa4f170a4bc8cd032f768fc5219838">pump and arbitrage</a>, and one <a class="reference external" href="https://etherscan.io/tx/0x762881b07feb63c436dee38edd4ff1f7a74c33091e534af56c9f7d49b5ecac15">oracle manipulation</a> attack.</p>
<div class="section" id="pump-and-arbitrage">
<h3>Pump and Arbitrage</h3>
<p>A flash loan transaction executed on the 15th of February 2020, followed by 74 transactions, yielded a profit of 1,193.69 ETH (350k USD) at a transaction fee of 132.36 USD (a cumulative 50,237,867 gas, 0.5 ETH). We first discuss the details of this transaction, and then explain why the adversary <em>could have earned</em> a profit exceeding 829.5k USD (the boxed numbers in the figure below correspond to the optimal attack parameters we find, while the plain numbers are those chosen by the adversary).</p>
<p>The core of this trade involves a margin trade on a DEX (bZx) to increase the price of WBTC/ETH on another DEX (Uniswap), thus creating an arbitrage opportunity. The trader then borrows WBTC using ETH as collateral (on Compound) and purchases ETH at a “cheaper” price on the distorted (Uniswap) DEX market. To maximise the profit, the adversary then uses the “cheap” ETH to purchase WBTC at a non-manipulated market price over a period of two days after the flash loan, and returns the WBTC (to Compound) to redeem the ETH collateral. The following figure outlines how this trade consists of two main parts. For simplicity, we omit the conversion between WETH (the ERC20-tradable version of ETH) and ETH. The full details are outlined in <a class="reference external" href="https://arxiv.org/pdf/2003.03810.pdf">the paper</a>.</p>
<div class="figure align-left">
<img alt="Flash loan pump and arbitrage attack." class="no-border" src="https://hackingdistributed.com/images/2020/flash-loan-attack1.png" style="width: 700px;" />
<p class="caption">Flash loan pump and arbitrage attack.</p>
</div>
</div>
<div class="section" id="oracle-manipulation">
<h3>Oracle Manipulation</h3>
<p>In the following, we discuss the details of a second flash loan trade, which yields a profit of 2,381.41 ETH (c. 650k USD) within a <a class="reference external" href="https://etherscan.io/tx/0x762881b07feb63c436dee38edd4ff1f7a74c33091e534af56c9f7d49b5ecac15">single transaction</a> executed on the 18th of February 2020, at a transaction fee of 118.79 USD. We again find that the chosen attack parameters were sub-optimal and present attack parameters that would have yielded a profit of 1.1M USD instead. For this attack, the adversary makes use of three different exchanges trading the same sUSD/ETH pair (the Kyber-Uniswap reserve, Kyber, and Synthetix). Two of these exchanges (Kyber, Kyber-Uniswap) act as price oracles for the lending platform (bZx) from which the adversary borrows assets.</p>
<p><strong>Price oracle:</strong> One of the goals of the DeFi ecosystem is to not rely on trusted third parties. This premise holds both for asset custody and for additional information, such as asset pricing. One common method to determine an asset price is hence to rely on the pricing information of an on-chain DEX (e.g. Uniswap). One drawback of this approach is the danger of DEX price manipulation.</p>
<p><strong>Attack intuition:</strong> The core of this trade is an oracle manipulation using a flash loan on the asset pair sUSD/ETH. The manipulation lowers the price of sUSD/ETH (from 268.30 sUSD/ETH to 106.05 sUSD/ETH on Uniswap and 108.44 sUSD/ETH on Kyber Reserve). In a second step, the adversary benefits from this sUSD/ETH price decrease by borrowing ETH with sUSD as collateral. The full details can again be found in <a class="reference external" href="https://arxiv.org/pdf/2003.03810.pdf">the paper</a>.</p>
<div class="figure align-left">
<img alt="Flash loan oracle manipulation attack." class="no-border" src="https://hackingdistributed.com/images/2020/flash-loan-attack2.png" style="width: 700px;" />
<p class="caption">Flash loan oracle manipulation attack.</p>
</div>
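To see why a flash-loan-sized trade can distort such an oracle, consider a constant-product (Uniswap-style) pool. The sketch below uses hypothetical reserves chosen so the spot price starts at 268.30 sUSD/ETH; it ignores trading fees and is not the actual pool state of the attack:

```python
def constant_product_sell(reserve_in, reserve_out, amount_in):
    """Sell `amount_in` into a constant-product pool (x * y = k, fees ignored).
    Returns (amount_out, new_reserve_in, new_reserve_out)."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out, new_reserve_in, new_reserve_out

# Hypothetical sUSD/ETH pool with a spot price of 268.30 sUSD per ETH.
eth_reserve, susd_reserve = 1_000.0, 268_300.0
price_before = susd_reserve / eth_reserve

# A flash-loan-funded dump of 600 ETH into the pool...
susd_out, eth_reserve, susd_reserve = constant_product_sell(
    eth_reserve, susd_reserve, 600.0)
price_after = susd_reserve / eth_reserve

# ...roughly halves the oracle's quoted price, comparable in magnitude
# to the manipulation described above.
print(f"{price_before:.2f} -> {price_after:.2f} sUSD/ETH")
```

Any lending platform that reads its sUSD/ETH price from such a pool at this instant will drastically misprice collateral, which is exactly what the adversary exploits in the second step.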
</div>
</div>
<div class="section" id="optimal-defi-attack-parameter-generation">
<h2>Optimal DeFi Attack Parameter Generation</h2>
<p>It is clearly not trivial to (i) find the attack paths that the adversaries exploited, and (ii) determine the optimal parameters to exploit the attacks to the fullest. We therefore seek help from constrained optimization techniques to guide us towards optimal attack parameters.</p>
<div class="figure align-left">
<img alt="Parametrized DeFi optimizer" class="no-border" src="https://hackingdistributed.com/images/2020/flash-loan-framework.png" style="width: 700px;" />
<p class="caption">Parametrized DeFi optimizer.</p>
</div>
<p>To make use of constrained optimization, we first model the different components that may engage in a DeFi attack. We quantitatively formalize every endpoint provided by the DeFi platforms as a state transition function S' = T(S, p) with constraints C(S, p), where S is the given state, p are the parameters chosen by the adversary, and S' is the output state. The state can represent, for example, the adversarial balance or any internal status of a DeFi platform, while the constraints are set by the execution requirements of the Ethereum Virtual Machine (e.g. the Ether balance of an entity must never be negative) or by the rules defined by the respective DeFi platform (e.g. a flash loan plus loan fees must be repaid before the transaction terminates). Note that when quantifying profits, we ignore the loan interest/fee payments and Ethereum transaction fees, which are negligible in the present DeFi attacks. The constraints are enforced on the input parameters and output states to ensure that the optimizer yields parameters that are valid (for the model). We refer to the paper for the full details.</p>
<div class="section" id="optimizing-the-pump-and-arbitrage-attack">
<h3>Optimizing the Pump and Arbitrage Attack</h3>
<p>To solve the resulting constrained optimization problems, we apply the Sequential Least Squares Programming (SLSQP) algorithm from <a class="reference external" href="https://www.scipy.org/">SciPy</a>, using the minimize function of the optimize package. Our program is evaluated on an Ubuntu 18.04.2 machine with 16 CPU cores and 32 GB RAM. We repeated our experiment 1,000 times; the optimizer spent on average 6.1 ms to converge to the optimum.</p>
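As an illustration of the approach (not the actual bZx/Uniswap model from the paper), the sketch below maximizes the profit of a toy flash loan arbitrage between two constant-product pools with SciPy's SLSQP; the pool reserves and bounds are invented parameters:

```python
from scipy.optimize import minimize

def swap_out(reserve_in, reserve_out, amount_in):
    """Output of a constant-product swap (x * y = k, fees ignored)."""
    return reserve_out - (reserve_in * reserve_out) / (reserve_in + amount_in)

# Hypothetical pool states: pool A prices ETH at 200 DAI, pool B at 220 DAI.
A_ETH, A_DAI = 100.0, 20_000.0
B_ETH, B_DAI = 100.0, 22_000.0

def profit(p):
    """State transition for the strategy: flash-borrow x ETH, sell it in the
    expensive pool B, buy ETH back in the cheap pool A, repay x."""
    x = p[0]
    dai_received = swap_out(B_ETH, B_DAI, x)
    eth_back = swap_out(A_DAI, A_ETH, dai_received)
    return eth_back - x          # ETH left over after repaying the loan

result = minimize(
    lambda p: -profit(p),        # SLSQP minimizes, so negate the objective
    x0=[1.0],
    method="SLSQP",
    bounds=[(0.0, 50.0)],
    # Constraint C(S, p): the flash loan must be repayable, i.e. profit >= 0.
    constraints=[{"type": "ineq", "fun": profit}],
)

print(f"optimal loan size: {result.x[0]:.3f} ETH, profit: {-result.fun:.4f} ETH")
```

Even in this toy setting the profit is a non-linear, unimodal function of the loan size: borrowing too little leaves money on the table, borrowing too much moves the pool prices against the trade.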
<p>For the pump and arbitrage attack, the optimizer provides parameters that would yield a maximum revenue of 2,778.94 ETH, while the parameters of the original attack only yield 1,171.70 ETH. The figure above highlights the ideal amounts that should have been used in the attack. Note that, because we ignore trading fees and precision differences, there is a minor discrepancy between the original attack revenue calculated with our model and the real revenue of 1,193.69 ETH. Overall, the optimal parameters correspond to a profit of 829.5k USD, compared to the 350k USD of the attack that took place.</p>
</div>
<div class="section" id="optimizing-the-oracle-manipulation-attack">
<h3>Optimizing the Oracle Manipulation Attack</h3>
<p>We again execute our optimizer 1,000 times on the same Ubuntu 18.04.2 machine and find an average convergence time of 12.9 ms. The optimizer discovers a setting that results in 6,323.93 ETH of profit for the adversary, i.e. a gain of 1.1M USD instead of the roughly 600k USD of the attack that took place.</p>
</div>
</div>
<div class="section" id="discussing-flash-loans-and-defi">
<h2>Discussing Flash Loans and DeFi</h2>
<p>The current generation of DeFi has developed organically, without much scrutiny when it comes to financial security; it therefore presents an interesting security challenge to confront. On the one hand, DeFi welcomes innovation and the advent of new protocols, such as MakerDAO, Compound, and Uniswap. On the other hand, despite a great deal of effort spent on trying to secure smart contracts and to avoid various forms of market manipulation, there has been little to no effort to secure entire protocols.</p>
<p>As such, new DeFi protocols join the ecosystem, which leads both to exploits against the protocols themselves and to multi-step attacks that utilize several protocols (see above). In a certain poignant way, this highlights the fact that DeFi, lacking a central authority that would enforce a strong security posture, is ultimately vulnerable to a multitude of attacks effectively by design. Flash loans are merely a mechanism that accelerates these attacks: by requiring no collateral (except for the minor gas costs), they in a certain way democratize the attack, opening this strategy to the masses. It is quite likely, however, that other mechanisms will be invented that enable further, potentially even more devastating, attacks in the near future.</p>
<div class="section" id="responsible-disclosure">
<h3>Responsible disclosure</h3>
<p>It is somewhat unclear how to perform responsible disclosure within DeFi, given that the underlying vulnerability and victim are not always perfectly clear and that there is a lack of security standards to apply. We reached out to Aave, Kyber, and Uniswap to disclose the contents of this work.</p>
</div>
<div class="section" id="determining-what-is-malicious">
<h3>Determining what is malicious</h3>
<p>An interesting question remains whether we can qualify the use of flash loans as clearly malicious (or clearly benign). We believe this is a difficult question to answer and prefer to withhold a value judgement. The two attacks described are clearly malicious: the pump and arbitrage involves manipulating the WBTC/ETH price on Uniswap, and the oracle manipulation attack involves distorting a price oracle by lowering the price of ETH against sUSD on Kyber. However, the arbitrage mechanism in general is not malicious: it is merely a consequence of the decentralized nature of the DeFi ecosystem, where many exchanges and DEXs exist without much coordination with each other. As such, arbitrage will continue to exist as a phenomenon, with good and bad consequences.</p>
</div>
<div class="section" id="does-extra-capital-help">
<h3>Does extra capital help</h3>
<p>The main attraction of flash loans stems from the fact that they do not require collateral to be raised. One can, however, wonder whether extra capital would make the attacks we focus on more potent and the ROI greater. Based on our results, extra collateral <em>would not</em> increase the ROI of the two attacks presented, as the liquidity constraints of the intermediate protocols do not allow for a higher impact.</p>
</div>
<div class="section" id="potential-defenses">
<h3>Potential defenses</h3>
<p>Here we discuss several potential defenses. However, we would be the first to admit that these are not foolproof and come with potential downsides that would significantly hamper normal interactions.</p>
<ul class="simple">
<li>Should DEX accept trades coming from flash loans?</li>
<li>Should DEX accept coins from an address if the previous block did not show those funds in the address?</li>
<li>Would introducing a delay make sense, e.g. in governance voting or price oracles?</li>
<li>When designing a DeFi protocol, a single transaction should be limited in its abilities: a DEX should not allow a single transaction to trigger a slippage beyond 100%.</li>
</ul>
</div>
</div>
<div class="section" id="outlook">
<h2>Outlook</h2>
<p>In the future, we anticipate that DeFi protocols will eventually start to comply with a higher standard of security testing, both within the protocol itself and as part of integration testing into the DeFi ecosystem. We believe that this may eventually lead to some form of DeFi standard when it comes to financial security, similar to what is imposed on banks and other financial institutions in traditional, centralized (government-controlled) finance.</p>
<p>We anticipate that whole-system penetration testing and analytical approaches that model the space of possibilities, as in this work, are two ways to improve future DeFi protocols.</p>
</div>
<h1>Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation</h1>
<p>2020-02-12, by Tiancheng Xie, Jiaheng Zhang, Yupeng Zhang, Charalampos Papamanthou, and Dawn Song</p>
<p><strong>TL;DR</strong> Libra is a zero-knowledge proof protocol that achieves
extremely fast prover time and succinct proof size and verification
time. Not only does it have good complexity in terms of asymptotics, but
also its actual running time is well within the bounds of enabling
realistic applications. It can be applied in areas such as blockchain
technology and privacy-preserving smart contracts. It is currently being
implemented by Oasis Labs.</p>
<div class="section" id="introduction-and-motivation">
<h2>Introduction and Motivation</h2>
<p>Zero-knowledge proofs (ZKP) are cryptographic protocols between two
parties, a prover and a verifier, in which the prover wants to convince
the verifier about the validity of a statement without leaking any extra
information beyond the fact that the statement is true. For example, the
verifier could confirm that the prover computes some function F(w) = y
correctly, even without knowing the input w. Since they were first
introduced by Goldwasser et al.
[<a class="reference external" href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.419.8132&rep=rep1&type=pdf">1</a>],
ZKP protocols have evolved from pure theoretical constructs to practical
implementations. They have achieved proof sizes of just hundreds of
bytes and verification times of a few milliseconds, regardless of the
size of the statement being proved. Due to this successful transition to
practice, ZKP protocols have found numerous applications, not only in
the traditional computation delegation setting, but also in blockchain
settings such as providing privacy of transactions in deployed
cryptocurrencies (e.g., Zcash
[<a class="reference external" href="http://zerocash-project.org/media/pdf/zerocash-oakland2014.pdf">2</a>]).</p>
<p>A ZKP protocol must meet three criteria: completeness, soundness,
and zero-knowledge; see the figure below for an explanation.</p>
<div class="figure align-left">
<img alt="Completeness, soundness, and zero-knowledge requirements for zero-knowledge proofs." class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/zk-definition-requirements.png" style="width: 700px;" />
<p class="caption">An explanation of the formal security requirements for zero-knowledge proofs.</p>
</div>
<p>Despite such progress in practical implementations, ZKP protocols are
still notoriously hard to scale for large statements due to a
particularly high overhead on generating the proof. For most protocols,
this is primarily because the prover has to perform a large number of
cryptographic operations, such as exponentiation in an elliptic curve
group. To make things worse, the asymptotic complexity of computing
the proof is typically more than linear, e.g., O(C log C) or even
O(C log^2 C), where C is the size of the statement. Therefore, designing
ZKP protocols that enjoy linear prover time as well as succinct proof
size and verification time is an open problem.</p>
<p><strong>Resolving this problem has significant practical implications. For
example, we could generate proofs for larger statements on the blockchain
within an acceptable time bound, which could further extend the
privacy of transactions and smart contracts.</strong></p>
<p><strong>Enter Libra (not affiliated with the Libra project launched by
Facebook)</strong></p>
<p>Originally proposed in a
<a class="reference external" href="https://eprint.iacr.org/2019/317.pdf">paper</a> from early 2019, Libra
solves the problem described above with a zero-knowledge proof protocol
that has three important properties:</p>
<ol class="arabic simple">
<li><em>Optimal prover time</em>: Libra only needs time that is linear in the
statement size to generate a proof.</li>
<li><em>Succinct verification time and proof size</em>: both of the proof size
and the verification time in Libra are logarithmic in the statement
size.</li>
<li><em>Universal trusted setup</em>: Libra only needs a one-time trusted setup
to generate the public parameters which can be used for all
statements to be proved, which explains the term “universal”.</li>
</ol>
<p>The underlying protocols of Libra are an interactive proof protocol
proposed by Goldwasser, Kalai, and Rothblum, in
[<a class="reference external" href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/2008-DelegatingComputation.pdf">5</a>]
(referred to as GKR protocol), and the verifiable polynomial delegation
(VPD) scheme proposed by Zhang et al. in
[<a class="reference external" href="https://web.eecs.umich.edu/~genkin/papers/vsql.pdf">6</a>]. Libra comes
with a one-time trusted setup (not a per-statement trusted setup) that
depends only on the size of the input (witness) to the statement
being proved.</p>
<p>In the original GKR protocol, the prover could only generate a proof of
the statement without satisfying the zero-knowledge property. That means
the verifier could learn secret information of the prover from the proof
itself, which we want to avoid. In addition, in the GKR protocol, the
statement is computed over a layered arithmetic circuit (a layered
circuit with only addition and multiplication gates), and the time for
the prover to generate the proof is polynomial in the number of gates
in this circuit. This is slow if the circuit size is large.</p>
<p><strong>Libra’s contribution is solving these two problems in the GKR protocol,
and can be summarized as follows:</strong></p>
<ol class="arabic simple">
<li><strong>GKR protocol with linear prover time.</strong> Libra features a new
linear-time algorithm to generate a GKR proof. Our new algorithm does
not require any specific structure in the circuit and our result
<em>subsumes all existing improvements on the GKR prover</em> which assume
special circuit structures, such as regular circuits in
[<a class="reference external" href="https://eprint.iacr.org/2013/351.pdf">7</a>], data-parallel
circuits in [<a class="reference external" href="https://eprint.iacr.org/2013/351.pdf">7</a>,
<a class="reference external" href="http://delivery.acm.org/10.1145/3140000/3133984/p2071-wahby.pdf?ip=99.145.194.16&id=3133984&acc=CHORUS&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&__acm__=1565216461_51260ff16a050332f41c4eb86ac5b02f">8</a>],
circuits with different sub-copies in
[<a class="reference external" href="https://web.eecs.umich.edu/~genkin/papers/vram.pdf">9</a>].</li>
<li><strong>An efficient approach to turn Libra into zero-knowledge.</strong> We show
a way to mask the responses of our new linear-time prover with small
random polynomials so as to meet the zero-knowledge property. This
new zero-knowledge variant of the protocol introduces <em>minimal
overhead</em> on the verification time compared to the original
(unmasked) GKR protocol.</li>
</ol>
</div>
<div class="section" id="comparison-with-existing-zkp-protocols">
<h2>Comparison with existing ZKP protocols</h2>
<p>Table 1 shows a detailed comparison between the asymptotic complexity of
Libra and the existing ZKP protocols. A first observation is that Libra
has the best prover time among all existing protocols, indicated as row
<em>P</em> on the table. In terms of asymptotics, Libra is the only protocol
that simultaneously satisfies linear prover time, succinct verification,
and succinct proof size for structured circuits. The only other protocol
with linear prover time is Bulletproofs, whose verification time is
linear even for structured circuits. On the practical front, Bulletproofs’
prover and verifier times are high due to the large number of
cryptographic operations required for every gate of the circuit.</p>
<p>The proof size and verification time of Libra are also competitive
with those of other protocols. In asymptotic terms, our proof size is
only larger than those of libSNARK and Bulletproofs, and our verification
is only slower than that of libSNARK and libSTARK. Compared to Hyrax,
which is based on techniques similar to ours, Libra improves the
performance in all aspects, with a one-time trusted setup.</p>
<div class="figure align-left">
<img alt="Table comparing Libra to existing ZKP protocols." class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/libra-comparison-table.png" style="width: 700px;" />
<p class="caption">Comparison of Libra to existing ZKP protocols, where (G, P, V, |π|) denote the trusted setup algorithm, the prover algorithm, the verification algorithm and the proof size respectively. Also, C is the size of the circuit with depth d, and n is the size of its input.</p>
</div>
</div>
<div class="section" id="implementation-and-evaluation">
<h2>Implementation and evaluation</h2>
<p><strong>Software.</strong> We implement Libra, our new zero-knowledge proof protocol,
in C++. Our protocol provides an interface that takes as input a generic
layered arithmetic circuit and generates a zero-knowledge proof
according to the circuit and the input of the circuit. We support a
class of 512 bit unsigned integers that improve on the performance of
the GMP library in specific cases and use it together with GMP for large
numbers and field arithmetic. We use the popular cryptographic library
“ate-pairing” on a 254-bit elliptic curve for the bilinear map used in
zero-knowledge VPD. We have released the code as an open-source system
(<a class="reference external" href="https://github.com/sunblaze-ucb/Libra">https://github.com/sunblaze-ucb/Libra</a>).</p>
<p><strong>Hardware.</strong> We run all of the experiments on Amazon EC2 c5.9xlarge
instances with 70GB of RAM and an Intel Xeon Platinum 8124M CPU with 3 GHz
virtual cores. Our current implementation is not parallelized and we only
use a single CPU core in the experiments, so the reported numbers could
likely be improved further. We report the average running time of
10 executions.</p>
<p><strong>Methodology and benchmarks.</strong> We compare our protocol to existing
systems on the benchmarks below:</p>
<ol class="arabic simple">
<li><em>Matrix multiplication</em>: the prover P proves to the verifier V that it
knows two matrices whose product equals a publicly known matrix.
We evaluate on matrix sizes from 4×4 to 256×256.</li>
<li><em>Image scaling</em>: P proves to V that it correctly computes a low-resolution
image by scaling down a high-resolution image using the
classic Lanczos resampling method. We evaluate by fixing the window
size and increasing the image size from 112×112 to 1072×1072.</li>
<li><em>Merkle tree</em>: P proves to V that it knows the values of the leaves of
a Merkle tree that compute to a public root value. We use SHA-256
as the hash function, and we increase the number of leaves from 16
to 256 in the experiments.</li>
</ol>
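For intuition on the third benchmark, here is what the statement being proved computes, outside the circuit (a plain Python sketch; the in-circuit version proves knowledge of the leaves without revealing them):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over `leaves` (their number a power
    of two), using SHA-256 as in the benchmark."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        # Hash adjacent pairs to form the next level up.
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The prover claims to know 16 leaf values hashing to this public root.
leaves = [f"leaf-{i}".encode() for i in range(16)]
public_root = merkle_root(leaves)
print(public_root.hex())
```

Expressing each SHA-256 invocation as addition and multiplication gates is what makes this benchmark circuit large, which is exactly where a linear-time prover pays off.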
<p>We report the prover time, proof size and verification time in Figure 1.</p>
<div class="figure align-left">
<img alt="Graph comparing Libra to existing ZKP protocols." class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/libra-comparison-graph.png" style="width: 700px;" />
</div>
<p><strong>Results.</strong> As shown in Figure 1(a)(b)(c), the prover in Libra is the fastest among all systems in all three benchmarks we tested. Figure 1(d)(e)(f) shows the verification time: our verifier is much slower than those of libSNARK and libSTARK, which run in 1.8ms and 28-44ms respectively in all the benchmarks. Other than these two systems, the verification time of Libra is the fastest, as it grows sub-linearly with the circuit size. We report the proof size in Figure 1(g)(h)(i): our proof size is much bigger than those of libSNARK and Bulletproofs, but smaller than those of Aurora, Hyrax, Ligero, and libSTARK.</p>
<p><strong>Immediate Implementation.</strong> Tiancheng Xie and Jiaheng Zhang interned
at <a class="reference external" href="https://www.oasislabs.com/">Oasis Labs</a> in summer 2019, where they worked
on implementing Libra. The Oasis team plans to develop it further in
the future.</p>
</div>
<div class="section" id="the-next-step-after-libra-introducing-virgo">
<h2>The Next Step after Libra: Introducing Virgo!</h2>
<p>Based on the exciting results of Libra, we set out to remove one vital
limitation of our proposal, namely “the trusted setup”. In our new
follow-up project, called “Virgo”, we propose a transparent ZKP protocol
with even better prover time and verification time. The proof size
becomes larger, but it remains reasonable and still works well in practice.
Specifically, the prover time of Virgo is O(C + n log n), while both the
proof size and the verification time are O(d log C + log^2 n) for a
d-depth circuit with n inputs and C gates. Our scheme <em>uses only
lightweight cryptographic primitives</em> such as random oracles and is
post-quantum secure. <em>Our implementation of the protocol, Virgo, shows
that it takes only 50 seconds to generate a proof for a circuit computing a Merkle
tree with 256 leaves, at least an order of magnitude faster than
existing transparent schemes</em>. The verification time is 50ms, and the
proof size is 253KB, both competitive with existing transparent
zero-knowledge proof protocols.</p>
<p><strong>Summary.</strong> We have introduced Libra, a zero-knowledge proof protocol
that achieves extremely fast prover time together with a succinct proof size and
verification time. Not only does it have good asymptotic complexity,
but its actual running time is also well within the bounds
that enable realistic applications. Our cryptographic techniques can be
applied in other areas such as blockchain technology and
privacy-preserving smart contracts. To learn more about Libra, you can
read our full paper <a class="reference external" href="https://eprint.iacr.org/2019/317.pdf">here</a>. You
can view the code <a class="reference external" href="https://github.com/sunblaze-ucb/Libra">here</a>.</p>
<p>We thank Sarah Allen, IC3 Community Manager, for her help in compiling
this text.</p>
</div>
https://hackingdistributed.com/2020/02/12/libra/Liberating web data using DECO, a privacy-preserving oracle protocol2019-09-03T14:00:00Z2019-09-03T14:00:00ZFan Zhang and Steven Goldfeder and Ari Juels<p>Whether you are initiating a transfer on your bank's website, posting content on your social media account, or even viewing a public website including this very blog post, TLS is at work to make sure that all data sent between you and the web server is authentic and confidentially transmitted. But while TLS is excellent at convincing the protocol participants (i.e., <em>you</em> and the <em>web server</em> in the above example) that all data is authentic and secure, it turns out that TLS is useless if you want to convince <em>someone else</em> that you sent or received some particular data to or from a website. So while you can log into your bank account and check your balance, there's no way to use TLS to convince a third party that your bank account has a particular balance. As we will see, this is a serious limitation, and one which we address in a <a class="reference external" href="https://www.deco.works/paper.html">new paper</a>.</p>
<div class="section" id="challenges">
<h2>Challenges</h2>
<p>An <em>oracle</em> is a service that provides and vouches for the authenticity of data. In systems such as smart contract platforms that are unable to directly access the web, oracles serve as a critical link that enables these systems to make decisions based on real-world data. When an oracle is tasked with providing publicly available data (e.g., stock prices), its task is relatively straightforward. But consider an oracle that is asked to provide data about a specific user: confirm that Alice, according to an online government portal, is above a certain age, or that Bob's bank account balance is above a certain threshold. For these examples, the oracle needs to vouch for data from a TLS session that is only available when logged in as a particular user. The question then becomes: given TLS's limitations, how do you convince the oracle of the correctness and authenticity of this data?</p>
<p>You can of course give the oracle your credentials (e.g., your banking username and password) and have them log in to check your balance, but this would require a high level of trust in the oracle. For security reasons, you definitely don't want the oracle to have full <em>write</em> access to your bank account, but for privacy reasons, you probably don't even want the oracle to have full <em>read</em> access to your account. All that the oracle needs to know, in the banking example, is your account balance, and the critical question is: Can we allow the oracle to validate this information without giving it access to any additional data? DECO answers this question in the affirmative.</p>
</div>
<div class="section" id="introducing-deco">
<h2>Introducing DECO</h2>
<p>DECO is a novel privacy-preserving oracle protocol, created by students and faculty at <a class="reference external" href="https://www.initc3.org">IC3</a>.</p>
<div class="figure align-right">
<img alt="The DECO logo." class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/deco.png" style="width: 300px;" />
<p class="caption">The DECO logo: three hands holding each other, symbolizing the main component of the DECO protocol: the three-party handshake.</p>
</div>
<p>DECO is not the first system to tackle this problem, and indeed other solutions have been proposed. But DECO <strong>is</strong> the first solution that works with modern versions of TLS without requiring trusted hardware or active participation from the web server that provides the data. In DECO, <em>anyone</em> can serve as an oracle for <em>any</em> website, and strong privacy from the oracle is guaranteed. Where previous solutions have used trusted hardware, DECO relies on highly optimized <a class="reference external" href="https://en.wikipedia.org/wiki/Secure_multi-party_computation">multi-party computation</a> and <a class="reference external" href="https://en.wikipedia.org/wiki/Zero-knowledge_proof">zero-knowledge proving techniques</a> that allow the oracle to participate in a TLS session while learning only the exact data that it is trying to verify.</p>
<p>DECO has a variety of other applications as well. It turns out that even for publicly available data, DECO is useful, as it allows smart contracts to use an oracle service without even revealing to the oracle the rules that the smart contract is enforcing. In other realms, DECO is also useful for users who want to monetize their own data (and therefore prove that they are indeed providing correct data) without giving away anything but the data that they are selling.</p>
<p>In those cases where trusted hardware isn't available or can't be used, we think DECO is a leap forward for oracle technologies. We encourage you to read our <a class="reference external" href="https://www.deco.works/paper.html">new paper</a> for full technical details, and check out our <a class="reference external" href="https://www.deco.works">website</a>.</p>
<p>We're also excited about <a class="reference external" href="https://chain.link">Chainlink</a>'s plans for an initial PoC of DECO, with a focus on decentralized finance applications such as <a class="reference external" href="https://chain.link/mixicles">Mixicles</a>.</p>
<p>Stay tuned!</p>
</div>
https://hackingdistributed.com/2019/09/03/DECO/Ostraka: Blockchain Scaling by Node Sharding2019-05-27T10:00:00Z2019-05-27T10:00:00ZAlex Manuskin and Ittay Eyal<div class="figure align-right">
<img alt="https://hackingdistributed.com/images/2019-bitcoin/lachish2.jpg" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/lachish2.jpg" style="width: 200px;" />
<p class="caption">The Lachish letters. British Museum CC BY-NC-SA 4.0</p>
</div>
<p><em>Ostraka (singular: Ostrakon) are pieces of ancient broken vessels and clay pottery.
Their abundance in the ancient world made them a cheap and popular choice for writing notes and receipts, instead of the more expensive papyrus.
The text was often scratched into the pot-sherd, and many were preserved to this day.</em></p>
<div class="section" id="can-we-make-a-decentralized-system-with-cash-credit-card-scale">
<h2>Can we make a decentralized system with cash/credit-card scale?</h2>
<p>There is a lot of interest in building high-throughput blockchains.
These are useful in decentralized open settings (<a class="reference external" href="https://www.bitcoincash.org/roadmap.html">BCH</a>, <a class="reference external" href="https://github.com/ethereum/wiki/wiki/Sharding-roadmap">Ethereum</a>) as well as industry (<a class="reference external" href="https://www.hyperledger.org/about">Hyperledger</a>) and government applications (<a class="reference external" href="https://www.riksbank.se/en-gb/payments--cash/e-krona/">central-bank crypto-currency</a>).
We present here a new approach to advance towards this goal, but first, let's take a look at the state of the art.</p>
<div class="section" id="background">
<h3>Background</h3>
<p>Since Nakamoto's Bitcoin, the community has made significant progress in scaling blockchains.
New generations of protocols solve the consensus bottleneck.
Starting from <a class="reference external" href="http://hackingdistributed.com/2015/10/14/bitcoin-ng/">Bitcoin-NG</a> (implemented, e.g., in <a class="reference external" href="https://wavesplatform.com/products-blockchain">Waves-NG</a>, <a class="reference external" href="https://github.com/aeternity/protocol/blob/master/consensus/bitcoin-ng.md">Aeternity</a>, <a class="reference external" href="https://lto.network/proof-engine.html">LTO</a>), followed by <a class="reference external" href="http://hackingdistributed.com/2016/08/04/byzcoin">Byzcoin</a>, <a class="reference external" href="https://medium.com/thundercore/proof-of-stake-and-the-history-of-distributed-consensus-part-2-thunderella-d885899fceb9">Thunderella</a>; non-serial solutions such as <a class="reference external" href="https://phantom.org/">Phantom</a>; Proof-of-stake protocols such as <a class="reference external" href="https://www.cardano.org/en/ouroboros/">Ouroboros</a> and <a class="reference external" href="https://www.algorand.com/resources/white-papers">Algorand</a>; as well as other approaches such as <a class="reference external" href="https://assets.ctfassets.net/xwo28v1qbyr0/CCMPhMqQM0kKMKyyiq0sE/d55ade6e3ea5294f3fdb913647630246/avalanche-consensus.pdf">Avalanche</a>.
That's just to name a few.
Transactions can, therefore, be processed at fairly high rates -- now limited by the nodes of the system: Since each node processes all transactions, the system's rate is bounded by that of a single node.</p>
<p>One solution to blockchain scaling is layer-two solutions, namely payment channels, most prominently the <a class="reference external" href="http://www.lightning.network/">Lightning Network</a>, but also Duplex Micro-payment Channels, <a class="reference external" href="http://hackingdistributed.com/2016/12/22/scaling-bitcoin-with-secure-hardware/">Teechain</a> and <a class="reference external" href="https://plasma.io/">Plasma</a>.
However, they require either synchrony assumptions (Lightning, Plasma) or trusted hardware (Teechain).
They also require layer-one resources, as opening and closing a channel needs to happen on the blockchain.</p>
<p>Another proposed direction to ensure that participating nodes can handle the required capacity is known as <em>sharding</em>.
Sharding is the process of partitioning the system nodes into groups, where each group processes only a subset of the transactions.
<a class="reference external" href="https://www.comp.nus.edu.sg/~prateeks/papers/Elastico.pdf">Elastico</a>, <a class="reference external" href="https://eprint.iacr.org/2017/406.pdf">Omniledger</a>, <a class="reference external" href="https://eprint.iacr.org/2018/460.pdf">Rapidchain</a>, <a class="reference external" href="https://github.com/ethereum/wiki/wiki/Sharding-FAQ">Ethereum 2.0</a> and others apply this sharding approach, allowing commodity laptops to serve as nodes.
However, its security relies on the assumption that each shard has sufficiently many nodes: the sharding degree is decided at the system level, and a shard that is not allocated enough nodes becomes more vulnerable to attacks.
In addition, cross-shard transactions are a challenge, as transactions often affect multiple shards.</p>
</div>
<div class="section" id="ostraka">
<h3>Ostraka</h3>
<p>Ostraka is a new blockchain processing architecture that overcomes these challenges by taking a different approach.
We eliminate the transaction-processing bottleneck by scaling the bandwidth, processing, and storage of the nodes themselves.
In practice, we scale by implementing each node with multiple machines that operate together as a single node.
Unlike previous sharding solutions, Ostraka shards the node rather than the system.
The concept of node sharding, sometimes called local sharding, has been discussed in the community
(e.g., in this Bitcoin Cash community <a class="reference external" href="https://medium.com/@Bitcoin_ABC/sharding-bitcoin-cash-35d46b55ecfb">blog post</a>).
We design the full architecture and provide measurements and analysis, showing that it is a viable prospect for scaling blockchains into the future.</p>
<p>Note that one implication of the Ostraka architecture is that full nodes are more expensive to operate.
Hence, they are likely to be run by larger actors with special interests, such as miners, payment processors, exchanges, research institutes, and probably law enforcement agencies as well.
For such larger players, the added overhead of operating a larger node (think of a 10-30 machine rack) is not significant.
Smaller operators such as hobbyists would have to make do with a light-client protocol; however, this is already a necessity for the majority of clients, who operate a wallet on a mobile device.</p>
<p>We outline here the architecture of Ostraka, present benchmark results and discuss security.
For more details do check out our <a class="reference external" href="arxiv">technical report</a>.</p>
</div>
<div class="section" id="a-distributed-node-architecture">
<h3>A distributed node architecture</h3>
<p>Ostraka operates in the UTXO model, presented in Nakamoto's seminal Bitcoin paper (see <a class="reference external" href="https://en.wikipedia.org/wiki/Unspent_transaction_output">here</a> for background on the UTXO model).
Blockchain nodes using the UTXO model maintain three data structures:</p>
<ul class="simple">
<li>The blockchain itself, each block comprising a set of transactions;</li>
<li>The set of unspent transaction outputs, or UTXO set -- this embodies the current state of the system;</li>
<li>The mempool -- all transactions sent to the system but not yet placed in blocks.</li>
</ul>
<div class="figure">
<img alt="Split node configuration" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/Single_Node_Arrow.png" style="width: 600px;" />
</div>
<p>When sharding, Ostraka splits each of the three data structures among the servers of a node.
We shard by transaction ID, namely the unique hash of a transaction's contents.
This is the same hash used by transaction inputs to reference their source outputs.
Therefore, we can identify which shard is responsible for which transactions, e.g., by taking the most significant bits (MSB) of the transaction hash, and know which shard to access when searching for a particular UTXO.
In fact, sharding directly by MSB would make the system vulnerable, so each node instead reshuffles its transactions.
We further elaborate on the security aspects below.</p>
<p>A shard assigned to a transaction is responsible for storing it, verifying its correctness and storing its unspent outputs.
As each transaction hash is uniquely mapped to a single shard, in a node of <cite>K</cite> shards, each shard is responsible for about <cite>1/K</cite> of the transactions.</p>
<p>The operation of the shards is orchestrated by a single server called the <cite>coordinator</cite>.
The coordinator is in charge of all operations that are not directly affected by block size (<cite>O(1)</cite> operations).
It is responsible for finding other peers on the network, tracking the head of the blockchain, etc.
The coordinator in a node is connected to the coordinator of its peer nodes, and each shard in a node is connected to the shards at each of its peer nodes.
This way, shards can communicate directly with one another, without passing all information through the coordinator.</p>
<div class="figure align-right">
<img alt="Each shard requests missing outputs" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/Validate3-1.png" style="width: 300px;" />
</div>
<p>To illustrate the system's operation, we take as an example two nodes, <cite>A</cite> and <cite>B</cite>, each with two shards (<cite>A0</cite>, <cite>A1</cite>, and <cite>B0</cite>, <cite>B1</cite>).
When receiving a new block, the coordinator of node <cite>B</cite> first receives the new block's header from its peer (<cite>A</cite>'s coordinator).
It validates the correctness of the header metadata (PoW, time, etc.) and notifies its shards of the new block.
Now, the shards <cite>B0</cite> and <cite>B1</cite> of <cite>B</cite> download the appropriate transactions from their counterparts <cite>A0</cite> and <cite>A1</cite>.</p>
<p>Once these are received, full block validation can occur.
Each of the shards checks each transaction in its list to see where its inputs reside.
In a node with <cite>K</cite> shards, approximately <cite>(K-1)/K</cite> of the referenced outputs required to validate the transactions are stored on sibling shards (B's shards).
Thus, each shard requests the missing outputs from its siblings and collects them.</p>
<div class="figure align-right">
<img alt="Each shard requests missing outputs" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/Validate5-1.png" style="width: 300px;" />
</div>
<p>Next, each shard can validate its transactions, while monitoring for double spends due to local or remote transactions.
Each shard is responsible for different transactions, and any one of them can detect a double spend if an output is consumed twice, that is, requested by more than one sibling.
If some shard detects a double spend or an attempt to spend a non-existent output, it reports to the coordinator that the block is invalid.
Otherwise, once all shards report their transactions as correct, the coordinator marks the block validated and can send it to the next peer.</p>
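The double-spend check at the heart of this flow can be illustrated with a toy, centralized Python sketch. In Ostraka the counting is distributed, with each shard flagging an output requested by more than one spender, but the invariant enforced is the same. The data layout below is our own, purely for illustration:

```python
from collections import Counter

def validate_block(shards: dict[int, list[dict]]) -> bool:
    """Toy validation: `shards` maps a shard index to its list of
    transactions, each a dict with an 'inputs' list of output ids.
    A block is invalid if any output is consumed more than once."""
    consumed = Counter()
    for txs in shards.values():
        for tx in txs:
            consumed.update(tx["inputs"])
    # Any output requested by more than one spender is a double spend.
    return all(count == 1 for count in consumed.values())

shards = {0: [{"inputs": ["utxo1"]}], 1: [{"inputs": ["utxo2", "utxo1"]}]}
print(validate_block(shards))  # utxo1 is spent twice, so: False
```

In the real architecture this tally is implicit: a shard observes the same output being requested by two siblings and reports the block invalid to the coordinator.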
</div>
<div class="section" id="evaluation">
<h3>Evaluation</h3>
<p>Both a unified single-laptop blockchain node and an Ostraka 32-server node can process blocks of any size; it is only a question of how long it takes.
A long validation time per node means the whole system operates at lower throughput (e.g., more forks in a Nakamoto-style blockchain).
The interesting question is therefore how quickly processing is done, or how much processing can be done within a bounded amount of time.</p>
<p>For the benchmark we present here, we chose a time target of 10 seconds: block processing should not take longer than that.
We chose this limit arbitrarily; it lets us compare the system apples-to-apples with the number of shards as the only changing parameter.
Designers of an operational chain should choose a target time according to their specific considerations.</p>
<p>We have implemented a prototype of Ostraka based on the Go Bitcoin/Bitcoin Cash clients <a class="reference external" href="https://github.com/btcsuite/btcd">btcd</a> and <a class="reference external" href="https://github.com/gcash/bchd">bchd</a>.
We measure the time it takes for the receiving node to process a block with an increasing number of transactions.</p>
<div class="figure align-right">
<img alt="https://hackingdistributed.com/images/2019-bitcoin/varying_shards-1.png" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/varying_shards-1.png" style="width: 400px;" />
</div>
<p>As we add more shards, the Ostraka architecture demonstrates linear scaling: by doubling the number of machines in a node, it can process twice as many transactions per second, despite the dependencies among those transactions.
This is the best we could hope for.</p>
</div>
</div>
<div class="section" id="what-about-simply-adding-more-cores">
<h2>What about simply adding more cores?</h2>
<p>Current UTXO-based node implementations are already mostly parallelized.
However, they remain limited by the hardware available from CPU vendors and by the system memory available on a single machine.
These constraints follow Moore's law, which has been <a class="reference external" href="https://spectrum.ieee.org/nanoclast/semiconductors/devices/what-globalfoundries-retreat-really-means">winding down</a> in recent years.
By using multiple machines, we can break away from these constraints and use as many machines as needed.</p>
<p>There are additional benefits as well.
We distribute not only the processing effort but also the data structures.
This speeds up I/O operations and data access.
With each shard managing only a partial UTXO set, it is easier to keep all of it in fast system memory instead of on slow storage devices.
Database lookups are also faster, as there are fewer entries to search through.</p>
<p>The following image of our measurements shows how we achieve nearly perfect scaling when multiplying the number of shards by 2, 4, and 8.
Simply increasing the number of cores by the same factor achieves poorer results, and is, of course, limited by the number of cores on a CPU.</p>
<div class="figure">
<img alt="https://hackingdistributed.com/images/2019-bitcoin/shardsVsCores-error-1.png" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/shardsVsCores-error-1.png" style="width: 600px;" />
</div>
<p>We can, of course, use multiple servers with multiple cores to achieve even better performance.
With a node of 64 shards, we measured a processing capacity of 400,000 transactions per second.</p>
</div>
<div class="section" id="look-like-there-is-a-lot-of-communication-does-it-limit-performance">
<h2>Looks like there is a lot of communication -- does it limit performance?</h2>
<p>At first glance, it might seem that adding more shards would reduce performance, as more messages are required for cross-shard communication.
The latter is true, but as the number of shards grows, the number of transaction outputs each shard is responsible for decreases.
On average, in a node with <cite>K</cite> shards, each shard is responsible for <cite>1/K</cite> of the transactions and transaction outputs.
To validate a block, it must request an average of <cite>(K-1)/K</cite> of the outputs its transactions reference, i.e., <cite>((K-1)/K)(1/K)</cite> (or <cite>O(1/K)</cite>) of all referenced outputs.
Thus, although the number of messages each shard needs to send increases, the overall number of bytes each shard sends (and receives) decreases linearly with the number of shards.
In addition, since shards are in close physical proximity and connected by a high-bandwidth network, inter-shard communication is cheap.</p>
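A few lines of Python make the communication argument concrete; the function name is ours, for illustration only:

```python
def per_shard_request_fraction(k: int) -> float:
    """Fraction of all referenced outputs one shard must request from its
    siblings: it holds 1/k of the transactions, and on average (k-1)/k of
    their referenced outputs live on other shards."""
    return ((k - 1) / k) * (1 / k)

for k in (2, 4, 8, 16):
    # The per-shard share of communication shrinks roughly as 1/k.
    print(k, per_shard_request_fraction(k))
```

So even though a shard talks to more siblings as `k` grows, the data it moves per block shrinks.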
<div class="section" id="scaling-without-undermining-security">
<h3>Scaling without undermining security</h3>
<p>Ostraka is a node architecture that can be incorporated into existing blockchain protocols.
We thus need to make sure that the change to the node architecture does not harm the underlying protocol.
To do that, we first show in our report that the unified architecture and Ostraka are logically indistinguishable.
Specifically, we show that despite changes to the node structure and validation details, an Ostraka node performs the same steps of block validation as a unified node does.</p>
<p>Preventing denial of service is trickier.
For an Ostraka node to complete block validation quickly, all shards must work in parallel to accomplish the job.
If one shard takes too long to validate its share, the entire process is slowed down and the benefits of the distributed architecture are forfeited.
To exploit this, an attacker could try to create a block in which all transactions map to a single shard.</p>
<p>To prevent such attacks, Ostraka nodes shuffle their transactions.
Each node picks a unique <cite>salt</cite> value and uses it to locally reshuffle the transactions among its shards.
Technically, instead of sharding directly by transaction ID, the transaction hash is hashed again with the salt to form the sharding index.
Because each node thus has a unique transaction distribution, the probability that an attacker can craft such a malicious block drops exponentially with the number of nodes it attacks.</p>
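The salted sharding index might look as follows in Python; the hash choice and function name are illustrative assumptions, not Ostraka's actual implementation:

```python
import hashlib

def shard_index(tx_hash: bytes, salt: bytes, num_shards: int) -> int:
    """Per-node salted sharding: rehash the transaction hash with a
    node-local salt, so an attacker cannot precompute a block whose
    transactions all land on one shard of every node."""
    digest = hashlib.sha256(salt + tx_hash).digest()
    return int.from_bytes(digest, "big") % num_shards
```

Two nodes with different salts assign the same transaction to different shards, so a block crafted to overload one shard of one node is spread evenly on all the others.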
</div>
<div class="section" id="conclusion">
<h3>Conclusion</h3>
<p>Blockchains provide an open and decentralized payment system accessible to anyone with an internet connection.
There is wide interest in making the technology able to handle the demand of anyone who wants to participate.</p>
<p>Ostraka removes the bottlenecks of the individual node of a blockchain system, without undermining the system's security.
It is not a new consensus protocol but rather an architecture to enable protocols to reach their full potential.
We see Ostraka as another piece of the puzzle for building the next generation of blockchains, capable of handling the demand for a practical decentralized global payment system.</p>
</div>
</div>
https://hackingdistributed.com/2019/05/27/ostraka/On Stablecoins and Beauty Pageants2019-05-07T09:30:00Z2019-05-07T09:30:00ZAmani Moin and Kevin Sekniqi<div class="figure align-right">
<img alt="Ms South Carolina aka Like Such As Like Such As" class="no-border" src="https://hackingdistributed.com/images/2019-bitcoin/like-such-as.jpg" style="width: 300px;" />
<p class="caption">Beauty pageants are not a great way to get coherent answers to hard questions. Or, like, such as, easy ones.</p>
</div>
<p>In this post, we identify a problem that plagues many stablecoin implementations: incorporating price information.</p>
<p>Stablecoins have recently emerged as a new class of crypto-assets designed to have low volatility relative to some other asset, such as the US dollar. Usually, this is done by pegging the value of the cryptocurrency. In the case of a peg to the USD, the cryptocurrency is meant to trade at a price at or near $1. This typically involves some algorithmic processes that depend on how the current price of the coin differs from the targeted peg.</p>
<p>But how does the contract “know” what the price is? Some current implementations depend on an external price feed, known as an oracle. But oracles are considered undesirable because it’s difficult to know which feeds can be trusted, and selecting a set of trusted feeds introduces centralization. This has prompted many researchers to explore solutions where users vote on the current price.</p>
<p>We argue that approaches based on users voting on the current price, commonly used in various algorithmic stablecoin designs, are broken.</p>
<div class="section" id="background">
<h2>Background</h2>
<p>In the proposed implementations, currency holders vote on the current price of the stablecoin. Stablecoin prices are then modulated by adjusting the supply of the coin when it moves away from $1. Typically, when the price of the stablecoin is too low, a secondary coin is auctioned and the proceeds are burned to contract the supply and raise the price. The secondary token has value because when the price is too high, coins are minted to the holders of the secondary token.</p>
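The supply-adjustment policy described above can be written down in a few lines; the thresholds and action names here are illustrative, not taken from any particular stablecoin design:

```python
def supply_action(price: float, peg: float = 1.0, band: float = 0.01) -> str:
    """Toy seigniorage-style policy: contract supply when the stablecoin
    trades below the peg, expand it when above, do nothing inside a
    small tolerance band. The band width is a made-up parameter."""
    if price < peg - band:
        return "auction-secondary-and-burn"   # contract supply, raise price
    if price > peg + band:
        return "mint-to-secondary-holders"    # expand supply, lower price
    return "hold"

print(supply_action(0.95))  # auction-secondary-and-burn
print(supply_action(1.05))  # mint-to-secondary-holders
```

Every branch of this policy depends on `price` -- which is exactly the input the voting mechanism is supposed to supply, and where the incentive problems below arise.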
<p>This creates problematic incentives when it comes to voting. Suppose the price of the currency is trading above $1. Participants could dutifully report the truth, and trigger the mechanism that dilutes the coin to reduce its price. But this would result in a net loss for them. Instead, they have an incentive to report a price that is lower than the truth, so there is less of the currency put into circulation. It is in the best interest of the participants to falsely claim that the price is still $1, or even lower. Such false reporting will lead to a short-term gain for the current holders, since no new coins will be printed, enabling the current holders to liquidate their coins above the targeted peg value. In the longer term, false reporting might lead to an unstable dynamic, where participants report a false value and drive up the prices, only to be followed by people realizing that there is a bubble and deciding to dump their coins. Since such a dynamic is obviously not desirable for a stablecoin, existing coins have proposed a simple mechanism to discourage this kind of gaming of the system.</p>
<p>The predominant solution, proposed for both Basis and Carbon, is to slash the funds of anyone who votes below the 25th or above the 75th percentile. On the surface, this may seem like a good way to encourage truth-telling, but it actually reinforces the incentives to game the system. A rational, self-interested actor will not necessarily report the truth; instead, they will report whatever they think everyone else will say is the true value, because they don’t want to be penalized. This leads to a dynamic known as a <a class="reference external" href="https://en.wikipedia.org/wiki/Keynesian_beauty_contest">Keynesian beauty pageant</a>.</p>
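A toy simulation shows why slashing the tails rewards herding rather than honesty. The percentile computation below uses a simple nearest-rank rule, which is our assumption for illustration; the designs in question do not pin one down:

```python
def slashed(votes: list[float], my_vote: float) -> bool:
    """A voter is slashed when their vote falls strictly below the 25th
    or strictly above the 75th percentile of all votes (nearest-rank
    percentiles; a hypothetical choice for this sketch)."""
    ordered = sorted(votes)
    n = len(ordered)
    p25 = ordered[max(0, int(0.25 * n) - 1)]
    p75 = ordered[min(n - 1, int(0.75 * n))]
    return my_vote < p25 or my_vote > p75

# If most holders coordinate on reporting $1.00, a lone truthful $1.05
# vote lands above the 75th percentile and is slashed:
votes = [1.00] * 9 + [1.05]
print(slashed(votes, 1.05))  # True
print(slashed(votes, 1.00))  # False
```

The truthful voter loses funds while every strategic voter is safe, so reporting the herd's value dominates reporting the truth.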
<p>Suppose the objective true price is $1.05. As discussed above, holders of the token have an incentive not to report the price truthfully, because doing so would dilute their holdings. Everyone who realizes this should prefer to report a lower price. Usually, it is difficult for many independent actors to coordinate on an untruthful equilibrium. However, since the coin is designed to trade at $1, that price serves as a second equilibrium that is easy to coordinate on. Furthermore, lazy voters should choose $1 as their response because it should always be in the vicinity of the truth. Once people realize that others will act according to these incentives, they, too, should report an incorrect price to avoid being slashed. This is especially the case if there is a coordinating mechanism across enough users, such as social media: if someone who has many followers and holds a lot of the stablecoin says they will vote for an incorrect price, it is irrational for a smaller holder to vote otherwise.</p>
<p>Exacerbating this problem is that crypto markets are so volatile that even if you could somehow elicit truthful answers from everyone (so everyone reported what they really thought the exchange rate is instead of voting strategically) people still might report the wrong price. People would most likely consult some price feed and exchange that may have stale prices or invented volume, and use that as the price. In this case, even with everyone voting as accurately as they can, this is strictly worse than using a price feed because there will be some lag between when prices appear on a feed and when users vote.</p>
<p>Wisdom of the crowds is a sensible decision making paradigm for applications that require pooled public knowledge. However, as is currently being done, crowd-sourcing of pricing data feeds is fundamentally broken for stablecoins because the incentives are misaligned. Stablecoins that rely on this fundamental mechanism should be considered broken.</p>
</div>
https://hackingdistributed.com/2019/05/07/stablecoins-and-beauty-pageants/Decentralize Your Secrets with CHURP2019-04-05T09:00:00Z2019-04-05T09:00:00ZDeepak Maram and Fan Zhang and Ari Juels<p>At the heart of decentralized systems today is a demoralizing irony. Vast resources---intellect, equipment, and energy---go into avoiding centralized control and creating "trustless" systems like Bitcoin.
But hapless users then defeat the whole purpose of these systems by handing over their private keys to <strong>centralized</strong> entities. Or worse still, they lose their keys, sending about <strong>$14 billion</strong> in Bitcoin into a black hole (according to an <a class="reference external" href="http://fortune.com/2017/11/25/lost-bitcoins/">estimate</a> made two years ago).</p>
<p>But who can blame them? Even experts find key management hard. That's why, decades after its invention, public-key cryptography is rarely used for things as simple as e-mail encryption.</p>
<p>It's also why the hapless cryptocurrency users in question aren't just "they." They're also "we." Even at IC3, some of us shamefacedly use centralized exchanges instead of managing our own keys.</p>
<p>Wouldn't it be nice if <em>true decentralization</em> were within the grasp of ordinary users, if there were a system that: (1) made user key management easier and (2) was itself actually decentralized? And, even better, if it could: (3) manage keys for entities that can't store secrets, such as smart contracts?</p>
<p>These are the goals of <a class="reference external" href="https://www.churp.io">CHURP</a> (CHUrn-Robust Proactivization), a new system we've developed at IC3 and presented in a <a class="reference external" href="https://eprint.iacr.org/2019/017.pdf">paper</a>. CHURP is an <a class="reference external" href="https://github.com/CHURPTeam/CHURP">open-source</a> project already in plans for further development and adoption at <a class="reference external" href="https://www.oasislabs.com/">Oasis Labs</a>, and we hope to see it used in many other places.</p>
<div class="section" id="committees-here-today-gone-tomorrow">
<h2>Committees: Here today, gone tomorrow</h2>
<p>Decentralized key management, like single-user key management, is easy in principle. A <em>committee</em> of <em>n</em> nodes can hold a private key <em>SK</em> that is distributed using <a class="reference external" href="https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing">(t,n)-secret sharing</a>.
An adversary must then corrupt <em>t+1</em> nodes in order to steal <em>SK</em>. This creates a strong obstacle to compromise.</p>
<p>Any <em>t+1</em> nodes can act in concert to access <em>SK</em>, ensuring high uptime. In fact, by means of <em>threshold cryptography</em>, nodes can perform operations with <em>SK</em> using individual shares, never explicitly reassembling and exposing <em>SK</em>.
The figure below shows a (2,5)-secret sharing. Each of the five nodes (A1, A2, ..., A5) has a share (the small blue square) of the secret (the orange oval). Any three nodes can jointly compute the secret. The idea is similar to <em>multisig</em> addresses in cryptocurrency.</p>
<div class="figure">
<img alt="(2, 5)-secret sharing" class="no-border" src="https://hackingdistributed.com/images/2019-04-05-CHURP/churp1.png" style="width: 400px;" />
<p class="caption">An example of (2, 5)-Shamir secret sharing.</p>
</div>
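<p>To make this concrete, here is a minimal, illustrative sketch of Shamir secret sharing in Python (toy parameters, not CHURP's production code): a random degree-<em>t</em> polynomial hides the secret in its constant term, each node receives one evaluation, and any <em>t+1</em> shares recover the secret by Lagrange interpolation at zero.</p>

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the shares and evaluate at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, -1, P) is the modular inverse (Python 3.8+)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42, t=2, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 42
assert reconstruct(shares[:2]) != 42   # 2 shares reveal nothing (w.h.p.)
```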
<p>A number of decentralized systems---e.g., Enigma, Calypso, and Coconut---make use of such key management via committees. It's a compelling option, but there are some lurking problems.</p>
<p>The first problem is the risk of <em>mobile adversaries</em>. It may be hard for an adversary to compromise <em>t+1</em> nodes in a short space of time.
If a static set of nodes stores <em>SK</em> indefinitely, though, an adversary can attack nodes <em>gradually</em>, ultimately corrupting the <em>t+1</em> it needs to learn <em>SK</em>.</p>
<p>The classical countermeasure to mobile adversaries, devised in the 1990s, is known as <em>proactivization</em>. Nodes periodically <em>refresh</em> their shares, i.e., perform a fresh secret-sharing of <em>SK</em>. If proactivization happens on, say, a daily basis, then the shares that an adversary compromised yesterday no longer count today. In our (2,5)-secret sharing example, if the adversary compromised 2 shares yesterday and 2 today, despite having 4 shares, she can't reassemble <em>SK</em>. Yesterday's shares aren't compatible with today's.
Proactivization forces an adversary not just to compromise <em>t+1</em> nodes, but to do so quickly, before a refresh happens.</p>
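<p>The refresh step can be sketched as adding a fresh random polynomial with constant term zero to the old sharing: the secret is unchanged, but shares from different epochs no longer interpolate together. This is a simplified illustration, not CHURP's actual protocol.</p>

```python
import random

P = 2**127 - 1  # prime field modulus

def poly_eval(coeffs, x):
    return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: poly_eval(coeffs, i) for i in range(1, n + 1)}

def refresh(shares, t):
    """Add a random degree-t polynomial with constant term 0 to every share."""
    blind = [0] + [random.randrange(P) for _ in range(t)]
    return {i: (s + poly_eval(blind, i)) % P for i, s in shares.items()}

def reconstruct(points):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in points.items():
        lam = 1
        for xj in points:
            if xj != xi:
                lam = lam * (-xj) * pow(xi - xj, -1, P) % P
        secret = (secret + yi * lam) % P
    return secret

old = share(42, t=2, n=5)
new = refresh(old, t=2)

assert reconstruct({i: new[i] for i in (1, 2, 3)}) == 42  # refreshed shares still encode SK
mixed = {1: old[1], 2: old[2], 3: new[3]}                 # 2 old + 1 new share
assert reconstruct(mixed) != 42                           # epochs don't mix (w.h.p.)
```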
<p>There's a second big problem, though---particularly in decentralized systems. <em>Nodes may enter and leave the system.</em> Thus the set of nodes in a committee may change over time, i.e., they're subject to <em>churn</em>.</p>
<p>Churn is a problem that existing secret-sharing schemes---even proactive ones---simply don't solve.</p>
</div>
<div class="section" id="churn-is-a-challenge">
<h2>Churn is a Challenge</h2>
<p>In order to understand why committee churn is not an easy problem, let's consider a naive strategy for handling it.</p>
<div class="figure">
<img alt="Handoff between two committees" class="no-border" src="https://hackingdistributed.com/images/2019-04-05-CHURP/churp2.png" style="width: 500px;" />
<p class="caption">Handoff between two committees.</p>
</div>
<p>The figure above shows two committees---equal-sized old and new committees. Due to churn, some nodes in the old committee leave (A2 and A3), while new nodes replace them (B2 and B3).
For the purpose of this example, assume that both the committees use (2,5)-secret sharing for some secret <em>SK</em>. (2,5)-secret sharing is meant to protect against compromise of two nodes.
So let's assume that a mobile adversary can control two nodes in <em>each of the old and new committees.</em></p>
<p>A naive strategy might directly transfer shares between the old nodes and the corresponding new ones that replace them.
In particular, in the above example, node A2 could give its share to node B2 before leaving, while node A3 could give its share to node B3. But this quickly falls apart in the face of a mobile adversary.
This adversary could corrupt nodes A1 and A2 in the old committee and B2 and B3 in the new committee. Thus the adversary learns a new share through node B3.
The adversary thus learns 3 shares in total. Since we're using a (2,5)-secret sharing, she thus learns <em>SK</em>, breaking the system. <a class="footnote-reference" href="#id3" id="id1">[1]</a></p>
</div>
<div class="section" id="churp-comes-to-rescue">
<h2>CHURP Comes to the Rescue</h2>
<p>In a nutshell, CHURP is a proactive secret-sharing system that solves the above problem, and handles committee churn securely. It's not the first system to do this, but it's the first practical one.</p>
<p>The key innovation in CHURP is something called <em>dimension-switching</em>. Suppose, in our example above, it were somehow possible to switch temporarily from a (2,5)-sharing of <em>SK</em> to a (4,5)-sharing during the handoff from the old committee to a new one. Then, despite being able to learn 3 shares, the adversary would not learn <em>SK</em>.</p>
<p>Dimension-switching essentially "dilutes" the secret shares thus preventing leakage despite the adversary learning more during the handoff.
CHURP uses bivariate polynomials (two dimensional polynomials) to share the secret.
Switching from (2,5)-sharing to (4,5)-sharing can be achieved by switching between the two dimensions of the bivariate polynomial.
For more details of our construction, please refer to the <a class="reference external" href="https://eprint.iacr.org/2019/017.pdf">full paper</a>.</p>
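<p>The following toy sketch (our own simplification, not the construction from the paper) hints at why a bivariate polynomial enables dimension-switching: sharing along the degree-<em>t</em> dimension gives a (2,5)-sharing of the secret <em>B(0,0)</em>, while sharing along the degree-<em>2t</em> dimension gives a (4,5)-sharing of the same secret.</p>

```python
import random

P = 2**127 - 1  # prime field modulus
T, N = 2, 5
SECRET = 42

def interp_at_zero(points):
    """Lagrange-interpolate the points and evaluate at 0 (mod P)."""
    acc = 0
    for xi, yi in points.items():
        lam = 1
        for xj in points:
            if xj != xi:
                lam = lam * (-xj) * pow(xi - xj, -1, P) % P
        acc = (acc + yi * lam) % P
    return acc

# Random bivariate polynomial B(x, y): degree <= t in x, degree <= 2t in y,
# with B(0, 0) = SECRET.
coeffs = [[random.randrange(P) for _ in range(2 * T + 1)] for _ in range(T + 1)]
coeffs[0][0] = SECRET

def B(x, y):
    return sum(coeffs[j][k] * pow(x, j, P) * pow(y, k, P)
               for j in range(T + 1) for k in range(2 * T + 1)) % P

x_shares = {i: B(i, 0) for i in range(1, N + 1)}  # (t, n) = (2, 5) sharing
y_shares = {i: B(0, i) for i in range(1, N + 1)}  # (2t, n) = (4, 5) sharing

assert interp_at_zero({i: x_shares[i] for i in (1, 2, 3)}) == SECRET     # 3 suffice
assert interp_at_zero(y_shares) == SECRET                                # all 5 work...
assert interp_at_zero({i: y_shares[i] for i in (1, 2, 3, 4)}) != SECRET  # ...4 don't (w.h.p.)
```

During a handoff, holding only 3 shares of the degree-<em>2t</em> sharing reveals nothing, which is why the temporary switch blunts the combined old-plus-new-committee adversary.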
<p>Another key innovation in CHURP is a tiered protocol that achieves high performance and strong robustness simultaneously. By default, CHURP uses an <em>optimistic</em> path.
It assumes that <em>all</em> nodes execute the specified protocol correctly. In this case CHURP is highly efficient.
If any node cheats (e.g., it sends malformed messages), however, CHURP can efficiently detect the fact and then switch to an alternative, <em>pessimistic</em> execution path.
In this case, the protocol runs slower but is resilient to cheating players.
The optimistic path in CHURP is especially communication-efficient.
The best known protocol prior to CHURP [<a class="reference external" href="http://www.pmg.lcs.mit.edu/papers/a34-schultz.pdf">Schultz07</a>] incurs 5 GB of network bandwidth for a 100-node committee.
By comparison, CHURP (optimistic path) incurs only 2 MB---a <strong>2300x</strong> improvement! In fact, even the pessimistic path of CHURP performs better than any previously known protocol.</p>
<p>CHURP has some other bells and whistles. For example, it uses a trusted setup phase, as required by a special commitment scheme [<a class="reference external" href="https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf">Kate10</a>] that helps keep communication costs low.
But even if this trusted setup is compromised, CHURP remains secure.
The innovation here is a <em>hedge</em>---an additional verification step that detects compromised trusted setup and switches to a secondary
pessimistic path that avoids the vulnerable commitment scheme, at the cost of some additional slowdown.</p>
<p>Despite the technical intricacy, using CHURP in your project is easy. At a high level, CHURP provides a concise API that enables periodic committee rotation without changing the secret. We strongly encourage you to check out the <a class="reference external" href="https://github.com/CHURPTeam/CHURP">code</a> and play with the demo.</p>
</div>
<div class="section" id="lots-of-applications">
<h2>Lots of Applications</h2>
<p>Blockchain systems, by nature, cannot store private data. The ability of CHURP to store and manage private keys through <em>dynamic</em> committees enables interesting applications without introducing centralization.
Below, we briefly enumerate a few of the most important potential applications of CHURP.</p>
<p>1) <em>Cryptocurrency Management:</em> Rather than relying on centralized exchanges to store private keys on behalf of users, or using hardware or software wallets,
which are notoriously <a class="reference external" href="https://www.inc.com/yazin-akkawi/bitcoins-biggest-challenge-boils-down-to-two-letters-ux.html">difficult</a> to manage, users could instead store their private keys with committees.
These committees could authenticate users and enforce access-control, resulting in the decentralized equivalent of today's exchanges.</p>
<div class="figure">
<img alt="Cryptocurrency" class="no-border" src="https://hackingdistributed.com/images/2019-04-05-CHURP/churp3.jpg" style="width: 200px;" />
</div>
<p>2) <em>Decentralized Identity:</em> Initiatives such as the <a class="reference external" href="https://identity.foundation/">Decentralized Identity Foundation</a>, which is backed by a number of major IT and services firms,
envision an ecosystem in which users control their identities and data by means of private keys.
Who will store these keys and how is an <a class="reference external" href="https://medium.com/uport/the-basics-of-decentralized-identity-d1ff01f15df1">open question</a>. The same techniques used for private key management would similarly apply to assets such as identities.</p>
<div class="figure">
<img alt="Identity" class="no-border" src="https://hackingdistributed.com/images/2019-04-05-CHURP/churp4.jpg" style="width: 200px;" />
</div>
<p>3) <em>Smart-contract attestations:</em> CHURP could augment smart contracts with confidential state, allowing them to, e.g., produce attestations regarding blockchain state change.
Such signing would be of particular benefit in creating a simple smart-contract interface with <em>off-chain</em> systems.
For example, control of Internet-of-Things (IoT) devices is a commonly proposed application of smart contracts
(<a class="reference external" href="https://bitcoinmagazine.com/articles/slock-it-to-introduce-smart-locks-linked-to-smart-ethereum-contracts-decentralize-the-sharing-economy-1446746719/">smart locks</a> being a notable early example).
If smart contracts cannot generate digital signatures, then the devices they control must monitor a blockchain, a resource-intensive operation <a class="reference external" href="https://blog.slock.it/slock-it-iot-layer-f305601df963">infeasible</a> for IoT devices.
A smart contract that can generate a digital signature, however, can simply issue authenticable commands to target devices.</p>
<div class="figure">
<img alt="Smart Lock" class="no-border" src="https://hackingdistributed.com/images/2019-04-05-CHURP/churp5.png" style="width: 200px;" />
</div>
<p>If you are interested in learning more about CHURP, please check out our <a class="reference external" href="https://www.churp.io">website</a>, <a class="reference external" href="https://github.com/CHURPTeam/CHURP">code</a>, or even the <a class="reference external" href="https://eprint.iacr.org/2019/017.pdf">paper</a>, co-authored with Lun Wang, Andrew Low, Yupeng Zhang, and Dawn Song, all of UC Berkeley.
We are excited to hear about any challenging use-cases for CHURP you might have!</p>
<table class="docutils footnote" frame="void" id="id3" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[1]</a></td><td>There are other issues with this naive strategy such as the assumption of equal sized committees and that all nodes stay alive till the new replacing nodes join. We don't make any such assumptions in the actual protocol.</td></tr>
</tbody>
</table>
</div>
https://hackingdistributed.com/2019/04/05/CHURP/The Old Fee Market is Broken, Long Live the New Fee Market2019-01-22T11:31:00Z2019-01-22T11:31:00ZSoumya Basu and David Easley and Maureen O'Hara and Emin Gün Sirer<div class="figure align-right">
<img alt="What the wrong fee mechanism does to your coin." class="no-border" src="https://hackingdistributed.com/images/feesonusability.jpg" style="width: 400px;" />
</div>
<p>Almost all cryptocurrencies today require their users to attach fees to their transactions <a class="footnote-reference" href="#id3" id="id1">[1]</a>.
The miners then collate transactions paying the highest fees into the blockchain,
and derive an income stream. This mechanism is superficially appealing, and has led some to
push hard for a blockchain vision, dubbed the "<a class="reference external" href="https://www.ssrn.com/abstract=3055380">fee market</a>," driven almost entirely
by such fees.</p>
<p>Much of the BTC/BCH split stemmed from a difference of vision around this central point, where BTC developers wanted to steer Bitcoin away from a reliance on block rewards and towards higher transaction fees. In contrast, BCH developers reacted strongly to high fees and wanted to keep the economics of miner compensation centered primarily around block rewards. Both sides make good points: block rewards are not sustainable, because the number of coins outstanding is fixed and therefore the rewards must diminish over time. At the same time, high fees lead to terrible user experiences, where some users paid as much as $55 to send transactions, while others complained bitterly of stuck transactions.</p>
<p>In this post, we describe why the predominant fee paradigm used in cryptocurrencies is broken. We describe the reasons why the so-called "fee market" will not lead to a stable, predictable user experience.</p>
<p>In addition, we provide an alternative way to charge fees that yields a sensible fee market that has more stable, and therefore more predictable, pricing. This novel mechanism also provides lower variance, and therefore higher predictability of returns, for miners.</p>
<p>An analysis of how our mechanism would behave in Bitcoin shows that it could have saved users over $272 million over December 2017, and could have reduced the variance of miners' fee revenues by a factor of 7.4.</p>
<div class="section" id="problem-with-the-current-fee-mechanism">
<h2>Problem with the Current Fee Mechanism</h2>
<p>Under the current fee paradigm, a user wishing to submit a transaction must figure out an appropriate fee.
This seems straightforward, but it turns out to be a very difficult task.</p>
</div>
<div class="section" id="fees-and-cognitive-load">
<h2>Fees and Cognitive Load</h2>
<p>The first problem is the cognitive load on the user: it is difficult to decide exactly how much
to bid, and whether to go over or under the going rate. If a transaction is worth X mBTC to the user, then her
bid is capped at X. But between 0 and X, there are many
options, and picking the right one depends on a lot of other factors. Exactly how important is this transaction to me
right now? How full is the mempool? What are the competing bids? Given that there will be, in expectation, another 10
minutes of transactions streaming in to the miners before the next block is discovered, how low a fee can she get away with while still getting her transaction included in the blockchain? These are clearly difficult questions. Doing the right thing involves watching the chain closely and monitoring the transaction until it is included, an act that detracts from the hassle-free use of one's money. One may be tempted to just go through a middleman, such as an exchange, to handle it all, which creates centralization and simply recreates the current banking system except with unregulated exchanges as centralized custodians. This has, historically, led to a stream of exit scams and other SFYL-events <a class="footnote-reference" href="#id4" id="id2">[2]</a>.</p>
</div>
<div class="section" id="broken-attempts">
<h2>Broken Attempts</h2>
<p>But the real problem is much more fundamental, and it stems from the fact that the fee mechanism in Bitcoin and other
currencies is implemented as a pay-what-you-bid, or multi-unit first-price, auction. This design leads to "sticky" and unnecessarily high fees, followed by sudden fee collapses, just as we have seen over the course of the last few years.</p>
<p>To see why, imagine a universe where everyone is using a simple historical fee estimator.
Specifically, imagine a fee estimator that looks back on historical transactions and suggests a fee based on what happened on average in the past.
If transactions were paying an average of 100 satoshis per byte in the recent past, then a user will simply attach a fee of 100 or more satoshis per byte.</p>
<p>This approach is completely broken. During times of congestion, the fees to get included will naturally go up, as they should. If the transactions momentarily arrive faster than blocks are found, the fees attached will go up. But <em>they will remain high even after the congestion has ended</em>. If, for instance, the rate of arrival for transactions is exactly equal to the rate at which blocks clear them, the system should be able to support 0-fees. And yet this approach will force the users to pay fees as if they were operating at the height of congestion. The fee structure that arises during congestion is ensconced in the system even though the conditions have changed, an artifact of bad mechanism design.</p>
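<p>A toy simulation makes the stickiness visible. Assume (purely for illustration) that every user bids the average fee of the last ten blocks, escalating by 50% while blocks are congested:</p>

```python
def next_fee(history, congested):
    """Naive estimator: bid the average of the last ten fees,
    escalating by 50% while blocks are congested (our toy assumption)."""
    window = history[-10:]
    avg = sum(window) / len(window)
    return avg * 1.5 if congested else avg

fees = [1.0]                   # starting fee level
for r in range(50):
    congested = 10 <= r < 20   # a ten-round burst of demand
    fees.append(next_fee(fees, congested))

print(f"fee before congestion: {fees[9]:.2f}")
print(f"fee at the peak:       {fees[20]:.2f}")
print(f"fee 30 rounds later:   {fees[-1]:.2f}")  # still well above the starting level
```

The fee never returns to its pre-congestion level, even though demand does: the estimator keeps averaging its own inflated history.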
<p>One can imagine other fee estimation heuristics, such as underbidding on purpose to explore if one can get away with paying less, that would yield better results. But it all is up to the vagaries of <strong>other people's choices of fee estimators</strong> that would determine the prices. A smart user who notices that the fee estimators are broken would have little recourse except to pay the prevailing fees. The system would provide no mechanism by which optimal choices are made. And of course, the mavericks who do try to explore paying lower fees, just to see if the entire ensemble of users could move to a lower price point, will have to deal with stuck transactions and transaction delays.</p>
<p>Blockchains are not the only place where similar auction mechanisms have led to poor user experiences.
When ad placement in search engines was based on first price auctions, researchers observed similar patterns.
Advertisers would compete with each other in order to get a better placement, driving the fees sky high.
This would be followed by many participants quitting the game, which would cause a precipitous drop,
only to restart the unstable cycle all over again.</p>
</div>
<div class="section" id="a-better-fee-market">
<h2>A Better Fee Market</h2>
<p>In a <a class="reference external" href="https://www.ssrn.com/abstract=3318327">new paper</a>, we propose a new mechanism to charge for transactions. This mechanism is
only a slight code change away from the old mechanism, but it has the potential to yield a much more
stable fee market, a much better user experience as well as a big savings in fees,
and a more predictable revenue stream for miners.</p>
<p>Our proposed mechanism is fairly simple: transactions specify a fee, just like before, and miners place transactions in a block, just like before. Except, instead of charging each transaction the fee it bid, we charge each transaction the lowest fee charged to any transaction in that block. Any surplus fees a transaction offered are returned to that user, to a designated address they specify. Hence, a transaction in effect says "I am willing to pay up to $30 for this transaction," but is charged only $5 if the lowest fee transaction in that block paid $5. The remaining $25 are returned to the user.</p>
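<p>The accounting is simple enough to sketch in a few lines of Python (an illustration of the rule above, not a reference implementation):</p>

```python
def clear_block(bids, block_size):
    """Include the highest bids; charge everyone the lowest included bid.

    `bids` maps transaction id -> maximum fee the sender is willing to pay.
    Returns (clearing fee per included tx, refund per included tx).
    """
    included = sorted(bids, key=bids.get, reverse=True)[:block_size]
    clearing_fee = min(bids[tx] for tx in included)
    refunds = {tx: bids[tx] - clearing_fee for tx in included}
    return clearing_fee, refunds

# The $30 transaction from the example above is charged only $5.
bids = {"a": 30, "b": 12, "c": 5, "d": 2}
fee, refunds = clear_block(bids, block_size=3)
assert fee == 5                             # lowest included bid sets the price
assert refunds == {"a": 25, "b": 7, "c": 0}
```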
<p>Our proposed mechanism brings insights from multi-unit second-price auctions to the world of cryptocurrencies, where
currently multi-unit first price auctions are the norm. Whereas before, fee selection was a stressful and difficult
task, with our mechanism, users can simply attach to their transaction the true maximum value they would be happy to pay. This is because they are not going to be charged that value: instead, they are charged whatever the minimum was to get into that block. In essence, the lowest paying transaction establishes just how little it took to get into that block, and everyone within that block is charged the same amount per byte. This is not only equitable and fair, but it takes away pressure to play games with fee selection.
As an additional bonus, the benefit from playing such games decreases as the blockchain gets more popular, further disincentivizing strategic fee selection.
Note also that it picks the highest-paying transactions, just like a first-price auction, though all but the lowest-paying transaction are charged strictly less than what they bid.</p>
<p>Our proposal couples this idea with three other mechanisms to provide a comprehensive solution that prevents malicious behavior by miners.
First, if a miner fails to fill a block, they cannot charge any fees. So a miner cannot take a high-paying transaction, ignore the rest of the mempool, and collect the entirety of the fee paid by that transaction while refusing to fill a block. They are, of course, free to fill the rest of the block with their own synthetic transactions, but they will have to pay out of their own pockets for those (and the next point addresses why the miner cannot just pay those additional fees directly to himself).
Second, a miner is rewarded the average fee collected from the last B blocks, not just the fees from the block they themselves mined. This ensures that miners have little incentive to act strategically. Instead, their best interests are aligned with maximizing the number of high-value transactions cleared per second.
Finally, we propose that every block reserves some space, around 20%, which is exempt from this mechanism.
This enables a miner to include transactions of high importance to themselves,
such as those used for pool rewards, without affecting the fee mechanism and without being penalized.
This addresses the miners' needs and provides a simple migration path from the current state of affairs.</p>
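<p>The reward-averaging rule can be sketched as follows (the window size <em>B</em> and whether the window includes the miner's own block are our assumptions for illustration; the paper pins down the exact rule):</p>

```python
from collections import deque

B = 10  # reward-averaging window (a parameter of the scheme)

def miner_reward(history, block_fees):
    """Pay the miner the average fee total over the last B blocks."""
    history.append(block_fees)  # deque(maxlen=B) evicts the oldest entry
    return sum(history) / len(history)

history = deque(maxlen=B)
for fees in [10] * 9:           # nine ordinary blocks
    miner_reward(history, fees)

# A miner who stuffs their own block with a huge synthetic fee captures
# only ~1/B of the excess, so strategic block construction barely pays.
spike_reward = miner_reward(history, 1000)
assert spike_reward == (9 * 10 + 1000) / B
```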
<p>Our mechanism brings order to the chaotic fee market of today. During the dramatic Bitcoin price increase in December 2017, we estimate that users would have saved over $272 million in transaction fees and miners would have had a much more predictable fee revenue, reducing their daily fee variance by a factor of 7.4 on average. These gains are not surprising -- our mechanism enables transaction fees to correspond to the true demand that users have for block space. This will make the fee markets more predictable. As a result, users can just bid their true value and know that they are not going to be overpaying for block space. Thus, this removes a painful strategic element from simply using cryptocurrencies today.</p>
<p>If you are designing new cryptocurrencies or are an active user of existing currencies, we highly encourage you to push for this fee mechanism instead of unstable first-price auctions.</p>
</div>
<hr class="docutils" />
<div class="section" id="footnotes">
<h2>Footnotes</h2>
<table class="docutils footnote" frame="void" id="id3" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[1]</a></td><td>Cryptocurrencies that provide fee-free transactions for all are horribly and trivially broken, as they are open to simple, flooding-based denial-of-service attacks.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id4" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id2">[2]</a></td><td>SFYL: Sorry For Your Loss.</td></tr>
</tbody>
</table>
</div>
<div class="section" id="related">
<h2>Related</h2>
<p>Ron Lavi, Or Sattath, and Aviv Zohar have <a class="reference external" href="https://arxiv.org/abs/1709.08881">proposed a similar protocol</a> to ours;
however, it differs from ours in a few fundamental ways. Like us, they propose a protocol in which the winning miner
places transactions into a block and charges all transactions the lowest fee proposed by any transaction placed in that block.
Lavi, Sattath and Zohar assume a single monopolistic miner, and strive to maximize revenue from fees at a cost of lower social
welfare. In contrast, our work explicitly targets maximizing social welfare, and operates under a model with many miners. In
their system, the monopolistic miner is incentivized to leave transactions offering positive fees out of the block even if
there is space in the block as including them reduces the uniform price he can charge. This, of course, maximizes miner revenue,
but we believe that the first criterion for a viable protocol must be to use the blockchain efficiently, as otherwise users are
discouraged from participation. Their non-manipulation result is stronger than the one we obtain from our mechanism since we
only obtain declining gain from manipulation as the system grows, but it comes at a cost of lower social welfare and desirable
metrics such as transaction throughput and latency. Finally, in both our protocol and the protocol proposed by Lavi, Sattath and
Zohar, users’ incentive to behave strategically vanishes as the number of users grows.</p>
<p>Vitalik Buterin has <a class="reference external" href="https://ethresear.ch/t/draft-position-paper-on-resource-pricing/2838">proposed an alternative approach</a>
based on miners estimating, and dynamically adjusting, a single fee that
is charged uniformly to all transactions within a block, coupled with dynamically varying the block size to accommodate demand.
This approach differs from ours in a few key ways. First, it does not aim to maximize social welfare, and instead adopts heuristics
to modify two independent variables, fees and block size. The former goal, adopted by our work, will maximize transactions cleared
subject to any desired block size constraint, determined by any desirable mechanism. Since block size is a primary determinant of
security and centralization, we believe it is prudent to decouple its management from the fee mechanism. Second, it assumes that
the demand curve is known to the protocol, though it makes no assumptions about its behavior. If the demand curve could be inferred
accurately such that all transactions whose utility exceeds the block fee can always be accommodated, then Buterin's proposal would
have no incentive issues. However, inferring demand curves is difficult in adversarial, Byzantine environments, which is why auction
mechanisms are used. Finally, this approach has not been proven to be resistant to manipulation by users and miners. If it is not
resistant to manipulation, then this mechanism will suffer from the same problem as the current first price mechanism, where users
have to solve the fee selection problem all over again.</p>
</div>
https://hackingdistributed.com/2019/01/22/doing-fees-right/One File for the Price of Three: Catching Cheating Servers in Decentralized Storage Networks2018-08-06T18:00:00Z2018-08-06T18:00:00ZEthan Cecchetti and Ian Miers and Ari Juels<p>Hundreds of millions of dollars are riding on the solution to a really hard problem. Here we offer a solution.</p>
<p>Decentralized Storage Networks (DSNs) such as Filecoin want to store your files on strangers' spare disk space. To do it properly, you need to store the file in multiple places, since you don't trust any individual stranger's computer. But how do you differentiate between three honest servers with one copy each and three cheating servers with one copy total? Anything you ask one server about the file, it can get from its collaborator.</p>
<p>We are pleased to serve up the world's first <em>provably secure, practical</em> Public Incompressible Encoding (PIE). A PIE lets you encode a file in multiple different ways such that anyone can decode it, but nobody can efficiently use one encoding to answer questions about another. Using known techniques, you can ask simple questions to see if someone is storing an encoded replica, giving you a Proof of Replication (PoRep). With a PoRep, it's just a hop, skip, and a jump to secure decentralized storage and much more. But we're excited and getting way ahead of ourselves. Let's start from the beginning.</p>
<div class="section" id="the-pitch">
<h2>The Pitch</h2>
<p>The world has an insatiable hunger for storage. We're <a class="reference external" href="https://www.eetimes.com/author.asp?doc_id=1330462">using more and more storage</a>, and we're doing it <a class="reference external" href="https://www.statista.com/statistics/751749/worldwide-data-storage-capacity-and-demand/">much faster than we can build more</a>. But tons of storage devices—laptop and desktop hard drives, standalone storage systems, flash memory on mobile phones, and more, and more, and more—are sitting idle.</p>
<p>Wouldn't it be nice if all of this unused storage could be tapped? And wouldn't it be nicer still if its owners could profit from it? We may never know why Ethan configured his desktop with 6 TB of space despite needing less than 500 GB, but given his grad student stipend, it would be great to see him get something out of it.</p>
<p><strong>The Billion-Dollar Token:</strong> <em>Decentralized Storage Networks</em> (DSNs), such as Sia, Storj, MaidSafe, and the soon-to-be-released Filecoin, promise to do exactly this. They offer users tokens for renting out their own unused storage. Yes, you can turn your hard drive into an AirBnB for bits.</p>
<p>Of course, you need a mechanism to make sure users actually store what they say they are storing. This is fairly easy, at least in concept: to check that someone has a copy of the file, periodically ask them for small pieces of it at random.</p>
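<p>Such a spot-check can be sketched in a few lines of Python (illustrative only; real systems typically use Merkle proofs rather than precomputed chunk hashes). Note that this check only proves <em>some</em> copy exists somewhere; it cannot distinguish three honest servers from three colluding ones, which is exactly the gap discussed next.</p>

```python
import hashlib
import os
import random

CHUNK = 4096  # size of each challenged piece, in bytes

def make_challenges(path, n_challenges=5):
    """Client precomputes hashes of a few random chunks before uploading."""
    size = os.path.getsize(path)
    challenges = []
    with open(path, "rb") as f:
        for _ in range(n_challenges):
            offset = random.randrange(max(size - CHUNK, 1))
            f.seek(offset)
            digest = hashlib.sha256(f.read(CHUNK)).hexdigest()
            challenges.append((offset, digest))
    return challenges

def respond(path, offset):
    """Server hashes the requested chunk; the client compares digests."""
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.sha256(f.read(CHUNK)).hexdigest()

# Demo with a throwaway file standing in for the stored data.
with open("demo.bin", "wb") as f:
    f.write(os.urandom(64 * 1024))

challenges = make_challenges("demo.bin")
assert all(respond("demo.bin", off) == d for off, d in challenges)
os.remove("demo.bin")
```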
<p><strong>The Billion-Dollar Problem:</strong> If you store your data with Amazon or Google or Microsoft, they promise to store your file multiple times. That way if a hard drive fails, your pictures aren't gone. Our DSN needs to do the same thing. After all, the average home computer is not reliable. To store our pictures on three computers, we need to pay each one of them, but that's a small price to pay if it makes the data safe, right? Unfortunately, you are unlikely to get what you pay for.</p>
<p>Once there is money involved, people take shortcuts for profit. Instead of storing one file three times at $1 per copy, a set of servers can pretend to do so and only keep one copy between all three of them. Whenever anyone checks if one of their computers has a copy, they can use the single shared copy to pass any test asked of them. As a result, they can free up space to store two more files which they also falsely claim to have replicated. This nets them 3X as much money. It also means there's only one copy of your pictures and it's probably on the least reliable computer they could find (those are cheaper, after all). The only practical way around this is to make the three copies entirely distinct.</p>
<p>Sia, Storj, MaidSafe, etc. do this by encrypting your pictures three times and distributing those encrypted copies. This strategy solves the problem perfectly if you're the one encrypting your photos and someone else is storing them.</p>
<p><strong>The Challenge:</strong> But what happens if the data isn't your family photos? What happens if it's a public good like blockchain state? Blockchains are getting huge—<a class="reference external" href="https://etherscan.io/chart2/chaindatasizefast">Ethereum is over 75 GB and growing fast</a>—and it's really expensive to have everyone store everything. But who encrypts the blockchain state? Everyone needs to read it. And worse, we don't trust anyone to encrypt it properly. We need something totally different that anyone can check and anyone can decode.</p>
<p>We need a <em>Public</em> Incompressible Encoding. In fact, this is needed for many settings, for example if you want some mining process that depends on miners storing arbitrary files.</p>
</div>
<div class="section" id="making-pies-is-hard">
<h2>Making PIEs is Hard</h2>
<img alt="https://hackingdistributed.com/images/2017-bitcoin/pie1.jpg" class="align-center" src="https://hackingdistributed.com/images/2017-bitcoin/pie1.jpg" style="width: 80%;" />
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>If anyone can decode data and we can't trust anyone to encode it properly, then how, you might ask, could we possibly prevent the attack we described above? Assume we have a malicious server, Mallory, who tries to store as little as possible. Sure, we can encode a file in three different ways, but if everything is public, our cheating storage server Mallory can still just store one copy. When we query her for an encoding, she can re-encode her one copy to whichever encoding was asked for. It seems impossible to stop.</p>
<p>Indeed, without secret information, we need some other way to restrict Mallory. We can do this with time. If re-encoding any piece of the file takes a few minutes, we can probably tell if we ask for the file and Mallory responds in two minutes instead of two seconds. At the same time, if we don't do the encoding piecemeal, we can make encoding an entire file reasonably efficient.</p>
<p>This idea isn't new. Filecoin <a class="reference external" href="https://filecoin.io/proof-of-replication.pdf">suggested using timing</a>, as did <a class="reference external" href="http://www.arijuels.com/wp-content/uploads/2013/09/vDJORST12.pdf">van Dijk et al.</a> for some related mechanisms. So what's the problem?</p>
<p>Well, van Dijk et al. mix data blocks together in a configuration that makes recomputing discarded data really slow...if Mallory is pulling all of her data from the same rotational hard drive. These days that's probably not a good assumption, especially if she's cheating with custom hardware. Filecoin suggested something simpler: a long repeated chain of operations over the whole file. Unfortunately, van Dijk et al. anticipated this construction and showed that Mallory can save most of the storage and almost all of the computation by storing carefully-selected intermediate values.</p>
<p>So Filecoin's approach is broken and van Dijk et al. make assumptions about hardware that no longer hold. Given the ability to build ASICs, we need to be extremely careful about what assumptions we use. Indeed, many things that were billed as ASIC resistant—such as Equihash and Ethash—now have ASICs.</p>
<p>Rather than finding some new hardware assumption for storage, we build our PIE around some fairly slow <em>key derivation function</em> (KDF)—what non-cryptographers might refer to as a kind of hash function. For the prototype, we use scrypt, which exploits the fact that it appears hard to accelerate sequential memory access without advances in commodity RAM, but if that turns out to be false, any moderately slow KDF will do. To make things really slow, we force Mallory to run our KDF a large number of times in sequence.</p>
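<p>As a back-of-the-envelope sketch (ours, not the paper's construction), chaining a slow KDF so that each call consumes the previous output forces the work to be sequential:</p>

```python
import hashlib

def slow_kdf(seed: bytes) -> bytes:
    # scrypt as a moderately slow, memory-hard KDF.
    # These parameters are illustrative, not the paper's.
    return hashlib.scrypt(seed, salt=b"pie", n=2**14, r=8, p=1, dklen=32)

def sequential_chain(seed: bytes, rounds: int) -> bytes:
    # Each call consumes the previous output, so the rounds
    # cannot be computed in parallel.
    out = seed
    for _ in range(rounds):
        out = slow_kdf(out)
    return out
```

<p>Doubling the number of rounds doubles the wall-clock time for anyone, no matter how much parallel hardware they own.</p>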
<p>This, of course, raises the all-important question: how do we force that work to be sequential? Enter graph theory.</p>
<p><strong>Computation as a DAG:</strong> To understand data dependencies within our computation, we represent it as a <em>directed acyclic graph</em> (DAG). Each vertex represents an operation with two steps: first we derive a key using KDF, and then we encrypt some data using that key. Edges are either <em>data edges</em>, representing the inputs and outputs of the encryption, or <em>key edges</em>, representing the data used by KDF to derive the key. In the below graph red dashed edges are key edges, while solid black edges are data edges.</p>
<img alt="https://hackingdistributed.com/images/2017-bitcoin/pie2.jpg" class="align-center" src="https://hackingdistributed.com/images/2017-bitcoin/pie2.jpg" style="width: 55%;" />
<div class="line-block">
<div class="line"><br /></div>
</div>
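<p>A single vertex might look like the following toy sketch, where SHA-256 stands in for the slow KDF and a simple XOR pad stands in for the encryption (both substitutions are ours, for illustration only; the pad covers blocks up to 32 bytes):</p>

```python
import hashlib

def kdf(x: bytes) -> bytes:
    # Stand-in for the slow KDF; a real PIE would use something like scrypt.
    return hashlib.sha256(x).digest()

def encode_vertex(block: bytes, key_parents: list) -> bytes:
    # Key edges: outputs of parent vertices feed the KDF to derive
    # this vertex's key. The key then encrypts the input block.
    key = kdf(b"".join(key_parents))
    return bytes(b ^ k for b, k in zip(block, key))

def decode_vertex(encoded: bytes, key_parents: list) -> bytes:
    # Decoding is public: anyone who can rederive the key inverts the XOR.
    key = kdf(b"".join(key_parents))
    return bytes(b ^ k for b, k in zip(encoded, key))
```

<p>Note that decoding needs no secret, which is exactly the <em>public</em> part of a PIE.</p>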
<p>We can use this formulation to track how much sequential work needs to happen. Because we need the output from one vertex in order to compute the values at any of its children, a path in the graph corresponds to inherently sequential work. Any key edges in that path require inherently sequential calls to KDF. Therefore, if the graph has long paths of key edges, encoding will require a lot of sequential work.</p>
<p>But we don't just care about encoding the whole file. We need Mallory to do a lot of work even if she only throws away some of the data. Worse, Filecoin's original proposal was broken by storing intermediate values, so we need to watch out for that as well. Since every piece of data in our computation is the output of a vertex in our graph representation, we can view Mallory storing data (intermediate or not) as removing vertices from the graph. We now need to check that even if Mallory removes a bunch of carefully-selected vertices, there is still a path with a lot of key edges.</p>
<p><strong>Depth-robust graphs:</strong> A depth-robust graph (DRG) is a DAG with almost exactly this property: even if a (potentially sizable) fraction of the vertices are removed, it retains a <em>long path</em>. This is a very useful property. Even if Mallory stores a lot of data, she can't remove all of the long paths. Ben Fisch first suggested applying this insight to PoReps in a manner similar to how <a class="reference external" href="https://eprint.iacr.org/2011/553.pdf">Mahmoody et al.</a> applied it to prove inherently sequential work.</p>
<p>The obvious way to build a PIE using a DRG is to cut the file into one piece per vertex and then scramble it one vertex at a time. The edges of the DRG become key edges, so the output of one vertex is needed (along with the relevant file block) to compute the next. Now, if Mallory throws away too much of the file, there will be a long path left in the DRG and she'll have to do a lot of sequential work to recompute the discarded data.</p>
<p>Unfortunately, this doesn't quite solve our problem. Mallory will need a lot of sequential work to recompute <em>some</em> of the discarded data, but not all of it. Some pieces will be easy to encode. Mallory can simply not store those pieces, instead opting to re-encode them on-the-fly. The first vertex in the path, for example, needs only one call to KDF.</p>
<p>In <a class="reference external" href="https://www.youtube.com/watch?v=8_9ONpyRZEI">talks on possible approaches</a> in early 2018, Fisch proposed this approach and suggested addressing this concern using erasure coding, making the file internally redundant. The idea was that at least some of the redundancy would be hard to re-encode. That approach, however, has several downsides. First, it expands the size of the file, potentially substantially. Second, it still allows cheating servers to gain some storage advantage over honest ones since they can discard some of the redundant data. This is a practical problem if storage providers are compensated for the full size of the expanded file. Finally, because each encoded block is required to recover not only its own plaintext but that of each of its children in the DRG, there was no clear technique for detecting a server that discarded enough data to make the file unrecoverable, nor a formal definition of what that would mean. <a class="footnote-reference" href="#id2" id="id1">[1]</a></p>
<p>We opted for a different, and we believe cleaner, approach. We want to force Mallory to do a long sequential computation for <em>any</em> block of data she's thrown away. Ensuring this while providing ASIC resistance is the critical challenge that any solution must confront.</p>
</div>
<div class="section" id="dagwood-sandwich-graphs-dsags">
<h2>Dagwood Sandwich Graphs (DSaGs)</h2>
<p>The approach we take involves feeding the outputs of a depth-robust graph into a second type of graph called a butterfly graph, which has the property that there's a path from every input node to every output node. This graph, with its dependency of every output on every input, helps ensure, roughly speaking, that Mallory must compute every input node, and therefore cannot avoid computing along the long sequential path in the depth-robust graph. In fact, to ensure this dependency, we must layer a sequence of depth-robust graphs and butterfly graphs, something like this (where the <em>G</em>s are depth-robust graphs and the <em>B</em>s are butterfly graphs):</p>
<img alt="https://hackingdistributed.com/images/2017-bitcoin/pie3.png" class="align-center" src="https://hackingdistributed.com/images/2017-bitcoin/pie3.png" style="width: 100%;" />
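<p>The all-inputs-to-all-outputs mixing of a butterfly layer can be sketched as follows. This toy version (ours) uses a fast hash and is not invertible, so it shows only the dependency structure, not a real encoding layer, which must be decodable:</p>

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def butterfly_mix(blocks):
    # FFT-style butterfly network: in round k, block i is combined with
    # block i XOR 2^k. After log2(n) rounds, every output depends on
    # every input. n must be a power of two.
    n = len(blocks)
    assert n & (n - 1) == 0, "n must be a power of two"
    stride = 1
    while stride < n:
        blocks = [h(blocks[i] + blocks[i ^ stride]) for i in range(n)]
        stride *= 2
    return blocks
```

<p>Because every output depends on every input, Mallory cannot skip any input block of a butterfly layer and still answer queries about its outputs.</p>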
<p>We think this construction looks like the famous Dagwood sandwiches from the cartoon strip Blondie, dramatized by the following lunch:</p>
<img alt="https://hackingdistributed.com/images/2017-bitcoin/pie4.jpg" class="align-center" src="https://hackingdistributed.com/images/2017-bitcoin/pie4.jpg" style="height: 20em;" />
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Yes, indeed. As if things weren't confusing enough, our PIE is made from a terrifying sandwich.</p>
</div>
<div class="section" id="from-pie-to-dsn">
<h2>From PIE to DSN</h2>
<img alt="https://hackingdistributed.com/images/2017-bitcoin/pie5.png" class="align-center" src="https://hackingdistributed.com/images/2017-bitcoin/pie5.png" style="width: 70%;" />
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Given a PIE, a potentially malicious storage provider like Mallory can prove that she's honestly replicating files. But how do you build a DSN using a PIE?</p>
<p>Pretty much any PIE-based DSN architecture involves two steps for a given file:</p>
<ol class="arabic simple">
<li>Prove once that the file is encoded correctly.</li>
<li>Audit by verifying continuously that the file is intact.</li>
</ol>
<p>Let's start with (1). Storage providers in the DSN must prove to somebody—the file owner or the network—that an encoding <em>G</em> of a file <em>F</em> is a correct PIE. Given an authenticated version of <em>F</em>, such as a hash stored in a trusted place, it's easy to verify that a PIE is correct. It is <em>public</em>, after all, so anyone can decode <em>G</em> to recover <em>F</em>. Decoding is an expensive operation, though, so it would be nice to have an efficient way to prove correctness. We propose a different approach in our work, showing how to efficiently prove that <em>G</em> is <em>incorrect</em>. This creates conditions for various "cryptoeconomic" approaches to verification, such as slashing a provider proven to be cheating. Alternatively, SNARKs offer efficient means to verify, if not prove, correctness.</p>
<p>As for (2), it's not much help for <em>G</em> to be correct if it goes missing. It's critical to <em>continuously</em> check that storage providers are still storing <em>G</em> and haven't thrown data away. There are a number of efficient, <a class="reference external" href="http://www.arijuels.com/wp-content/uploads/2013/09/JK07.pdf">well</a> <a class="reference external" href="https://hovav.net/ucsd/dist/verstore.pdf">established</a> <a class="reference external" href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.478.5509&rep=rep1&type=pdf">techniques</a> for this purpose. A simple one used by existing DSNs is to build a Merkle tree on <em>G</em> whose leaves are <em>G</em>'s blocks and publish the root. A storage provider can then be challenged to provide a random leaf and prove its correctness by furnishing a path to the published root.</p>
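<p>A minimal sketch of this Merkle-tree audit, assuming a power-of-two number of blocks (hash and layout choices are ours, for illustration):</p>

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(blocks):
    layer = [h(b) for b in blocks]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(blocks, idx):
    # Authentication path for blocks[idx]: one sibling hash per tree level.
    layer, proof = [h(b) for b in blocks], []
    while len(layer) > 1:
        proof.append(layer[idx ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return proof

def verify(root, block, idx, proof):
    node = h(block)
    for sibling in proof:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root
```

<p>A provider challenged on block <em>i</em> answers with the block and its path; a missing or altered block fails verification against the published root.</p>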
<p>A blockchain can then perform the auditing. This could be an existing blockchain like Ethereum or, as with schemes like Sia, a new one whose consensus algorithm is independent of its storage goals. A particularly intriguing option, though, is to create a new blockchain where <strong>audit = mining</strong>.</p>
<p>The allure is an escape from the waste and environmental destruction of Proof of Work (PoW). Mining via useful storage was first proposed in 2014 in <a class="reference external" href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/permacoin.pdf">Permacoin</a>. Permacoin required that nodes store a large, predetermined, public archive, however, and was only slightly less wasteful than conventional PoW.</p>
<p>PIEs completely change the game. Because they are incompressible, we eliminate any clever tricks a malicious miner could pull by specially crafting the files. Even if the file is all zeros, the miner must store the full encoded version. PIEs enable use of <em>any</em> files, even those owned by miners themselves, for mining = audit. This is the idea proposed in Filecoin, for example. Ultimately, PIEs may enable a secure and efficient consensus protocol whose main resource is storage devoted to useful data, rather than wasted electricity. Provably secure, efficient PIEs are a first step in this direction, and <a class="reference external" href="https://eprint.iacr.org/2018/601.pdf">parallel work by other teams</a> happily promises to provide a stable of complementary, evolving techniques.</p>
<p>Creating a provably secure and minimal-waste scheme for mining = audit is the next step. IC3 is beavering away at it. We're happy to hear from partners who'd like to help create the next generation of DSN.</p>
<p>A full copy of this work can be found on the <a class="reference external" href="https://ia.cr/2018/684">IACR Cryptology ePrint Archive</a>.</p>
</div>
<div class="section" id="backstory-and-acknowledgements">
<h2>Backstory and Acknowledgements</h2>
<p>We would like to give a special thanks to <a class="reference external" href="https://sites.google.com/site/benafisch/">Ben Fisch</a> and <a class="reference external" href="http://jbonneau.com/">Joe Bonneau</a> for alerting us to the utility of depth-robust graphs. We had been working concurrently on the proof of replication problem initially using an approach based on <a class="reference external" href="http://www.arijuels.com/wp-content/uploads/2013/09/vDJORST12.pdf">hourglass functions</a>, and butterfly graphs. We had several approaches that looked promising but, when we tried to prove them secure, turned out to be subtly but completely broken. We learned an important lesson: This problem space is deceptively hard. We therefore chose to hold off talking about results until we had a concrete construction and security proof. Only now have we gotten there.</p>
<p>After <a class="reference external" href="https://www.youtube.com/watch?v=8_9ONpyRZEI">Ben's January 2018 talk</a> on their approach, we had very useful discussions with both Ben and Joe about what security they could potentially achieve. This communication helped us identify the need for strong definitions that neither allowed a cheating server to discard any of the file risk-free nor required file expansion. It was not clear, though, how to adapt the construction Ben presented—effectively the strawman DRG construction given earlier in this post—to satisfy these properties. We got the idea of using depth-robust graphs in a different manner, combining them with hourglass functions. It took some time and very different definitions, but we ultimately arrived at a solution that provably achieved our desired properties.</p>
<p>We would also like to thank <a class="reference external" href="https://juan.benet.ai/">Juan Benet</a> and <a class="reference external" href="https://nicola.io/">Nicola Greco</a> at <a class="reference external" href="https://protocol.ai/">Protocol Labs</a> for helping clarify the motivation and practical requirements of the context.</p>
</div>
<div class="section" id="footnotes">
<h2>Footnotes</h2>
<table class="docutils footnote" frame="void" id="id2" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[1]</a></td><td>Since we submitted this work, Fisch et al. have released <a class="reference external" href="https://web.archive.org/web/20180717050203/https://web.stanford.edu/~bfisch/porep.pdf">three</a> <a class="reference external" href="https://web.archive.org/web/20180717163151/https://web.stanford.edu/~bfisch/porep_short.pdf">different</a> <a class="reference external" href="https://eprint.iacr.org/2018/702.pdf">articles</a> with in-depth descriptions of constructions and formal theory that differs significantly from his talks. At the time of posting we had not had a chance to read and digest these.</td></tr>
</tbody>
</table>
</div>
<h1>On-Chain Vote Buying and the Rise of Dark DAOs</h1>
<p><em>By Philip Daian, Tyler Kell, Ian Miers, and Ari Juels (2018-07-02)</em></p>
<p>Blockchains seem like the perfect technology for online voting. They can act as “bulletin boards,” global ledgers that were hypothesized (but never truly realized) in decades of e-voting research. Better still, blockchains enable smart contracts, which can execute on-chain elections autonomously and do away with election authorities.</p>
<p>Unfortunately, smart contracts aren’t just good for running elections. They’re also good for <em>buying</em> them.</p>
<p>In this blog post, we’ll explain how and why. As an example, we’ll present a fully implemented, simple vote buying attack against the popular on-chain CarbonVote system. We’ll also discuss how trusted hardware enables even more powerful vote buying techniques that seem irresolvable even given state-of-the art cryptographic voting protocols.</p>
<p>Finally, we introduce a new form of attack called a <em>Dark DAO</em>, not to be confused with <em>the</em> <a class="reference external" href="https://medium.com/@oaeee/the-rise-of-the-dark-dao-72b21a2212e3">“Dark DAO”</a> the same way DAOs should not be confused with <a class="reference external" href="http://hackingdistributed.com/2016/06/18/analysis-of-the-dao-exploit/">The DAO</a>. A Dark DAO is a decentralized cartel that buys on-chain votes opaquely (“in the dark”). We present one concrete embodiment based on Intel SGX.</p>
<p>In such an attack, potentially nobody, not even the DAO’s creator, can determine the DAO’s number of participants, the total amount of money pledged to the attack, or the precise logic of the attack: for example, the Dark DAO can attack a currency like Tezos, covertly collecting coins until it reaches some hidden threshold, and then telling its members to short the currency. Such a Dark DAO also has the unique ability to enforce an <em>information asymmetry</em> by sending out, for example, deniable short notifications: members inside the cartel would be able to verify the short signal, but themselves could generate seemingly authentic false signals to send to outsiders.</p>
<p>The existence of trust-minimizing vote buying and Dark DAO primitives implies that users of all on-chain votes are vulnerable to shackling, manipulation, and control by plutocrats and coercive forces. This directly implies that all on-chain voting schemes where users can generate their own keys outside of a trusted environment inherently degrade to plutocracy, a paradigm <a class="reference external" href="https://en.wikipedia.org/wiki/Plutocracy">widely considered inferior to democratic models</a> that such protocols attempt to approximate on-chain.</p>
<p><strong>All of our schemes and attacks work regardless of identity controls</strong>, allowing user actions to be freely bought and sold. This means that schemes that rely on user-generated keys bound to user identities, like uPort or Circles, are also inherently and fundamentally vulnerable to arbitrary manipulation by plutocrats. Our schemes can also be repurposed to attack proof of stake or proof of work blockchains profitably, posing severe security implications for all blockchains.</p>
<div class="section" id="blockchain-voting-today">
<h2>Blockchain Voting Today</h2>
<p>Blockchain voting schemes abound today. There’s <a class="reference external" href="https://votem.com/">Votem</a>, an end-to-end verifiable voting scheme that allows voting using mobile devices and leverages the blockchain as a place to securely post and tally the election results. Remix, the popular smart contract IDE, offers an election-administering smart contract as its training example. Yet more examples can be found <a class="reference external" href="https://cointelegraph.com/news/eip-999-why-a-vote-to-release-parity-locked-funds-evoked-so-much-controversy">here (1)</a>, <a class="reference external" href="https://futurism.com/the-dao-heist-undone-97-of-eth-holders-vote-for-the-hard-fork/">here (2)</a>, and <a class="reference external" href="https://ethereumworldnews.com/eos-voting-abp/">here (3)</a>.</p>
<p>On-chain voting schemes face many challenges: privacy, latency, and scaling among them. None of these is peculiar to voting, and all will eventually be surmountable. Vote buying is a different story.</p>
<p>In political systems, vote buying is a pervasive and corrosive form of election fraud, with a <a class="reference external" href="https://en.wikipedia.org/wiki/Electoral_fraud#Vote_buying">substantial history</a> of undermining election integrity around the world. Sometimes, the price of a vote is a <a class="reference external" href="https://www.washingtonpost.com/politics/decision2012/selling-votes-is-common-type-of-election-fraud/2012/10/01/f8f5045a-071d-11e2-81ba-ffe35a7b6542_story.html">glass of beer</a>. Thankfully, as scholars have <a class="reference external" href="https://polsci.umass.edu/file/904/download?token=wqyQOKoP">observed</a>, normal market mechanisms usually break down in vote buying schemes, for three reasons. First, vote buying is in most instances a crime. In the U.S., it’s punishable under <a class="reference external" href="https://www.law.cornell.edu/uscode/text/18/597">federal law</a>. Second, where secret ballots are used, compliance is hard to enforce. A voter can simply drink your beer, and cast her ballot in secret however she likes. Third, even if a voter does sell their vote, there is no guarantee the counter-party will pay.</p>
<p>No such obstacles arise in blockchain systems. Vote buying marketplaces can be run efficiently and effectively using the same powerful tool for administering elections: smart contracts. Pseudonymity and jurisdictional complications, as always, provide (some) cover against prosecution.</p>
<p>In general, electronic voting schemes are in some ways harder to secure against fraud than in-person voting, and have been the <a class="reference external" href="https://en.wikipedia.org/wiki/End-to-end_auditable_voting_systems">subject of general and academic interest</a> for many years. One of the fundamental building blocks was <a class="reference external" href="https://www.freehaven.net/anonbib/cache/chaum-mix.pdf">introduced early by David Chaum</a>, providing anonymous mix networks for messages which could be anonymously sent by participants with receipts of inclusion. Such end-to-end verifiable voting systems, where users can check that their votes are correctly counted without sacrificing privacy, are not just the realm of theoreticians and have <a class="reference external" href="https://www.usenix.org/legacy/event/sec10/tech/full_papers/Carback.pdf">actually been used for binding elections</a>.</p>
<p><a class="reference external" href="https://dl.acm.org/citation.cfm?id=195407">Later work by Benaloh and Tuinstra</a> took issue with electronic voting schemes, noting that they offered voters a “receipt” that provided cryptographic proof of which way a given vote had been cast. This would allow for extremely efficient vote buying and coercion, clearly undesirable properties. The authors defined a new property, <em>receipt-freedom</em>, to describe voting schemes where no such cryptographic proof was possible. <a class="reference external" href="https://eprint.iacr.org/2002/165.pdf">Further work by Juels, Catalano, and Jakobsson</a> modeled even more powerful coercive adversaries, showing that even receipt-free schemes were not sufficient to prevent coercion and vote buying. This work defined a new security definition for voting schemes called “coercion resistance”, providing a protocol where no malicious party could successfully coerce a user in a manner that could alter election results.</p>
<p>In their work, Juels et al. note that “the security of our construction then relies on generation of the key pairs… by a trusted third party, or, alternatively, on an interactive, computationally secure key-generation protocol such as [24] between the players”. Such “trusted key generation”, “trusted third party”, or “trusted setup” assumptions are standard in the academic literature on coercion resistant voting schemes. Unfortunately, these requirements <em>do not</em> translate to the <a class="reference external" href="https://eurocrypt.iacr.org/2017/slides/A04-analysis-of-the-blockchain.pdf">permissionless</a> model, in which nodes can come and leave at any time without knowing each other a priori. This (somewhat) inherently means users generate their own keys in all such deployed systems, and cannot take advantage of trusted multiparty key generation or any centralized key service arbiter.</p>
<p>The blockchain space today, with predictable results, continues its tradition of ignoring decades of study and instead opts to implement the most naive possible form of voting: directly counting coin-weighted votes in a plutocratic fashion, stored in plain text on-chain. Unfortunately, it is not clear that better than such a plutocracy is achievable on-chain. We show that the permissionless model is fundamentally hostile to voting. Despite any identity or second-layer based mitigation attempts, <strong>all permissionless voting systems</strong> (or schemes that allow users to generate their own key in an untrusted environment) are vulnerable to the same style of vote buying and coercion attacks. Many vote buying attacks can also be used for coercion, shackling users to particular voting choices by force.</p>
<div class="figure">
<img alt="Man in suit shoving a dollar bill in voter's pocket." class="no-border" src="https://hackingdistributed.com/images/2017-bitcoin/vv1.jpg" style="width: 500px;" />
<p class="caption">That's a nice on-chain vote you've got there...</p>
</div>
<p>It is worth noting that the severity of bribery attacks in such protocols was <a class="reference external" href="https://vitalik.ca/general/2018/03/28/plutocracy.html">partially explored by Vitalik Buterin</a>, though concrete mechanisms were not provided. Here we describe, at a high level, frictionless mechanisms useful for vote buying, identity buying, coercion, and coordination, and discuss the implications of these particular mechanisms.</p>
</div>
<div class="section" id="attack-flavors">
<h2>Attack Flavors</h2>
<p>Consider a very simple voting scheme: Holders of a token get one vote per token they hold and can change their votes continually until some closing block number. We’ll use this simple “EZVote” scheme to build intuition for how our attacks can work in any on-chain voting mechanism.</p>
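<p>EZVote's rules are simple enough to model directly (a toy simulation for intuition, not a smart contract; all names are illustrative):</p>

```python
class EZVote:
    """Toy model: one vote per token held; voters may change
    their votes until the closing block."""

    def __init__(self, balances, closing_block):
        self.balances = balances        # address -> token balance
        self.closing_block = closing_block
        self.choices = {}               # address -> current choice

    def vote(self, voter, choice, current_block):
        if current_block >= self.closing_block:
            raise ValueError("voting is closed")
        self.choices[voter] = choice    # overwrites any earlier vote

    def tally(self):
        totals = {}
        for voter, choice in self.choices.items():
            totals[choice] = totals.get(choice, 0) + self.balances.get(voter, 0)
        return totals
```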
<p>There are several possible escalating attack flavors of such a scheme.</p>
<div class="section" id="simple-smart-contracts">
<h3>Simple Smart Contracts</h3>
<p>The simplest low-coordination attack on on-chain voting systems involves vote buying smart contracts. Such smart contracts would simply pay users upon a provable vote for one option (or to participate in the vote, or to abstain from the vote if the vote is not anonymous). In EZVote, the smart contract could be a simple contract that holds your ERC20 until after the end date, votes yes, and returns it to you; all guarantees in the contract could be enforced by the underlying blockchain.</p>
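<p>The bribe contract's logic is simple enough to model in a few lines (a toy simulation, not a real contract; names and the bribe parameter are ours, for illustration):</p>

```python
class VoteBuyEscrow:
    """Toy model of the vote-buying contract described above: it takes
    custody of a voter's tokens, votes "yes" with them, and returns the
    tokens plus a bribe once the vote closes."""

    def __init__(self, closing_block, bribe_per_token):
        self.closing_block = closing_block
        self.bribe_per_token = bribe_per_token
        self.deposits = {}      # voter -> tokens held in escrow
        self.yes_votes = 0      # tokens the contract is voting "yes" with

    def deposit(self, voter, tokens, current_block):
        if current_block >= self.closing_block:
            raise ValueError("vote already closed")
        self.deposits[voter] = self.deposits.get(voter, 0) + tokens
        self.yes_votes += tokens   # custody of tokens = custody of votes

    def withdraw(self, voter, current_block):
        if current_block < self.closing_block:
            raise ValueError("tokens locked until the vote closes")
        tokens = self.deposits.pop(voter)
        return tokens, tokens * self.bribe_per_token
```

<p>Because the tokens sit in the contract until the closing block, the "yes" votes cannot be retracted, and the seller's payout is guaranteed by the chain itself.</p>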
<p>Such a scheme has advantages in that it requires only the trust assumptions already inherent in the underlying system, but has substantial disadvantages as well. For one, it is likely possible to publicly tell how many votes are purchased after the election is over, as this is required to handle the flow of payments in today’s smart contract systems. Also, the in-platform nature of the bribe <a class="reference external" href="https://www.ethnews.com/arbitration-forum-orders-more-eos-accounts-frozen-but-apparent-hoax-muddies-waters">opens it to censorship</a> by parties interested in preserving the health of the underlying platform/system.</p>
<p>Depending on the nature of the voting scheme and the underlying protocol, there may be some workarounds for these downsides. Voters could for example provide a ring signature proving to a vote buyer that they are in a list of voters who votes yes in exchange for payments. We leave the implementation details and generalizability of such schemes open.</p>
<p>In general, any mechanism for private smart contracts can also be used for private vote buying, solving the public nature of a smart contract based attack; cryptographically an equivalent would be the vote buyer and seller generating a secret key for funds storage via MPC together, signing two transactions: a yes vote and a transaction that released funds to the vote seller after the end of the interval. The vote seller would move funds to this key only after possessing the transaction guaranteeing a refund and payment.</p>
<p>This would look similar to previous work on <a class="reference external" href="https://arxiv.org/pdf/1706.03370.pdf">distributed certificate generation</a>, adding security analysis for ensuring fairness. A naive implementation of such a scheme would encumber a users’ use of funds for other purposes during the vote (such actions are possible but require cooperation on behalf of the vote buyer; alternatively, a trusted/bonded escrow party can be used).</p>
</div>
<div class="section" id="trusted-hardware-buying">
<h3>Trusted Hardware Buying</h3>
<p>An even more concerning vote buying attack scheme involves the use of trusted hardware, such as <a class="reference external" href="https://en.wikipedia.org/wiki/Software_Guard_Extensions">Intel SGX</a>. Such hardware has a key feature called <a class="reference external" href="https://en.wikipedia.org/wiki/Trusted_Computing#Remote_attestation">remote attestation</a>. Essentially, if Alice and Bob are communicating on the Internet, the trusted computing achieved by SGX allows Alice to prove to Bob that she is running a certain piece of code.</p>
<p>Trusted hardware is usually seen as a way to prove that you are running code that will not be malicious: for example, it is used <a class="reference external" href="http://www.trulyprotect.com/docs/PII.pdf">in DRM</a> to prove that a user will not copy files that are only temporarily licensed to them, like movies. Instead, we will use trusted hardware to <em>shackle</em> cryptocurrency users, paying or forcing them to use cryptocurrency wallets based on trusted hardware that <strong>provably restrict their space of allowed behaviors</strong> (e.g. by forcing them not to vote a certain way in an election) or <strong>allow the vote buyer trust-minimized but limited use of a user’s key</strong> (e.g. a vote buyer can force a user to sign “I Vote A”, but cannot steal or spend a user’s money).</p>
<p>The simplest way to deploy such technology for vote buying is to simply allow users to prove they are running a vote buyer’s malicious wallet code in exchange for a payment, secured on both sides by remote attestation technology.</p>
<p>In our “EZVote” example, a user would simply use a cryptocurrency wallet loaded on Intel’s SGX, running the vote buyer’s program. SGX would guarantee to the user that the wallet could never steal the user’s money (unless Intel colludes with the vote buyer). The user can provably use the wallet for everything they can do with a normal Ethereum wallet, including moving their money out (though in this case they would not be paid). The user runs their own wallet, and does not need to trust a third party for control or security of their funds. The user may not need to trust even Intel or the trusted hardware provisioner for security of their funds, as they can compile their own wallet!</p>
<p>When a predefined trigger condition occurs, such an SGX program would automatically vote on EZVote as the vote buyer commands, and send a receipt to the vote buyer. The vote buyer would itself run an SGX enclave that maintains a total of all users who claim to have voted yes, and a list of their addresses. Given trust in SGX, the vote buyer need not see the full list of member users or know the total pledged amount. At the end of the vote, the vote buyer’s enclave would pay all the users who have not moved their funds or changed their vote. This would be accomplished by the enclave periodically posting a Merkle root summarizing users to be paid on-chain, providing proof to each user that they will eventually be paid. Users can claim payment after the expiry of some period by providing proofs of inclusion in the posted Merkle history. In some particularly vulnerable vote designs, an SGX enclave can increase its efficiency by simply accumulating “yes” votes from users up-front as transactions, publishing and providing payment for them at the conclusion of the vote.</p>
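<p>The Merkle payout step above can be sketched concretely. The following toy Python model (not the enclave code itself; all names, addresses, and amounts are illustrative) shows how an enclave could commit to the set of payees with a single on-chain root, while each user independently verifies a proof of inclusion:</p>

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes proving leaves[index] is in the tree."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1                        # sibling index at this level
        proof.append((level[sibling], i % 2))  # (sibling hash, 1 if our node is the right child)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    acc = leaf
    for sibling, node_was_right in proof:
        acc = h(sibling + acc) if node_was_right else h(acc + sibling)
    return acc == root

# The enclave commits to the set of addresses owed payment with one root.
payees = [b"0xAlice:10", b"0xBob:25", b"0xCarol:5"]
leaves = [h(p) for p in payees]
root = merkle_root(leaves)                     # posted on-chain by the enclave

# Bob later claims payment with a proof of inclusion.
proof = merkle_proof(leaves, 1)
assert verify(leaves[1], proof, root)
```

A production scheme would additionally need the on-chain contract to track claimed leaves and the enclave to sign each posted root, but the commitment-and-claim flow is the same.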
</div>
<div class="section" id="hidden-trusted-hardware-cartels-dark-daos">
<h3>Hidden Trusted Hardware Cartels (Dark DAOs)</h3>
<p>A more concerning attack arises when trusted hardware is combined with the idea of a <a class="reference external" href="https://en.wikipedia.org/wiki/Decentralized_autonomous_organization">DAO</a>, spawning a trustless organization whose goal centers on manipulating cryptocurrency votes.</p>
<div class="figure">
<img alt="Overview of example architecture for a simple SGX-based Dark DAO" class="no-border" src="https://hackingdistributed.com/images/2017-bitcoin/vv2.png" style="width: 500px;" />
<p class="caption">One example of a basic Dark DAO.</p>
</div>
<p>The figure above outlines one possible architecture. Vote buyers would support the DAO by running a network of SGX enclaves that themselves execute a consensus protocol (shown here as a dark cloud to indicate its invisibility from outside). Users would communicate with this enclave network, and supply proof that they are running a “vote buying” wallet (e.g. an Ethereum wallet) with a current balance of X coins. This “evil wallet” attests to running the attack code a vote buyer is paying for, and the vote buyer attests that they are running code guaranteed to pay the user at the end of the attack (likely in combination with a smart contract-based protocol that cryptoeconomically enforces liveness and honesty).</p>
<p>The vote buyers can keep track of how many total funds are pledged to vote through the system, hiding this fact from the outside world using privacy features built into SGX. Users can receive provable payouts for participating in such a system, achieving a property similar to all-or-nothing settlement in <a class="reference external" href="https://eprint.iacr.org/2017/1153.pdf">SGX-based decentralized exchanges</a>. Vote buyers can get a provable guarantee that clients will never issue votes that contradict their desired voting policy.</p>
<p>What makes such an organization <em>dark</em> is that the vote buyers need not reveal how many users are participating in the system to <em>anybody</em> (even potentially themselves). The system could simply accumulate users, paying users for running the attacker’s custom wallet software, until some threshold (of e.g. coins held by such software) is reached that activates an attack; in this manner, failed attempts need not be detectable. More damagingly, the individual incentives of any small users clearly point towards joining the system. If small users believe their vote doesn’t matter, they are likely to take the payoff with no perceived marginal downside. This is especially the case in on-chain votes, which are <a class="reference external" href="https://www.reddit.com/r/eos/comments/8r3xen/eos_mainnet_is_live_169_voter_turnout_congrats/">empirically observed to have extremely low turnout</a>. Users that don’t vote may be ideal targets for selling their votes.</p>
<p>Dark DAO operators can further muddy the waters by launching attacks on choices the vote buyers actually <em>oppose</em> as potential false flag operations or smear campaigns; for example, Bob could run a Dark DAO working in Alice’s favor to <a class="reference external" href="https://twitter.com/realdonaldtrump/status/787699930718695425">delegitimize the outcome of an election</a> Bob believes he is likely to lose. The activation threshold, payout schedule, full attack strategy, number of users in the system, total amount of money pledged to the system, and more can be kept private or revealed either selectively or globally, making such DAOs ultimately tunable for structured incentive changes.</p>
<p>Because the organization exists off-chain, no <a class="reference external" href="https://www.ethnews.com/arbitration-forum-orders-more-eos-accounts-frozen-but-apparent-hoax-muddies-waters">cartel of block producers</a> or other system participants can detect, censor, or stop the attack.</p>
<p>Such a dark organization has several immediate practical drawbacks. The primary one is that for use on Intel SGX, a license would need to be granted by Intel, an unlikely event for malicious software. Furthermore, side-channel attacks, hidden software backdoors, or platform attacks on Intel's SGX, as well as audits of the Dark DAO wallet, could weaken the scheme, though as trusted hardware continues to advance and develop, it is highly likely the cost of such attacks will increase substantially. Eventually, we expect other trusted hardware to provide the remote attestation capabilities of Intel SGX, meaning that SGX will not be required for such an attack; this is why we use “SGX” interchangeably with “trusted hardware”. For example, remote attestation is achievable <a class="reference external" href="http://profsandhu.com/zhang/pub/trust10-android.pdf">on some Android secure processors</a>. Our schemes work on any hardware device allowing for confidential data and remote attestation.</p>
</div>
</div>
<div class="section" id="attacks-on-classic-schemes-carbonvote-eip999">
<h2>Attacks on Classic Schemes: CarbonVote & EIP999</h2>
<p>To prove the efficacy of these vote buying strategies, we first look at governance-critical coinvotes performed in existing cryptocurrency systems. Perhaps the most important such vote was the <a class="reference external" href="http://v1.CarbonVote.com/">DAO CarbonVote</a>. The operation of this vote was simple: accounts sent money to an address to vote yes, and another to vote no. Each address was a contract that logged the vote of a given address. The CarbonVote frontend then tallied the votes, and showed the net balances of all accounts that had voted yes and/or no. Later votes superseded earlier ones, allowing users to change their minds. At the end of the vote, a snapshot was taken of support and used to gauge community sentiment. This voting style is being reused for other controversial ecosystem issues, including <a class="reference external" href="http://CarbonVote.com/">EIP-186</a>.</p>
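<p>The CarbonVote tallying rule described above (the latest vote per address wins, weighted by that address’s balance at the snapshot) can be sketched in a few lines of Python. This is an illustrative model, not the actual frontend code; the addresses and balances are made up:</p>

```python
# Hypothetical event log: (address, choice) in chronological order, as emitted
# by the YES/NO contracts; balances are taken at the final snapshot.
events = [
    ("0xa1", "yes"),
    ("0xb2", "no"),
    ("0xa1", "no"),    # later vote supersedes the earlier one
    ("0xc3", "yes"),
]
snapshot_balances = {"0xa1": 50.0, "0xb2": 30.0, "0xc3": 20.0}

def tally(events, balances):
    latest = {}                       # address -> last recorded choice
    for addr, choice in events:
        latest[addr] = choice
    totals = {"yes": 0.0, "no": 0.0}
    for addr, choice in latest.items():
        totals[choice] += balances.get(addr, 0.0)
    return totals

print(tally(events, snapshot_balances))   # {'yes': 20.0, 'no': 80.0}
```

Note that the tally depends only on the final snapshot, which is exactly what makes "pay only if your vote is in the snapshot" bribery contracts easy to write.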
<p>One possible trust-minimizing vote buying smart contract in this framework involves the use of escrow; users send Ether to an ERC20 token contract that holds the Ether until the end of the vote. For each Ether they deposit, users receive 1 VOTECOIN.</p>
<p>The contract is pre-programmed to vote yes at the end of the vote with 100% of the user Ether held. After the vote ends, each VOTECOIN token becomes fully refundable for the original Ether that created it. Users get back their original Ether, plus any bribes that vote buyers wish to pay them for this service.</p>
<p>We have implemented <a class="reference external" href="https://gitlab.com/relyt29/votebuying-CarbonVote">a full, open-source proof of concept</a> of such a contract, enabling any vote buyers to contribute funds to the contract’s BRIBEPOOL. Users can be paid out from BRIBEPOOL by temporarily locking their Ether in the contract, and can reclaim 100% of their Ether at the end of the target vote. An attack can pay vote sellers out of BRIBEPOOL upfront (once they lock the coins, the votes are guaranteed), as dividends over time, or both.</p>
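<p>As an illustration of the escrow mechanics (this is a toy Python model, not the linked Solidity proof of concept; all names and amounts are invented), the deposit/vote/redeem lifecycle might look like:</p>

```python
class VoteEscrow:
    """Toy model of the VOTECOIN escrow: 1 ETH locked mints 1 VOTECOIN,
    the contract votes 'yes' with all locked ETH, and deposits become
    refundable (plus a pro-rata share of BRIBEPOOL) once the vote ends."""

    def __init__(self):
        self.votecoin = {}        # holder -> VOTECOIN balance
        self.bribe_pool = 0.0     # funded by vote buyers
        self.locked_eth = 0.0
        self.vote_ended = False

    def fund_bribe_pool(self, amount):
        self.bribe_pool += amount

    def deposit(self, user, eth):
        assert not self.vote_ended, "deposits only before the vote ends"
        self.locked_eth += eth
        self.votecoin[user] = self.votecoin.get(user, 0.0) + eth

    def cast_vote(self):
        # the contract votes YES with 100% of the locked ETH
        return ("yes", self.locked_eth)

    def redeem(self, user):
        assert self.vote_ended, "refunds only after the vote ends"
        share = self.votecoin.pop(user, 0.0)
        bribe = self.bribe_pool * share / self.locked_eth  # pro-rata bribe
        return share + bribe      # original ETH plus bribe payout

escrow = VoteEscrow()
escrow.fund_bribe_pool(10.0)      # vote buyer seeds BRIBEPOOL
escrow.deposit("alice", 40.0)
escrow.deposit("bob", 60.0)
assert escrow.cast_vote() == ("yes", 100.0)
escrow.vote_ended = True
assert escrow.redeem("alice") == 40.0 + 4.0   # 40 ETH back + 40% of the pool
```

The key property the sketch captures is that locking is costless for a voter indifferent to the outcome: the principal is always returned, and the bribe is pure upside.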
<div class="figure">
<img alt="Code of the vote buying Ethereum smart contract for the DAO Carbonvote" class="no-border" src="https://hackingdistributed.com/images/2017-bitcoin/vv3.png" style="width: 500px;" />
<p class="caption">Code of the vote buying Ethereum smart contract for the DAO Carbonvote</p>
</div>
<p>Users can also sell their VOTECOIN after locking up their Ether, essentially making VOTECOIN a tokenized vote buying derivative. Vote sellers can then instantly unload their exposure to any risks introduced by funds lockup to parties that are indifferent to the vote’s outcome: because each ERC20 token is programmatically guaranteed to eventually receive all original ETH, this essentially creates a one-way-only funnel from the base asset into a derivative asset dedicated to voting a predefined way. Buyers who are uninterested in the vote's outcome should always lock their ETH if guaranteed a non-negative payoff, and essentially have an option to later unload onto other similarly uninterested buyers. If dividends from BRIBEPOOL are paid over time to VOTECOIN in addition to upfront, these derivative tokens can even be used to speculate on the success of the attack itself.</p>
<p>This smart contract can be simplified with the use of oracles such as <a class="reference external" href="http://www.town-crier.org/">Town Crier</a> (multiple oracles, prediction markets, etc. can be combined as well). Because the CarbonVote system publishes results including full voter logs <a class="reference external" href="https://etherscan.io/address/0x71caff5fd6facbaf1863426ac05b3703636a9bb9#events">on Etherscan</a>, it is relatively trivial to check which way someone has voted using any external web scraping oracles, paying them if their vote included in the final snapshot agreed with the buyers’ preference.</p>
<p>A Dark DAO-like model can also trivially be used. Each user simply runs a wallet that, some time after each transfer transaction, also votes the desired way on the CarbonVote (in fact this may become standard behavior for many wallets). The user is only paid if such votes are registered, so the user is incentivized to make sure this vote transaction is included on-chain. There is no way for the network to tell how many votes in a given CarbonVote are generated by such a vote buying cartel, and how many are legitimate.</p>
<p>Inherent in any of these schemes is the ability to minimize trust when pooling assets across multiple vote buyers; bribery smart contracts could simply allow anyone to pay into the BRIBEPOOL, and SGX networks can be architected similarly for open participation.</p>
<p>Some schemes, such as the <a class="reference external" href="https://www.etherchain.org/coinvote/poll/35/">EIP999 vote</a>, have even more severe problems. In these schemes, if a user votes twice, the later of such votes is chosen. A simple and severe attack is then to simply collect signatures on both “yes” and “no” votes from a user, spamming the chosen signature towards the end of the election period and relying on an ability to overwhelm the blockchain to ensure that most such votes persist. Alternatively, because contract deployers are able to vote for all the funds in a given contract, another attack is to simply force a user to use a contract-based wallet for the duration of the vote that is deployed by the vote buyer, who can then control the votes of all funds locked in contracts arbitrarily without custody of these funds.</p>
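<p>The “later vote wins” weakness can be illustrated with a toy Python tally (hypothetical voters and timestamps): a buyer holding pre-signed votes for both outcomes simply broadcasts the preferred one near the deadline, overriding each seller’s earlier honest vote:</p>

```python
# Toy model of a "latest vote wins" tally (as in the EIP999 poll) under a
# signature-replay attack: the buyer holds both a signed YES and a signed NO
# from each seller and broadcasts the preferred one at the end of the period.
def tally_latest_wins(votes):
    latest = {}
    for timestamp, voter, choice in sorted(votes):
        latest[voter] = choice                # later timestamp overwrites earlier
    return latest

honest_votes = [(1, "alice", "no"), (2, "bob", "no")]
# The vote buyer replays the pre-signed YES votes just before the deadline,
# relying on spam/congestion to keep them as each voter's final word.
attack_spam = [(99, "alice", "yes"), (99, "bob", "yes")]

print(tally_latest_wins(honest_votes + attack_spam))
# {'alice': 'yes', 'bob': 'yes'}
```

Because the replayed messages carry valid signatures, the tally cannot distinguish them from a genuine change of heart.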
<p>Bitcoin is not immune to this problem either. Bitcoin’s community often <a class="reference external" href="https://news.ycombinator.com/item?id=11228875">leans on coin-votes</a>, and similar vote buying schemes can be applied (as either Ethereum smart contracts as in <a class="reference external" href="https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final14.pdf">this work</a>, or in Dark DAO-style; Bitcoin itself does not provide native support for sufficiently rich contracts to buy votes).</p>
</div>
<div class="section" id="beyond-voting-attacking-consensus">
<h2>Beyond Voting - Attacking Consensus</h2>
<p>Astute readers may point out that all permissionless blockchains inherently rely on some form of permissionless voting, namely the consensus algorithm itself. Every time a blockchain comes to <a class="reference external" href="https://medium.com/@chrshmmmr/consensus-in-blockchain-systems-in-short-691fc7d1fefe">global consensus on some attributes of state</a>, what is taking place is essentially a permissionless (often coin or PoW-weighted) vote in a permissionless setting.</p>
<p>It is perhaps no surprise that “vote buying” has seen some exploration in these contexts. For example, smart contracts on Ethereum <a class="reference external" href="https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final14.pdf">can be used</a> to attack Ethereum and other blockchains through censorship, history revision, or incentivizing empty blocks. Such attacks work directly on the proof-of-work vote itself, bribing miners according to their weighted work. There is little reason to believe that proof of stake systems would be immune to similar attacks, especially in the presence of <a class="reference external" href="http://docs.bitshares.org/bitshares/dpos.html">complex delegated voting structures</a> whose incentives may be unclear and whose formal analysis may be incomplete or nonexistent.</p>
<p>A disturbing concept related to our exploration of Dark DAOs for vote buying is what we term the “Fishy DAO”, named after the <a class="reference external" href="http://fishy-flash-game.com/">classic flash game</a>. In this (super fun!) game, you start out as a small fish. The rules are simple; you can eat smaller competitor fish, but not fish the same size as or larger than you. You get a little bit bigger after each meal, until you eventually (if you are lucky) grow to dominate the ocean. A modern equivalent that doesn’t require Flash and adds networking is <a class="reference external" href="http://agar.io/#ffa">agar.io</a>.</p>
<div class="figure">
<img alt="Picture of the Flash game “Fishy”" class="no-border" src="https://hackingdistributed.com/images/2017-bitcoin/vv4.jpg" style="width: 500px;" />
<p class="caption">It’s like Fishy, but the small fish can gang up on the bigger ones too!</p>
</div>
<p>A Fishy DAO would use Dark DAO-like technology as described above to do the same for blockchains. Using SGX, Fishy DAO members can receive non-transferable notifications (DAO members can verify message authenticity, but non-members cannot tell if a message is forged) when an attack threshold is reached, allowing them to short currency markets shortly before such an attack. Each blockchain Fishy DAO attack brings some profit to Fishy DAO, and the ensuing publicity of even failed attacks gives Fishy DAO notoriety among the profit-seeking but perhaps unethical (in some frameworks) crowd. If Fishy DAO fails to achieve required thresholds, Fishy DAO simply fades away and refunds its participants, potentially but not necessarily burning some amount of their money to incentivize them to recruit participation.</p>
<p>Fishy DAO requires Dark DAO technology, as if performed in the open with a smart contract, observable participation rates would provide market signals to the underlying blockchain’s price, rendering the attack unprofitable by allowing risk to be priced in. It is the cryptographically enforceable information asymmetry between DAO members and wider ecosystem participants that makes such an attack feasible.</p>
</div>
<div class="section" id="other-applications">
<h2>Other Applications</h2>
<p>Note that Dark DAOs have implications <em>far</em> beyond the above. Consider for example a Dark DAO that aimed to profitably buy users’ basic income identities, paying up front at a small fee to receive a user’s regular basic income payments. Or a Dark DAO for getting through credit checks secured on key-based identities by leasing (with trust minimized limitations) such keys from users with good credit. Or a Dark DAO that runs an evil mining pool, provably attacking an ASIC-based proof of work cryptocurrency with an unstoppable attack pool of potentially undetectable size.</p>
<p>One can also imagine that with identity, there may be social safeguards against buying behavior in the identity system itself. For example, some identity systems may allow a user to show up in person to revoke or manage identities, which could socially circumvent the automated technical restrictions a buyer places on a purchased identity. There are still ways around this: the classic solution in loans is through collateral. Potentially a "bondsman"-like business could also provide social guarantees of repayment through physical/legal intimidation and contracts for users who cannot afford collateral. Payday loan and bail bond establishments would be ideally suited for that kind of business if such a permissionless basic income system were ever deployed alongside current market systems, at least in the US (in many other places there are likely even less savory institutions that could be willing to step in for an appropriate cut).</p>
<p>The coordination space of mechanisms in blockchains is large, and the environment hostile. All voting or financially incentivized identity-based schemes should be very careful to consider the implications of the underlying permissionless model on long-term viability, scalability, and security.</p>
</div>
<div class="section" id="core-insights">
<h2>Core Insights</h2>
<p>Maybe you are an academic skimming this article, or maybe an interested user wondering exactly what this all means. There are a few interesting and very surprising (in the research literature) insights to be gleaned from our thought experiments above:</p>
<ol class="arabic simple">
<li><strong>Permissionless e-voting <em>requires</em> trusted hardware</strong>. Perhaps the most surprising result is this one. In any model where users are able to generate their own keys (required for the "permissionless" model), low coordination bribery attacks are inherently possible using trusted hardware as described above. The only defense against this is more trusted hardware: to know a user has access to their own key material (and therefore cannot be coerced or bribed), some assurance is required that the user has seen their key. Trusted hardware can do this through either a trusted hardware token setup channel (similar to governments using electronic votes for democracy), or through an SGX-based system that guarantees that any voters have had their key material revealed to whatever operating system they are running. This inherently implements the kind of trusted setup/generation assumptions <a class="reference external" href="https://www.cs.cornell.edu/~clarkson/papers/clarkson_duvote.pdf">academic e-voting schemes have been using for years</a>. Clearly, in the presence of trusted hardware, such assumptions are <em>required</em> for any vote, and votes can be provably bought/sold/bribed/coerced with low friction in the absence of this assumption, a surprising result with severe implications for on-chain voting.</li>
<li><strong>The space of voting and coordination mechanisms is massive and extremely poorly understood</strong>. As explored through concrete examples of how to handle e.g. smart contract voting and vote changes on Ethereum, it is clear that a wide range of design decisions fundamentally alters the incentive structures of voting mechanisms (we explore these in Appendix A below). These mechanisms are extremely complex, and can have their incentive structures altered by other coordination mechanisms like smart contracts and trusted hardware-based DAOs. The properties of these mechanisms, especially when multiple such mechanisms interact or are actively attacked by resourced actors, are extremely poorly understood. No mechanism of this kind should be used for direct on-chain decision making any time soon.</li>
<li><strong>The same class of vote buying attacks works for any identity system</strong>. These attacks are not only for votes. Imagine an identity system which gives users the right to a basic income, paid weekly. I can simply pay you cash up front to buy your identity, and therefore your share of income, for the next year, and indeed should do so if my time value of money is lower than yours (as wealth asymmetries often imply). This is the case for any system involving identity: with relatively low trust, the behavior of user identities can be constrained, and such constraints can be bought and sold on the open market. This has severe and fundamental impact on the robustness of any on-chain economic mechanism with a permissionless identity component.</li>
<li><strong>On-chain voting fundamentally degrades to plutocracy</strong>. Voting and democracy fundamentally relies on secret ballot assumptions and identity infrastructure that exists only in meatspace. These assumptions do not carry over to blockchains, making the same techniques fundamentally broken in a permissionless model. External, even trusted, identity systems again do not address the issue as long as users can generate their own keys (see above).</li>
<li><strong>Hard fork-based governance provides users the only exit from such plutocracy</strong>. A natural question to ask given the above is whether we've already arrived at plutocracy. The answer is "probably not". There is some evidence that the <a class="reference external" href="https://pdaian.com/blog/stop-worrying-love-etc/">ad-hoc, informal, fork-based governance models that govern blockchains like Bitcoin and Ethereum actually provide robust user rights protection</a>. In this model, any upgrades must offer the user an active choice, and groups of users can choose to opt out if disagreeing with rule changes. On-chain voting, on the other hand, creates a natural default that, especially when combined with inattentive or uncaring users, can create strong anti-fork inertia around staying with the coinvote.</li>
<li><strong>Multiple blockchains interacting can break the incentive compatibility of all chains</strong>. Importantly and critically, the Fishy DAO style attack we've explored shows that the existence of multiple competing blockchains can fundamentally affect the internal equilibrium of all such chains. For example, in a world with only one smart contract system, Ethereum, internal incentives may lead to stable equilibria. With two players, and the underdog incentivized to launch a bribery attack to destroy its competitor, such equilibria can be disrupted, changed, and destroyed. A critical and surprisingly underexplored open area of research is modelling the macroeconomics of competition between blockchains, gaining insight into how exactly such internal equilibria can fail. We find it intuitively ~certain that critical black swan events are currently lurking in the complexity of blockchain governance and interoperability.</li>
</ol>
<p>Obviously, these all require further exploration, tweaking, and proof. But we think we have at least provided some intuition for why we believe the above to hold in a principled analysis framework.</p>
</div>
<div class="section" id="conclusion">
<h2>Conclusion</h2>
<p>The trend of on-chain voting in blockchain is inspired by the long human tradition of voting and democracy. Unfortunately, safeguards available to us in the real world, such as enforced private/deniable voting, approximate identity controls, and attributability of widespread fraud are simply not available in the permissionless model. When public keys generated by the users themselves are used, on-chain voting cannot provide these users with any anti-coercion guarantees. Elaborate voting schemes do little to quell (and in many cases indeed aggravate) the problem. On-chain voting schemes further complicate incentives, creating an unstable and tangled mess of incentives that can at any time be altered by trustless smart contract or Dark DAO-style vote buying, bribery, and griefing schemes.</p>
<p>We encourage the community to be highly skeptical of the outcome of any on-chain vote, especially as on-chain voting becomes an increasingly important staple of decision making in blockchain systems. The space for designing mechanisms that enable new forms of abuse with lower-than-ever coordination costs supports the position that votes should be used as signals, not decisions, and that a wide variety of voting mechanisms should fill such roles. Without such safeguards, it remains possible that all on-chain voting systems degenerate into plutocracy through direct vote and participation buying and even vote tokenization.</p>
<p>Such attacks have substantial implications for the future security of all blockchain-based voting systems.</p>
</div>
<div class="section" id="acknowledgements">
<h2>Acknowledgements</h2>
<p>We’d like to thank <a class="reference external" href="https://twitter.com/paddyucl">Patrick McCorry</a> for his helpful, thorough feedback throughout the lifecycle of this post, and pioneering work in vote buying and on-chain voting systems.</p>
<p>We also thank <a class="reference external" href="https://twitter.com/OmerShlomovits">Omer Shlomovits</a> and <a class="reference external" href="https://twitter.com/istvan_a_seres">István András Seres</a> for their helpful comments on <a class="reference external" href="http://eepurl.com/dyf3WD">early access versions</a> of this post.</p>
</div>
<div class="section" id="appendix-a-on-chain-vote-differentiators">
<h2>Appendix A - On-chain Vote Differentiators</h2>
<p>We notice several distinguishing factors in on-chain voting systems:</p>
<ol class="arabic simple">
<li><strong>Vote-changing ability:</strong> If users cannot change their vote, trivial vote buying is possible with any method that provides a cryptographically checkable receipt. A smart contract can simply bribe users up-front for their vote, which can now never be changed. Most schemes, however, allow users to change or withdraw their votes, meaning bribery needs some continuous time component (or to be done after a snapshot of the vote is taken). Exponentially increasing payouts over time provide an interesting solution that discourages coin movement and encourages long-term signaling, and payout bonuses at vote completion are tools potential vote-buyers can use to create viable vote buying schemes when users <em>are</em> allowed to change votes.</li>
<li><strong>Smart contract/delegated voting:</strong> Who gets to vote for funds stored by smart contracts? This is an open question that plagues existing designs; the original CarbonVote allows any contract that can call a function to vote and later change its mind. The EIP999 vote <a class="reference external" href="https://twitter.com/edmundedgar/status/987300021845442560">allows contract deployers</a> to vote on behalf of contracts, a decision widely criticized as being <a class="reference external" href="https://www.reddit.com/r/ethereum/comments/8e33u9/psa_most_of_the_yes_votes_on_eip999_are_from/">intended to sway vote outcomes</a>. However, neither design seems ideal. Indeed, it seems intuitively difficult for a single design to capture all the custody nuances in smart contracts fairly: funds-holding smart contracts can range from simple multisignature accounts to complex decentralized organizations with their own revenue streams and inter-contract financial relationships. Which of these coins have voting rights, and how to fairly assign these rights remains an entirely unexplored philosophical requirement for building a fair on-chain voting system. Forcing contract authors to provide explicit functionality is likely also insufficient, as the very requirements of this functionality can in the future change without backwards compatibility (through either chain voting or forks).</li>
<li><strong>Deniability/provability:</strong> All of the schemes explored in this article have features which make them particularly amenable to vote buying: they provide the voter with some form of trust-minimizing cryptographic proof of their vote, either through an on-chain log, a secured web interface, or a smart contract’s state. Such schemes are particularly vulnerable to vote buying, as they make it easy for smart contract-style logic to validate votes. Some <a class="reference external" href="https://ieeexplore.ieee.org/abstract/document/4561221/">traditional e-voting schemes</a> in academic literature provide a property known as coercion resistance. In these schemes, a user <a class="reference external" href="https://eprint.iacr.org/2002/165.pdf">is able to change their mind post-coercion</a> using the key they use for voting, and votes are not attributable to individual users. <strong>In general, the privacy concerns of having votes associated with any kind of long-standing identity, especially those holding coins, are severe.</strong> Such concerns would be completely disqualifying for any serious voting systems in the real world, and probably should be disqualifying in all thoughtful on-chain voting design criteria.</li>
</ol>
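<p>As a small illustration of the exponentially increasing payout schedule mentioned in point 1 (a sketch with made-up parameters, not a proposal for concrete values), holding a vote to the end strictly dominates defecting early:</p>

```python
# Sketch of an exponentially increasing payout schedule: a seller who keeps
# their vote locked for all T periods earns far more than one who defects
# early, discouraging coin movement and vote changes.
def accrued_payout(base, growth, periods_held):
    """Total bribe accrued after holding the vote for `periods_held` periods,
    with per-period payments of base * growth**t."""
    return sum(base * growth**t for t in range(periods_held))

full_term = accrued_payout(1.0, 2.0, 5)   # held to the end: 1+2+4+8+16 = 31.0
early_exit = accrued_payout(1.0, 2.0, 2)  # defected early:  1+2        = 3.0
assert full_term == 31.0 and early_exit == 3.0
```

The back-loading is the point: most of the payout is forfeited by anyone who changes their vote or moves their coins before the snapshot.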
</div>
https://hackingdistributed.com/2018/07/02/on-chain-vote-buying/
Choose-Your-Own-Security-Disclosure-Adventure
2018-05-30T10:15:00Z
Emin Gün Sirer
<p>I have been thinking about bugs, responsible disclosure, crowd behaviors and ethical responsibilities. This post is a choose-your-own-adventure game for security researchers. I want to share some simple yet common scenarios with you, and outline our various options as responsible computer scientists.</p>
<p>My main point will be that there are no good options: no matter what you do in the choose-your-own-adventure game, <em>everyone</em> dies. The trolley, the fellow at the switch, the folks on the line, and the folks on the other line as well. And they die because we are, collectively, acting like morons. You and I and everyone else are complicit. This post is an attempt to ask that the crowd learn to demand the right kind of assurance from software, and the right kind of behavior from all the parties involved, because we can do better.</p>
<div class="figure align-right">
<img alt="When you were having pre-marital sex, he mastered the blockchain." class="no-border" src="https://hackingdistributed.com/images/2017-bitcoin/the-blade.jpg" style="width: 200px;" />
<p class="caption">But what if he really did?</p>
</div>
<p>BTW, as you read the scenarios outlined below, I'm sure you'll be convinced that I'm criticizing some specific coin or project, except no two readers will agree on which coins.
This post is not about <a class="reference external" href="https://www.bloomberg.com/features/2017-the-ether-thief/">The DAO</a> or the-coin-which-cannot-be-named-or-else-they-conjure-a-butthurt-online-brigade and also no-not-<em>that</em>-one-the-<em>other</em>-one or even oh-my-god-they-all-do-that. It's about all of us. The entire set of scenarios is synthetic -- an amalgamation of Sorry-For-Your-Loss (SFYL) events I've seen play out in cryptocurrencies over the years. No need to make it personal; it already is.</p>
<div class="section" id="the-setup">
<h2>The Setup</h2>
<p>Imagine that there is a software project out there, developed by some group that is not you. Let's assume that they have found a way to monetize this process.
Maybe they are offering consulting services, maybe they are profiting directly by building a cryptocurrency and selling some tokens, maybe they are directing
users to side services that they operate to finance this process. Somehow, they are making money off of this thing.</p>
<p>You, of course, are not making money off of their thing. You have no connection to them, no legal obligations. This post is about your ethical obligations, the things you have to do so you can sleep well at night and occasionally visit your relatives' graves without feeling like an utter disappointment.</p>
<p>Now imagine that you know of a fundamental flaw in this project.</p>
<p>What do you do?</p>
<ol class="upperalpha simple">
<li>Keep quiet, do nothing.</li>
<li>Keep quiet and exploit the bug.</li>
<li>Speak up and say something.</li>
</ol>
<p>Before you answer, let's fill in with some realism.</p>
</div>
<div class="section" id="your-backstory">
<h2>Your Backstory</h2>
<p>Let's add some details about you, with absolutely no loss of generality. The following backstory is almost universal.</p>
<p>You didn't just happen to chance upon this flaw. No one just chances upon a flaw that has eluded the main developers who have a profit motive to ferret out the bugs.
You spent years of effort, building specialist expertise.
When the developers were throwing a pre-pre-launch party, you were combing arcane papers by Lamport.
When the dev team was out making it rain ICO-cash in the club, you were stepping through similar code with gdb.
And when they were out having "pre-marital sex," as the meme goes, you were thinking about techniques to avoid this particular flaw.</p>
<p>But let's be realistic: you don't actually know for certain that there is a flaw in this specific system right at this time. Sure, you're an expert at this problem,
sure, it certainly feels like they have it, but you have not (yet) combed through this particular system.
You have a 99% suspicion that they might have this flaw.
But there's a 1% chance that they have some other mechanism in place that makes it impossible to exercise the flaw.</p>
</div>
<div class="section" id="their-backstory">
<h2>Their Backstory</h2>
<p>Again without loss of generality, the following is almost universal.</p>
<p>The developer team was working in a hot area with a lot of competition. They could have spent an extra year or two on their project to make sure it is
well-executed. But, ya know, the space is really competitive. Time is short; besides, the Hooha Coin has a whitepaper that sounds exactly the same, with phrases
lifted directly and updated with additional buzzwords. There is no time; they need to hit the market.</p>
<p>The developer team sought advice on business development from successful role models. This really means people who happened to be at the right place at the right
time, who have now cashed out and style themselves as successful VCs. As of 2018, it was commonly accepted wisdom to "fake it 'till you make it." They literally
told young people to engage in fraud.</p>
<p>Some people on the dev team had been through business school. The main thing they learned in business school was how to network, but the second thing they learned
was how to cut corners, how to short-change people on implicit expectations and pocket the difference.
There is no law to hold you back from going to market with a possibly broken protocol as long as you yourself do not actively know, for certain, that it is definitively broken.</p>
<p>Finally, the dev team has followed best practices, as taught in school. They used Mongo but they made the port non-public <a class="footnote-reference" href="#id2" id="id1">[1]</a> . They did not invent their own crypto.
At the risk of going on a side-rant: what is it with disciplines that teach people to stay away from their discipline? Why is it that cryptographers get to tell
people to leave cryptography to others, but somehow it's OK to build your own consensus protocol, "eventually-consistent" NoSQL engine that is actually plain old
inconsistent, or your own programming language with weird "wat?" semantics?
Anyhow, the dev team did whatever it is that we teach at your average top-10 school, the end result of pointless faculty meetings and that special
out-of-touchness that comes from the lack of a mandatory retirement age for faculty in the US.</p>
<p>We can now see what happens when you speak up, or don't. In what follows, bold indicates options, NMD means "No More Decisions" down that path, The End means the book ended and everyone lost.</p>
</div>
<div class="section" id="a-you-keep-quiet">
<h2>A. You Keep Quiet</h2>
<p>You watch the system get deployed with great fanfare. Someone else finds the same flaw. [Restart, from their perspective]</p>
</div>
<div class="section" id="b-you-exploit">
<h2>B. You Exploit</h2>
<p>The weather is really nice in Thailand.
You tell yourself that <a class="reference external" href="https://twitter.com/el33th4xor/status/1001632762561007621">2 out of 3 people would take advantage of a serious exploit instead of reporting it</a> if they could get away with the proceeds. 1 out of 5 would use an exploit if it made them $1 more than the bounty.</p>
<p>The border crossings into the US to see living relatives cause a slight feeling of anxiety, but you reassure yourself with "Code is Law, bro."
You occasionally get the feeling that your dead relatives are judging you hard, but the hotel staff know this and keep you well supplied with Mai Tais.
[The End]</p>
</div>
<div class="section" id="c-you-speak-up">
<h2>C. You Speak Up</h2>
<p>You mention on Twitter that the protocol may be buggy. What happens next depends on your reach.</p>
<p><strong>You have no following</strong>. No one pays any attention to you. The bug is exploited by an address connected to other hacks laundered on BTC-e, and everyone loses money. You are an utter disappointment. Doubly so for not exploiting the hack yourself. [The End.]</p>
<p><strong>You seek help from someone with a large following</strong>. He somehow read your nicely composed letter among the dozens of kooky messages he gets every day, some involving a guy who sold his heater and wants ICO recommendations for his $150, and many involving whitepapers with the exact same bug as the one you disclosed. Now he knows what you know, and has a large following. [Restart and follow along from his perspective.]</p>
<p><strong>You have a large following</strong>. Crowds upvote your post. It makes it to the top of Hacker News. You get 100,000 viewers, approximately 10 times the scrutiny in one day that a typical, award-winning faculty member gets over his or her life's work.</p>
<p>That one guy who is upset at you because you previously pointed out <em>his</em> errors posts something snide. Crowds vote it up because they like drama, and he has numbered references that lead to nonsensical web pages and IRC logs that do not actually support whatever inanity he is spouting. Even though no one actually clicks on the links, a blue link is considered a full refutation regardless of its destination. You ignore him and all the idiots who upvoted him. [NMD]</p>
<p>The aura of negativity created by the previous guy has worked. A friend posts in the HN thread and joins ranks with him. She drops in a snide reference to something personal, like your ethnic origin. You ignore her, knowing that she will come before you in a few years seeking a job. You contemplate what you'll do then: will you remind her of this, or will you never speak of her behavior? You now have two ethical dilemmas. [NMD]</p>
<p>Ok, the developer team goes into overdrive, immediately denying that there is a flaw. They point out that you do not have an exploit. They call your post FUD, and you a shill for the perceived competition. This seems wrong, because you did not have time to short these guys or long the competition, because you were actually thinking about the exploit, while these guys seem to be spending all their time day trading coins.</p>
<p>The people calling you out for FUD are getting aggressive. While your inbox is 97% people saying "thank you for what you do," the remaining 3% is
full of vitriol and kind of annoying.</p>
<p>[Optional Side Story]
You receive an unmarked package in the mail. Not knowing if it's a bomb, you put it on a steel table, crouch, and, holding the scissors with your left hand, open it. As you do this, you curse the day you said anything in public. .... It turns out to be dessert from a fan. You are immediately relieved. It's your favorite kind, too. You gaze at it and laugh at yourself for thinking it was a bomb... You have a jolly good time for 5 minutes, after which you realize that the dessert might be poisoned. You offer it to your colleagues, wondering how much plausible deniability you have if they keel over. [NMD]</p>
<p>In response to FUD allegations, do you:</p>
<p>C1. Ignore them and spend your time on other endeavors instead of building an exploit.</p>
<p>C2. Spend the time building a full exploit.</p>
</div>
<div class="section" id="c1-you-build-no-exploit">
<h2>C1. You Build no Exploit</h2>
<p>Someone else writes the exploit for your flaw.</p>
<p><strong>He uses it to steal everyone's money</strong>. Everyone lost. You are widely criticized for making the issue public and enabling the theft. [The End]</p>
<p><strong>He does not use it to steal everyone's money</strong>. [Go to C2, from his perspective]</p>
</div>
<div class="section" id="c2-you-build-an-exploit">
<h2>C2. You Build an Exploit</h2>
<p>You already had a day job. Now you're working for people who never engaged you, paid you, or will compensate you. They will never even be grateful to you.
You're basically donating your cycles to a bunch of undeserving people.
The guy at the garage with the specialized OBD2 reader won't connect it to your car for less than $15, and here you are, in the highest-tech field,
clocking in free hours for someone who is already upset at you for speaking up.
You feel bad for being coopted into someone else's broken money making scheme.</p>
<p>What happens next depends on the quality of your exploit.</p>
<p><strong>Your exploit does not work, the bug isn't there</strong>. It was the 1% case. Some random other feature made your bug unexploitable, so you look like a fool.</p>
<p>Two years later, the developers go on to build another system, but this time omit the circumstantial feature that kept the bug from being exploited.
They lose everyone's money.
You feel vindicated, and happy even, which only exacerbates the unease you feel during visits to the family cemetery. [The End]</p>
<p><strong>The bug is present, but you realize it would take too long to develop an exploit</strong>. You have better things to do. The developers pretend that it's the 1% case.
They are lying, of course.</p>
<p>Two years later, some no-talent security researcher rediscovers the exact same flaw you pointed out. He is unemployed and has the time to develop an exploit.
He does not give you any credit.
Today, when you google keywords for your own post, his posts come up as the first links. It feels weird.
When you see him at a conference, he scampers away.
People ask you how this daft guy knows so much about subtleties in your expertise area. [Restart at C2 from his perspective]</p>
<p><strong>Your exploit works, you release it publicly</strong>. Your colleagues pull you aside and lecture you about responsible disclosure.
They tell you exactly how you should have run your life, how you have an obligation to your own following, and what they themselves would have
done had they ever been in your position, but, you see, it's hard to get out of armchairs.</p>
<p>Someone uses your exploit to steal everyone's money. The devs, who previously called on you to develop an exploit, now attack you for developing an exploit.
An FBI agent leaves a message wanting to talk to you, and you cannot tell if it's because you have interesting insights, or because you're a person of interest.
Everyone lost. [The End]</p>
<p><strong>Your exploit works, you give it to the developers</strong>. What happens next depends on whether the developers are hostile or not.</p>
<p><strong>C2a. The devs are actively hostile, deny that it works</strong>. They start peppering you with mumbo jumbo. You have no time for this.
The developers suddenly release an extensive patch, mentioning you in a byline, but also say that your exploit never actually worked.
Someone who has been watching their code repository duplicates your exploit and steals money from unpatched installations.
Everyone loses. [The End]</p>
<p><strong>C2b. The devs are passively hostile, ignore you</strong>. After a few weeks, you release your exploit.
The aftermath is exactly the same as the case where you release the exploit immediately upon
writing it. There is no universally agreed-upon waiting period, nor is a single period suitable for all bugs and all projects. No matter what you do, you will be treated
as if you revealed the exploit irresponsibly. [The End]</p>
<p><strong>C2c. The devs are nice people</strong>. They acknowledge you, they give you a bounty, and release a patch.</p>
<p>[Optional side story]
But at about the same time you notified the devs of the bug, others heard your exact idea, repackaged it using different words, and
now you have to split the bounty. Interestingly, you share the bounty with someone
who suggested a way to rewrite the C code that does not impact the produced binary at all -- the devs who were too incompetent to fix the problem are too incompetent
to tell who had a material fix and who was just making noise. [NMD]</p>
<p>You feel like you did the right thing. But you took a lot of risk -- there were many paths along the way where this could have turned out differently.</p>
<p>The bounty is a flat $10k. You figure that you put in about 30 hours just in developing the
exploit itself. $333/hour is far below your consulting rate, but it seems like a reasonable figure.
Yet, like an Uber driver who doesn't realize that the amount he's
getting paid per hour doesn't cover the upkeep of his vehicle, you do not realize that you forgot to charge for all the time and effort required to get you
to the point where you were able to recognize the error in the first place. When you factor that in, you made around $3/hour.
You wonder why there are very few older folks who do what you did.</p>
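The back-of-the-envelope math above can be checked in a few lines. Note that the two-year expertise figure below is a hypothetical assumption chosen only to reproduce the ~$3/hour claim, not a number from the story:

```python
# Bounty economics from the story above; the expertise-hours figure
# is a hypothetical assumption, not a number given in the post.
bounty = 10_000          # flat bounty, in USD
exploit_hours = 30       # time spent writing the exploit itself

naive_rate = bounty / exploit_hours
print(round(naive_rate))            # 333 USD/hour

# Now add the years spent building the expertise that made the find
# possible -- say roughly two years of near-full-time study (assumed).
expertise_hours = 2 * 1_650
true_rate = bounty / (exploit_hours + expertise_hours)
print(round(true_rate))             # 3 USD/hour
```

The Uber-driver analogy is exactly this calculation: the headline rate ignores the sunk cost of acquiring the skill in the first place.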
<p>The exploit would have cost tens to hundreds of millions. You wonder if it would have made more sense to just use the exploit.</p>
<p>Meanwhile, someone else finds a different bug in the same system, and just uses it to steal money.
The dev team is perplexed. They know <em>they</em> are nice people, unlike all those other dev teams who deny the existence of bugs and run PR campaigns against
the people who call out problems. They know <em>they</em> treated you nicely. They have a Slack channel full of people, unlike everyone else who runs a Slack
channel of shills and mobbing agents. They can't understand why more people don't come out and report more bugs.</p>
</div>
<div class="section" id="moral-of-the-story">
<h2>Moral of the Story</h2>
<p>The security flaw reporting game is completely broken.
Empirically, we can see that we have an undeniable problem.
People do not come forward, and if we think critically, there is absolutely
no reason for them to come forward. There have been so many bad actors
that the few people who step up ought to be sainted.</p>
<p>The root cause of the problem is that almost every project I know, some worth
many many billions, is being run as if it's Billy Bob's Software Shack and
Associated Slack Channel of Speculators and Online Mob.</p>
<p>Bounty programs in cryptocurrencies are identical in nature and amount to
those in all other software. This is insane.
One kind of software is meant to be something like
a banking system that holds billions; the other runs on a machine in someone's den that holds Aunt Beth's pictures.</p>
<p>Many large projects do not even have a Chief Security Officer, whose job
is to assess the severity of the flaws independently from the development team.
That decoupling is essential, not only because it brings a different pair of
eyes to the problem, but also because it decouples the egos from the software
artifact and makes the problem scenarios less likely to play out.</p>
<p>Personally, I will soon actively divest from coins that lack a dedicated
person in charge of security. I suggest that you all do the same. The current
situation is laughable, and the scrutiny it brings to cryptocurrencies is bad
for all.</p>
<p>And there are other things we can do collectively:
(1) Actively avoid projects that demand exploits before acknowledging problems, or otherwise create friction.
(2) Do not fan the flames of online drama involving security flaws; everyone makes errors, so let people acknowledge and patch their code in peace.
(3) Come up with better incentives and payout schedules for bounties, and at least use a sliding scale based on severity.</p>
<p>It's time to start demanding better practices from multi-billion-dollar
software projects, and time to start behaving better as a crowd.</p>
<p>[The End.]</p>
<table class="docutils footnote" frame="void" id="id2" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[1]</a></td><td>;-)</td></tr>
</tbody>
</table>
</div>
https://hackingdistributed.com/2018/05/30/choose-your-own-security-disclosure-adventure/