The Median EB “Attack” Against BU Explained

Opponents of Emergent Consensus have proposed an “attack” against the Bitcoin network that I would like to address. The attack is as follows:

Let’s say 45% sets EB to 2MB, 30% to 4MB, and 25% to 8MB. Now a malicious miner with 3% of the hash power (was signaling 8MB) mines a 3MB block. Assuming default AD 45% of the network fails to stop the block from inclusion and orphans 4 blocks. Now their “sticky gates” are hit and therefore completely open for 144 blocks… Now the miner mines a 6MB block, which is immediately accepted by the 25% signaling for 8, as well as the 45% who no longer have a vote since their gates were hit. After 4 blocks it is integrated again… Now again the attacker has created another opportunity as a full 75% of the network hash rate will have their gates wide open. Now this attacker mines a [32MB block] full of the most computationally intensive transactions and hashes he can manage. Let’s be nice and say it takes a [couple of hours] to validate and download the block. The entire network grinds to a halt as it is automatically accepted by a 75% majority thanks to a single miner with just enough hash power to reasonably expect 2 or 3+ blocks out of every 144. I want to hear a solid technical argument as to why this wont happen.

source

Is this an “Attack”?

Where did the extra 35MB of transactions come from? There are two possibilities. In “Scenario A” the transactions are real and already exist in the network. In “Scenario B”, the transactions are unpropagated and created by the malicious miner for the sole purpose of creating a large block (in the rest of this article, I’ll refer to these transactions and blocks as “spam”, since they really are the only transactions that can avoid paying market rate).

The 32MB block is the most interesting, so I’ll focus on that.

In scenario A, the transactions have already propagated, so the block is transferred via Xthin technology, which achieves roughly a 24 times average compression ratio. So about 1.3MB of data is actually transferred, just a bit above the block size today. The transactions were already validated on entry to the mempool, so the block is quickly validated, irrespective of how complex the transactions are.
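The wire-size arithmetic above is easy to check. This is a minimal sketch assuming the ~24x average Xthin compression ratio cited in the paragraph:

```python
# Back-of-the-envelope check of the Xthin transfer size.
# Assumption: ~24x average compression, as cited above.
block_size_mb = 32.0       # the attacker's large block
xthin_compression = 24.0   # average Xthin compression ratio (assumed)

wire_size_mb = block_size_mb / xthin_compression
print(f"bytes actually transferred: {wire_size_mb:.2f} MB")  # ~1.33 MB
```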

In scenario B, transactions have not propagated. This dramatically increases the likelihood of this block being orphaned, and increases the chance of single (coinbase) transaction blocks being created. This occurs because miners cannot build meaningful blocks on top of a block that has not been received and validated, so they either build an empty block, or an orphan. In the first case, the attacker is penalized for producing a large block. But what’s most important is that in BOTH cases, the network’s own physical limitations feed back to reduce the average block size.
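The orphan-risk claim in scenario B can be illustrated with a standard simplification: if block discovery is a Poisson process with a 600-second mean, a block that takes d seconds to propagate and validate is orphan-raced with probability roughly 1 − e^(−d/600). This model and its parameters are my assumption, not from the article, but they show why a block that takes hours to validate is self-defeating:

```python
import math

BLOCK_INTERVAL = 600.0  # average seconds between blocks

def orphan_risk(delay_seconds: float) -> float:
    """Chance a competing block appears while ours is still being
    propagated and validated, under a Poisson block-arrival model
    (a simplifying assumption)."""
    return 1.0 - math.exp(-delay_seconds / BLOCK_INTERVAL)

# 2 s: a well-propagated Xthin block; 7200 s: the "couple of hours"
# validation time posited for the attack block.
for delay in (2, 60, 600, 7200):
    print(f"{delay:>5} s delay -> orphan risk {orphan_risk(delay):.4f}")
```

A block needing two hours of validation is orphaned almost surely, which is the feedback loop the paragraph describes.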

The actual attack may vary smoothly between A and B based on how many transactions are pre-propagated. This simply affects the likelihood that the block is orphaned versus accepted.

The Effect on Miners and Full Nodes

Since there are on average 144 blocks per 24-hour period, the extra 35MB amounts to about 583 bytes/second of network and disk traffic (35MB spread over 100 blocks at roughly 600 seconds each, i.e. 35MB / 60,000 seconds). This is a tiny quantity in a world that denotes network and disk bandwidth in megabytes or gigabytes per second. Even if you are relaying the data to many other nodes, this increase is not significant. But if the increase, multiplied by the number of peers you relay to, does happen to cross your node’s bandwidth limit, then your node will simply relay to fewer nodes. A more capable node will fill in, and network-wide propagation will be a bit slower. P2P networks degrade gracefully.
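The sustained-rate figure follows directly from the numbers in the paragraph:

```python
# The sustained-rate arithmetic from the paragraph above.
extra_bytes = 35 * 1_000_000     # extra data attributed to the attack
blocks = 100                     # averaging window used in the article
secs_per_block = 86_400 / 144    # 600 s: average block interval

rate = extra_bytes / (blocks * secs_per_block)
print(f"{rate:.0f} bytes/second")  # ~583 bytes/second
```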

But actually in scenario A, this “extra” bandwidth has already been sent to miners — the transactions sent to the Bitcoin network are only loosely correlated with block size via the mechanism of onerous fees that discourage Bitcoin use. So in scenario A the additional bandwidth is 1.3 MB (the Xthin size of a 32MB block) over a 100 block period.

In scenario B, if this additional bandwidth is a problem (it’s not in this exact scenario, but imagine replacing the 32MB block with a 500MB one, although in the current code base a message that size would be rejected), it increases the likelihood of the block being orphaned, which reduces the bandwidth and penalizes the attacker. Additionally, the attacker is relying on the nodes that were signaling for the smallest blocks to accept and relay his medium and very large blocks. But if you trust their signaling, these are the very nodes that will handle such blocks most slowly. In other words, the scenario as described makes the invalid assumption that all nodes are equally effective at handling large blocks and do so effectively instantly.

Finally let me point out that in the scenario the majority of miners were signaling for a block size increase — and they got one. Yay! But the increase was only 17% averaged over 100 blocks (or 2.34MB), which is far lower than what the majority was signaling for (4MB and 8MB). And this is the worst case. Network analysis and theoretical work that I have done shows that miners will tend to produce empty blocks after large blocks. This will further reduce the average increase.
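The average-increase figure can be reconstructed as follows. This sketch assumes a 2MB baseline (the setting of the largest EB cohort in the scenario) and the attacker's three oversized blocks; with these assumptions the result is about 17.5%, close to the 17% quoted above:

```python
# Rough reconstruction of the "17% average increase" figure.
# Assumption: a 2 MB baseline block size for non-attack blocks.
baseline_mb = 2.0
window = 100                      # blocks over which the article averages
attack_blocks = [3.0, 6.0, 32.0]  # the attacker's three oversized blocks

total = baseline_mb * (window - len(attack_blocks)) + sum(attack_blocks)
avg = total / window
print(f"average block size: {avg:.2f} MB "
      f"({100 * (avg / baseline_mb - 1):.1f}% above baseline)")
```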

[Why does the average matter and not the burst? As described above, Xthin technology smooths the bursts out for “real” blocks, and “spam” blocks tend to be orphaned, especially if they actually strain underlying network capacity.]

A 17% average block size increase, in a network whose majority is signaling for a > 100% block size increase is not an attack on miners and full nodes.

The User Experience

The scenario specifically suggests that the network “grinds to a halt”.

The author is correct in that the effect of this large block is that the confirmation time of transactions changes, but not how he expects. To simplify things a bit, let’s keep talking in MB and understand that 1MB of block capacity is about 2000–3000 transactions.

In scenario “A” (real transactions), if this “attack” had not happened we would have needed sixteen 2MB blocks to confirm all the transactions. So the average confirmation wait time is about an hour and 20 minutes, and the worst-case wait time is two hours and 40 minutes.

When the “attack” happened we needed a single 32 MB block, which was 1.3MB on the wire (propagated in seconds), and prevalidated. So the average confirmation wait time is 10 minutes.
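The wait-time comparison above is simple arithmetic, assuming 10-minute blocks and transactions drained in queue order:

```python
# Confirmation-wait arithmetic for the two cases discussed above.
# Assumptions: 10-minute blocks, transactions confirmed in queue order.
block_minutes = 10

# Without the attack: 32 MB of transactions drained in 2 MB blocks.
blocks_needed = 32 // 2                # 16 blocks
worst = blocks_needed * block_minutes  # 160 min = 2 h 40 min
average = worst / 2                    # 80 min = 1 h 20 min

# With the attack: everything confirms in the single 32 MB block.
print(f"no attack: avg {average:.0f} min, worst {worst} min; "
      f"with attack: {block_minutes} min")
```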

In scenario “B” (spam), the block is either accepted in a reasonable time or orphaned by a real block. In either case, transaction confirmation is not affected, except in the sense that 3% of the network (the attacker) is producing spam rather than confirming real transactions.

The user experience of this “attack” is confirmation in 10 minutes rather than up to 2.6 hours.

The Economic Factor

This scenario relies on a thin majority of the hashing power (55% vs. 45%) being triggered. But the minority actually has a small advantage in the “random walk”, because the majority will switch to the smaller-block chain if it ever leads, while the reverse will not happen. There is a chance that the attacker is the only miner penalized.
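The random-walk claim can be checked with a minimal Monte Carlo sketch. The parameters (55% vs. 45% hash split, AD = 4, the attacker's block giving the big-block chain a one-block head start) come from the scenario; the switching rules are my simplification of the asymmetry described above, namely that small-block miners capitulate only when the big-block chain leads by AD, while the majority abandons the big-block chain the moment the small-block chain takes the lead:

```python
import random

def race(p_big=0.55, ad=4, start_lead=1, rng=random):
    """One chain race. 'lead' is the big-block chain's lead in blocks.
    Big-block chain wins when its lead reaches AD (the minority's gates
    are hit); it loses the moment the small-block chain takes the lead
    (lead == -1), since the majority then switches to the valid shorter
    chain. Returns True if the big block ends up in the chain."""
    lead = start_lead
    while -1 < lead < ad:
        lead += 1 if rng.random() < p_big else -1
    return lead >= ad

rng = random.Random(1)
trials = 100_000
wins = sum(race(rng=rng) for _ in range(trials))
# Roughly a coin flip, despite the 55/45 hash-power split.
print(f"big-block chain wins {wins / trials:.1%} of races")
```

This is a gambler's-ruin walk with asymmetric absorbing barriers, which is why the outcome is much closer to even odds than the raw hash-power split suggests.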

If the “attack” proceeds but ultimately fails to put the 32MB block in the chain, the attacker (and “large block voters”, to a lesser and possibly zero extent) are penalized. If the “attack” succeeds, those voting for small blocks are penalized.

The network-wide effect is a strong economic pressure to converge on a single “EB” value (which is of course the point), and to use a higher level agreement — a human agreement — to choose to change it. The fear of this attack is even a good deterrent. If you are a miner and signaling larger blocks via EB, you are making the strong statement that you want larger blocks so much that you are willing to accept orphan blocks and some financial loss.

This is why EB values are currently at 1MB, and why ViaBTC has suggested that miners coordinate. However, the AD limit DOES limit your losses in the case where a majority of miners choose to mine larger blocks and you don’t. So Emergent Consensus (that is, the EB/AD values) is best seen as a mechanism of last resort, to be used when human consensus breaks down (and it has between the Core client and major portions of the Bitcoin ecosystem), and must therefore be built directly on the only known working trustless distributed consensus mechanism (Nakamoto Consensus).

I know that many of you wish that everyone could “just run a single client” with a single set of consensus rules. But this argument is eerily reminiscent of what the legacy financial industry said to Bitcoiners in 2012–13. And we responded “the cat is out of the bag, that ship has sailed. Cryptocurrencies are here whether you declare Bitcoin illegal or not. Now you have to deal with this new reality.” And the reality is that Emergent Consensus IS how miners can limit losses in a trustless multi-client network.

Finally, I need to briefly describe the economic effect of “losing” a block. A quick analysis assumes that the miner loses all the BTC from that block, and it sure feels that way at the time. However, that lost block is accounted for by the Bitcoin network as a lower measured hash rate. This results in a downward difficulty adjustment in two weeks, increasing the block discovery rate for the following two weeks. I’d love to have time to work out the math in detail, but for now I’ll theorize that the ultimate effect is that the revenue from the lost block is spread across all the miners (including the miner who lost the block) proportional to their hash power over the next two-week period. So this is effectively a hash-power-weighted penalty on the miners who end up with orphaned blocks (the “losing” side of this scenario — as I described above, the losers could be either the large or small block groups depending on random chance and network capacity), and a bonus for those who correctly estimated the network capacity.
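The difficulty-feedback argument can be sketched under simplifying assumptions: constant network hash rate, one orphaned block during a 2016-block retarget period, and the standard retarget rule (new difficulty = old difficulty × target time / actual time):

```python
# Sketch of the difficulty-feedback argument. Assumptions: constant
# hash rate, one orphaned block per retarget period, standard retarget.
PERIOD = 2016          # blocks per difficulty period
TARGET = PERIOD * 600  # target seconds per period

# The network produced 2017 block solutions but only 2016 made the
# chain, so the accepted 2016 blocks took ~2017 block-finding intervals.
actual = (PERIOD + 1) * 600
new_diff_ratio = TARGET / actual         # difficulty drops slightly

# At unchanged hash rate, expected blocks in the next period:
expected_next = PERIOD / new_diff_ratio  # 2017: one extra block
print(f"extra blocks recovered next period: {expected_next - PERIOD:.2f}")
```

Under these assumptions the one lost block reappears as one extra block in the next period, distributed across all miners in proportion to hash power, which is the redistribution described above.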

Economic factors drive miners to converge on the same EB and penalize miners who advocate for differences between the EB and actual network capacity.

The Human Factor

Looking at the entire human-computer system as a single machine is a discipline that is perhaps derided by classical computer scientists. However, such an integrated analysis is absolutely crucial for some of the most important subsystems in the world: flying commercial airplanes (the interaction between the human and the autopilot) and ship weapons systems are two quick examples. If you find human-computer systems unpalatable, perhaps you need to rethink your stance.

Humans monitoring the network will likely defuse this “attack” before it can happen, resulting in the attacker losing money.

Conclusion

What this “attack” actually does is probe the network’s capacity to handle larger blocks, and redistribute some miner income based on the result of that experiment.

The Bitcoin Unlimited group had a BUIP (Bitcoin Unlimited Improvement Proposal) to address this problem. The solution is to make the “gate” open only as wide as the largest previously seen block. But this proposal did not reach quorum, probably because it solves a non-existent problem, as I have shown in this document. Nevertheless, if a compelling argument is ever made for this or a similar attack, a simple solution exists.