Tail Removal Block Validation

(and why it is necessary)

Andrew Stone
Oct 30, 2017

Update Nov 1, 2019: Two years after this post, the prediction that miners would take advantage of the Bitcoin Cash DAA (named cw-144 in this post) has very likely proven correct; recent oscillations in difficulty and block discovery are a common topic of discussion. One post finds an 8% advantage in revenue, in line with this research.

Additionally, the first algorithm implementing the “tail removal” concept proposed in this post has been released here.

I was asked to take a closer look at the proposed difficulty algorithms for Bitcoin Cash due to repeated difficulty spikes caused by a resonant oscillation in the algorithm currently proposed for inclusion in the ABC client (cw-144). Tom Harding discovered this oscillation, and you can see the spikes in Tom’s original graph in yellow below. To really see the problem I also plotted the number of blocks that took one hour or longer to mine under 4 candidate difficulty adjustment algorithms (cw-144 is green; click on any chart to see it full size):

cw-144 difficulty spikes and resulting extended block discovery times (fxramp scenario)

The graphs above and all subsequent graphs in this document were generated with Neil Booth’s excellent mining simulator available here, and with my modifications and SageMath integration available here. For the rest of this document the following colors correspond to the following algorithms:

  • cw-144: green
  • k-1: blue
  • wt-144: purple
  • piec: red

When multiple coins are mined with the same hardware, it makes sense for miners to mine the most profitable coin. I decided to simulate this by creating a “square wave” simulation where mining moves from a baseline to 10 times that amount (multipliers from 2 to 10 times the baseline achieve similar results). I believe that an abrupt increase of 10 times is realistic because the following graph, which shows BCC hash rates over the past several months, contains rapid hash power decreases of up to 30 times (Oct 5, among others) and increases of over 20 times (Oct 12, among others):

credit fork.lol
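As a rough sketch of the scenario in Python (the names, baseline, and phase length below are illustrative choices of mine, not parameters taken from the simulator):

```python
# Illustrative square-wave hash power profile: mining alternates between
# a baseline and 10x that baseline. All names and constants here are
# assumptions for illustration only.

BASELINE = 300e15          # 300 PH/s, the "benevolent" hash power assumed later
MULTIPLIER = 10            # the high phase is 10x the baseline
PHASE_SECONDS = 2 * 86400  # time spent at each level (arbitrary choice)

def hashrate_at(t: float) -> float:
    """Return the simulated network hash rate at time t (seconds)."""
    in_high_phase = int(t // PHASE_SECONDS) % 2 == 1
    return BASELINE * (MULTIPLIER if in_high_phase else 1)
```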

Unfortunately, the best performing algorithm in the “ramp” (K-1, blue) has worrisome difficulty spikes in the “square wave” simulation, and a slower reaction time when hash power drops.

cw-144, k-1, wt-144 and piec difficulty changes by time

But regardless, these changes in hash power resulted in long block discovery times for all algorithms.

We can plot block discovery times by time to see the problem:

It should surprise no one that when hash power drops rapidly, block discovery times rise. But hours is a very long wait.

Manipulating Difficulty for Profit

Miners have a monetary incentive to destabilize blockchain difficulty to increase total profit. Let us imagine that 2 blockchains are being stably mined at equal profitability. In this situation miners can switch chains without changing their profits. Can a miner increase profits by switching chains to take advantage of random difficulty changes or even to induce difficulty swings?

I simulated the simplest scenario where a miner switches his hash power from one chain to the other every N blocks. I made no attempt to amplify natural mining variation or to attack specific features in the difficulty algorithms, two strategies which may yield even greater rewards.
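In code, this switching rule is just a square wave over block height; this sketch uses my own names, and N = 100 is illustrative (the simulation swept many wavelengths):

```python
# The simplest "greedy" schedule: point hash power at the chain for N
# blocks, then mine elsewhere for N blocks.

N = 100  # half the switching wavelength, in blocks

def greedy_power_at(height: int, full_power: float) -> float:
    """Hash power the greedy miner contributes at a given block height."""
    on_this_chain = (height // N) % 2 == 0
    return full_power if on_this_chain else 0.0
```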

For cw-144, a wide band of resonance develops around a 200 block wavelength.

blue blocks are mined at low hash rates, purple at high rates

If the greedy miner has half the hash power of the stable miners, the simulation projects an average “greedy period” block time of 440 seconds and a “normal” time of 770 seconds. This corresponds to a 16% increase in blocks per petahash, and therefore a proportional increase in revenue, compared to the steady miners. Note that the greedy miner’s hash power variability could have a similar effect on the other blockchains he mines, depending on each blockchain’s difficulty algorithm and hashing power.

It is difficult to imagine that stable miners will continue to mine benevolently with such a large income disparity — they will either switch to greedy mode or have a reduced hashing impact as the greedy miners plow their additional profits back into the purchase of more mining hardware. The economics of this situation drives miners to become greedy.

So let us look at the case where greedy miners have 10 times the hash power of the stable miners; note that this is still 3 times less than the greedy miners we saw operating on the Bitcoin Cash blockchain on Oct 5.

In that scenario, the simulation indicates that it will be possible for the greedy miners to achieve a 25% increase in revenue per petahash compared to steady mining. But the worst news is for users of the blockchain. The greedy miners manage to mine blocks in 80 seconds on average, and force the stable miners to mine 1150 second average blocks. In other words, the greedy miners are able to capture half the newly minted coins mining the chain 6.7% of the time, resulting in erratic block confirmation times.

cw-144 10x hash 200 block wavelength

K-1 performs better, offering only a 5% increase using a 40 block wavelength. However, based on K-1’s difficulty graphs (it seems to overshoot on upward ramps), I think that a strategy tailored to K-1’s behavior could do much better.

wt-144 and piec beat this periodic miner. The trick they appear to use is to allow difficulty to fall faster than it is allowed to rise. This allows the steady miner to grab a lot of low difficulty blocks before the periodic miner starts back up. It would be interesting to see how they fare against a miner that is periodic in time (rather than in blocks), though.

piec 10x wavelength 400
wt-144 10x wavelength 200

Instead, to take advantage of wt-144, I wrote an algorithm that assigns the miner’s hash power to the blockchain if the difficulty target is greater than the average target over the last N blocks. So this miner algorithm effectively takes advantage of wt-144’s over-reaction to random variation in block discovery times. With N=100, I was able to achieve a 6.1% advantage over the steady miner. “Piec” was similar at 5%.

wt-144 running a low difficulty mining strategy with half the steady hashpower
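A minimal sketch of this low-difficulty strategy (the function name, the warm-up behavior before N blocks of history exist, and N = 100 are illustrative assumptions):

```python
from collections import deque

# Mine only when the current target is easier (numerically larger) than
# the average target over the last N blocks, i.e. when difficulty is
# below its recent average.

N = 100
recent_targets = deque(maxlen=N)  # targets of the last N blocks seen

def should_mine(current_target: int) -> bool:
    """Decide whether to point hash power at the chain for this block."""
    decision = (len(recent_targets) < N or
                current_target > sum(recent_targets) / len(recent_targets))
    recent_targets.append(current_target)
    return decision
```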

Given that a day of simulation produced exploits yielding a 5 to 25% advantage over the steady miner, we should assume that problems with these algorithms will be exploited, since doing so may be very profitable. These exploits all pay miners who move hash power at the expense of the steady “benevolent” miner. This is unfortunate, as it rewards miners who make block confirmation (and therefore transactions) more erratic at the expense of those who are making it less so.

Blockchain Difficulty Adjustment is a Control Theory Problem

The reality is that choosing a blockchain difficulty algorithm, as currently defined, is a provably impossible problem to solve unless some miners mine regardless of profitability. But these “benevolent” miners (the simulations assume 300PH/s) are giving profits away to greedy miners. And even with benevolent miners, the problem remains incredibly hard because these miners are only able to provide rare (1 data point per block), noisy (block discovery has large variation) feedback.

People versed in control theory will recognize this as a control theory problem. Control theory problems have several elements. There is a desired value (set point), a knob (something that can be controlled), and feedback. To understand this let’s look at a classic control theory example — automotive cruise control — and provide an analogy to our problem. In essence, the controlling program must choose how much gas to give the car (set the blockchain difficulty) to maintain a desired speed (10 minute block interval). It receives the car’s speed as feedback (block interval times), and it must react to external changes like hills and valleys (mining hash rate changes).

All problems that fit this design pattern are control theory problems. The standard and best tool for such a job is the PID (proportional, integral, derivative) controller.
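For readers who have not met one, here is a textbook PID loop (a generic sketch, not code from any client). Mapped onto our problem, the setpoint is the 600-second target block interval, each observed block time is a measurement, and the output would steer the difficulty; the gains kp, ki, and kd are tuning parameters of my choosing.

```python
class PID:
    """Textbook PID controller. In the difficulty analogy the setpoint is
    600 seconds, measurements are observed block intervals, and the
    output nudges the difficulty up or down."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement: float, dt: float) -> float:
        """Return the control output for one feedback sample."""
        error = self.setpoint - measurement
        self.integral += error * dt  # "I": accumulated error
        if self.prev_error is None:
            derivative = 0.0         # no "D" on the first sample
        else:
            derivative = (error - self.prev_error) / dt  # "D" amplifies noise
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```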

Unfortunately, the feedback data we receive is extremely noisy because miners can get very lucky or very unlucky. Since the “derivative” term of the PID controller amplifies noise, it is unusable without a filter. However, a filter operates better on larger datasets. As shown above, with an abrupt drop in hash power, we may have just one or two data points during hours of operation. One cannot meaningfully run a filter over 2 data points. So no “D” then.

However, our “noisy” signal is quite interesting. It’s not quite all noise. If a block is mined at time N, we know the exact likelihood that that block was “lucky” or “unlucky”. Intuitively, if a block is found after 300 seconds, we know that this short time is much more likely to be noise than if a block comes in after 3000 seconds. We can use this knowledge to weight our input data. This is the graph of block mining likelihood from 1 to 3000 seconds:
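To make that intuition concrete: assuming block arrivals form a Poisson process, intervals are exponentially distributed with a 600-second mean at the correct difficulty, and the density of each observed interval is the kind of weight the text describes (the exact weighting used by piec is not reproduced here):

```python
import math

TARGET = 600.0  # target mean block interval, in seconds

def block_time_density(t: float) -> float:
    """Exponential pdf of observing a block interval of t seconds when
    difficulty matches hash power: f(t) = (1/600) * exp(-t/600)."""
    return math.exp(-t / TARGET) / TARGET

# A 300 s interval is ~90x more probable than a 3000 s interval, so the
# short interval carries far less information about a hash rate change:
# block_time_density(300) ~ 1.0e-3, block_time_density(3000) ~ 1.1e-5
```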

I used this knowledge to “weight” or “correct” the feedback data and came up with the “piec” (proportional, integral error correcting) algorithm shown in red in the above charts. Unfortunately even with this additional sophistication, it suffers from the same issues as the other algorithms.

PID algorithms have a lot of theory behind them showing that they approach optimal behavior. But given the noisy and rarely delivered feedback, there are severe limits to what any controller can accomplish.

When confronted with a “very hard to impossible” problem, the best choice is often to redefine the problem rather than search for a marginal solution.

A new block validation algorithm: Tail Removal

In English, I’m proposing that miners mine for 20 minutes at the network difficulty, and then we ramp the difficulty down from its current value to 1 across the next 20 minutes. Nodes choose the active chain as the one with the most work including any difficulty reductions.

More formally, I propose that a block be valid if:

  1. The block’s timestamp is less than or equal to the current time, and greater than the prior block’s. Currently, a block is valid if its timestamp is up to two hours into the future; this proposal tightens that window to the current time. This is not strictly necessary, but it will keep the time reported in the blockchain honest.
  2. The POW exceeds the block difficulty, and the block difficulty matches the current network difficulty (today’s algorithm) OR
  3. The current block’s timestamp minus the prior block’s timestamp is greater than 1200 seconds (20 minutes), and the POW exceeds the “effective difficulty”.

The effective difficulty is calculated by:

deltaT = block's timestamp - prior block's timestamp

decline = difficulty / 1200

effective difficulty = max(1, difficulty - (deltaT - 1200) * decline)

“Effective Work” is of course directly calculable from the effective difficulty.

4. At a particular time T, the active chain is the chain with the most effective work for all blocks whose timestamp is ≤ T.
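A minimal Python sketch of the effective-difficulty rule above (a real implementation would use integer arithmetic on compact nBits targets, but the shape of the ramp is the same):

```python
def effective_difficulty(difficulty: float, timestamp: int,
                         prior_timestamp: int) -> float:
    """Effective difficulty per the rules above: level for 20 minutes,
    then a linear ramp from the network difficulty down to 1 at 40 min."""
    delta_t = timestamp - prior_timestamp
    if delta_t <= 1200:                # rule 2: full network difficulty
        return difficulty
    decline = difficulty / 1200.0      # per-second slope of the ramp
    return max(1.0, difficulty - (delta_t - 1200) * decline)  # rule 3
```

At deltaT = 2400 seconds the ramp reaches the floor of 1, which is where the 40-minute worst case in the analysis below comes from.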

Analysis

This new algorithm cuts off the “long tail” of block discovery, almost guaranteeing that a block will be discovered within 40 minutes (given at least one $20 home ASIC miner is still running).

Irrespective of the difficulty adjustment problem, I believe that this feature is very valuable for a use-oriented blockchain like Bitcoin Cash because it will offer a reliable worst-case confirmation time. At the same time I believe that this feature is necessary to solve the problem of reliable difficulty adjustments given wide variations in hash power.

Note that this idea is a refinement of what already exists on “testnet”. On testnet, network difficulty drops to 1 after 20 minutes. One problem with a sudden drop is that many orphans are created as every miner with ASICs solves a block at the 20 minute mark; using a linear ramp mitigates this issue. Additionally, the abrupt “cliff” in testnet encourages miners to lie about the block time, and the 2 hour validity window means that the “honest majority” accepts the lie. Since the validity window is greater than the 20 minute difficulty drop, it is possible for miners to mine 2 hours’ worth of 20 minute blocks quickly by lying about the time in every block. By removing the 2 hour validity window, this testnet exploit is eliminated.

Eliminating Cheating

So by ramping the difficulty down linearly and eliminating the “validity time window”, the incentive to cheat is reduced, and the ability to do so given a majority of honest miners is removed. Yes, a miner may choose to optimize his block production by “future-mining”, but then he must hold off publishing an early discovery until the block’s timestamp has passed.

Any earlier higher effective-difficulty block will beat his future-mined block, and a higher-difficulty chain will beat his chain. So there is a strong incentive to produce blocks near the current time so they can be published right away, allowing honest miners to switch to mining a child of the block rather than a sibling. A “slightly-dishonest” miner might even choose to mine on top of a future block (if it is relayed to him). However, this is a risky strategy for the same reasons given in the “Majority Mining” section below. In other words, “future-mining” is not a bug; it is a valid miner strategy of limited use.

And competing miners have reason to NOT accept future blocks — doing so awards a different miner prematurely and ultimately will result in a higher difficulty. Note that if a minority of miners forked a future-timestamped blockchain, it would have less work than the competing honest chain even though it may be longer.

Majority Mining

In theory a majority of miners could conspire to cheat on the timestamp to increase inflation. However, this behavior would be obvious to all participants, causing a crisis of confidence in the currency. And economic nodes will not accept such blocks, and therefore the new coins, until each block’s timestamp has passed, making it impossible to “cash out” before the crisis.

Also note that a majority miner would have to mine the future very carefully to ensure that his chain has more work than a future competing fork at all future block times. If this is not the case, the network will follow the competing most-work fork at that point.

So the future-mining miner is effectively betting that his hash power right now is greater than the hash power that will be deployed at any point in his future chain. This is a very dangerous bet.

Ultimately we cannot prove that a mining majority won’t attempt to attack the chain, whether under the “tail removal” algorithm or any other (some people see emergent consensus, Bitcoin Cash, and segwit2x as attacks on the blockchain). But we can apply the same reasoning Satoshi proposed in the Bitcoin white paper when discussing double spending:

“He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.”

1 Second Past Mining

This mining approach was suggested by Tom Harding in his review of this document. Basically, a miner always sets the block timestamp to be 1 second greater than the prior block’s (regardless of the actual time), unless enough real time has passed to mine a difficulty-reduced block. Since wall-clock time advances about 10 minutes per block while blockchain time advances only 1 second, roughly every third block on average can be a difficulty-reduced block.

If this strategy is employed, the difficulty will adjust to account for it, yielding no ultimate net benefit to miners. However, the timestamps of the 1 second blocks will not be accurate, which is a minor negative.

I am still thinking about solutions, but clearly this strategy can be combated by offering a discount starting at time 0, rather than having the 0 to 20 minute interval be a level line. Perhaps a linear reduction to 3/4 difficulty over 20 minutes; a more sophisticated approach would use a continuous function like N - log(x) across the entire interval.
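Two illustrative discount shapes along those lines (both are sketches of the idea, not concrete proposals; the scaling of the logarithmic variant is my assumption):

```python
import math

def linear_early_discount(difficulty: float, delta_t: float) -> float:
    """Linear decline to 3/4 of the network difficulty across the first
    1200 seconds, removing the level line that 1-second mining exploits."""
    frac = min(delta_t, 1200.0) / 1200.0
    return difficulty * (1 - 0.25 * frac)

def log_discount(difficulty: float, delta_t: float,
                 span: float = 2400.0) -> float:
    """A continuous, 'N - log(x)'-style decline over the whole interval,
    reaching the floor of 1 at `span` seconds (scaling assumed)."""
    frac = math.log(1 + delta_t) / math.log(1 + span)
    return max(1.0, difficulty * (1 - frac))
```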

More analysis is also needed in the case where 1 second mining is combined with block withholding. If a miner finds the second 1 second block quickly, it may make sense to withhold that block while searching for the reduced difficulty block. However, note that all miners have the same ability to do so, and in the worst case this only gives the miner 2 blocks in a row; it cannot be used to give one miner a continual advantage.

Effect on Difficulty Algorithms and Block Times

The “Tail Removal” block validity algorithm can easily be applied to the simulation without modifying the existing proposed difficulty algorithms. In difficulty assignment, these algorithms all behave similarly to their behavior without tail removal:

cw-144, k-1, wt-144 and piec difficulty changes by time with tail removal

Why do these algorithms work even though the block distribution no longer follows the standard Poisson distribution? The answer is that the algorithms do not depend on a particular block distribution; they depend on the feedback given by block times. With all else held constant, the “tail removal” algorithm shifts the average block time to be less than 10 minutes, and so the control algorithms react by raising the difficulty.

However, the block discovery performance is significantly better. Previously I included bar graphs of greater than 1 hour blocks for all the algorithms, and of longest block time. But I cannot show the 1 hour graph when using “Tail Removal” because there are no such blocks! And the longest block time is a beautiful 40 minutes for all algorithms.

Instead, here is a view of the block times of every block. Without tail removal, block discovery times range from 1 to 25000 seconds, and the dramatically long block times correspond exactly with the hash power drop in the graph above (at time 2.5e6 seconds):

Block time without tail removal

With tail removal, the longest block discovery time is 2400 seconds (note the different Y scale in this graph):

Block discovery time with tail removal

However, most of the blocks are unaffected since they are discovered within 20 minutes. The major visible effect is exactly when hash power falls dramatically at time 2.5e6 seconds — exactly when we want it.

Code Changes

Code changes are minimal, which is one of the great advantages of this approach over more interesting approaches. Since the idea of reducing difficulty already exists in the code as “testnet”, the code is well exercised with the concept of a changing difficulty, and unexpected dependencies are unlikely to exist.

  • Changes must be made to the block validation routine to follow the “tail removal” algorithm
  • Chain difficulty should be calculated using the effective difficulty (see the sketch after this list)
  • Block headers with a future timestamp should not be downloaded, or should be downloaded and held until the timestamp is reached.
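A sketch of the chain-work change (second bullet), reusing the effective_difficulty sketch from the validity rules above; work is treated as proportional to difficulty here, and the header field names are illustrative:

```python
def chain_effective_work(headers) -> float:
    """Sum the effective work of a chain of headers, where each header
    carries a `difficulty` and a `timestamp` field (illustrative names)."""
    total = 0.0
    for prev, cur in zip(headers, headers[1:]):
        total += effective_difficulty(cur.difficulty, cur.timestamp,
                                      prev.timestamp)
    return total

# Per rule 4: at time T, consider only headers with timestamp <= T and
# follow the candidate chain that maximizes chain_effective_work().
```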

Finale

Even with “benevolent mining”, control algorithms will not be able to choose block times without having extended gaps in block discovery due to the fact that the hash power feedback data is noisy, rare, and becomes even more rare when most needed.

And it is dangerous to rely on “benevolent mining”, since these miners may choose at any time a more lucrative mining strategy. I believe that if Bitcoin Cash chooses to rely on “benevolent mining”, this may have a chilling effect on economic activity moving to the chain. A company is less likely to plan a strategy around a blockchain that may suddenly lose its transaction commitment capacity, and people are less likely to use it for savings.

So it is important to create a block discovery / transaction commitment process that is at least as consistent as Bitcoin’s. But since Bitcoin Cash is meant to be a daily-use peer-2-peer currency, having more consistent transaction confirmation than Bitcoin would enhance its value. This is why in Bitcoin Cash we did not enable Replace-By-Fee and have committed to scale the blockchain to meet demand, or at least to physical network limits.

Deploying “Tail Removal” with a 40 minute worst-case block discovery time would enhance the value of Bitcoin Cash and solve the mining control problem.

Changes to make this happen are not large, but even if we cannot make a mid-November date (the date tentatively scheduled for a hard fork), it is better to solve the problem properly than to hard fork to a control algorithm with similar, albeit lower magnitude, problems to those of the current EDA.
