Quasi-Consensus and the Unconfirmed Transaction Chain Limit
Quasi-consensus is a recently coined term for a well-known problem: there exist network-wide configuration values that, if inconsistent across nodes, cause undesirable network behavior (but do not cause a fork).
For example, the minimum fee required to relay transactions is quasi-consensus because “Mallory” could more easily double spend by first creating a transaction below some nodes’ minimum fee, and then a double spend above that minimum fee that will propagate to all the nodes that rejected the first transaction.
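To make the attack concrete, here is a minimal model (hypothetical, not real node code) of two nodes with inconsistent minimum relay fees. Each node keeps the first spend of an outpoint that it accepts, so the network ends up disagreeing about which spend is the real one:

```python
# Hypothetical model of how inconsistent min-relay fees enable a double spend.

class Node:
    def __init__(self, min_fee):
        self.min_fee = min_fee
        self.mempool = {}          # outpoint -> txid of the first accepted spend

    def accept(self, txid, outpoint, fee):
        # Reject below-fee txs and any conflict with a tx already in the mempool.
        if fee < self.min_fee or outpoint in self.mempool:
            return False
        self.mempool[outpoint] = txid
        return True

nodes = [Node(min_fee=1), Node(min_fee=5)]   # inconsistent relay policies

# Mallory's first spend pays a fee only the lenient node accepts...
first = [n.accept("tx1", "utxo0", fee=2) for n in nodes]
# ...then a conflicting spend with a higher fee reaches the strict node,
# which never saw tx1 and so has no reason to reject tx2.
second = [n.accept("tx2", "utxo0", fee=10) for n in nodes]

print(first, second)   # -> [True, False] [False, True]
```

The two nodes now hold conflicting spends, and which one confirms depends on which node a miner happens to hear first.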
I have written a thorough description of that problem here for readers who are interested in more detail.
Standard transaction rules are also quasi-consensus for the same reason. One full node cannot deploy a new output (constraint) script without getting all other full nodes to accept the new script format into their sets of “standard” transactions. P2SH transactions partially solve this problem, but P2SH has severe limits, notably on script length.
Quasi-consensus rules are an impediment to permissionless innovation. And since it is generally accepted that permissionless innovation drives technology forward more quickly and successfully than permissioned innovation, it would be better to have as few quasi-consensus rules as possible.
One quasi-consensus rule that is currently harming some applications is the maximum length and size of unconfirmed transaction chains. Unconfirmed transaction chains happen when users spend received funds without waiting for the receiving transaction to be committed to a block. Some applications simply want to spend a prior output without confirmation, have the recipient respend that spend, and so on over many iterations. Although this seems esoteric, note that the money a wallet sends back to itself when its inputs exceed what it needs to pay (its change) is unconfirmed. So making a series of quick transactions from the same wallet may create unconfirmed chains.
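The change case is worth seeing in miniature. The sketch below (a simplification with made-up numbers, not wallet code) shows how repeated quick payments from one confirmed coin chain off each other, because each payment's change output is itself unconfirmed:

```python
# Hypothetical sketch: each spend's change becomes the unconfirmed input
# of the next spend, so N quick payments form an N-deep unconfirmed chain.

def chain_length(utxo_value, payments):
    """Spend repeatedly from one confirmed UTXO, ignoring fees for
    simplicity. Returns the depth of the resulting unconfirmed chain."""
    depth = 0
    change = utxo_value
    for amount in payments:
        if amount > change:
            break               # wallet is out of funds
        change -= amount        # a fresh change output, still unconfirmed
        depth += 1              # each tx is a child of the previous one
    return depth

# Ten quick payments from a single coin produce a 10-deep chain, which a
# node enforcing, say, a 5-ancestor limit would refuse to relay in full.
print(chain_length(100_000, [5_000] * 10))   # -> 10
```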
Bitcoin Unlimited has recently committed a change that removes unconfirmed transaction size and length limits from quasi-consensus for practical purposes. Our nodes now communicate these parameters to connected peers so that peers know each other’s mempool rejection policies. Peers that don’t support this communication message are assumed to use the current network-wide “quasi-consensus” value.
When a transaction held in a node’s mempool becomes acceptable to a peer, it is now sent to that peer. Previously, a transaction was forwarded when it first entered the mempool, but never again. This made double spending extremely easy: the original spend would never propagate from the more permissive nodes to the rest of the network. With this change enabled across the BU network, the transaction will be propagated throughout the BU nodes. When a block comes in that confirms enough parent transactions to make the transaction valid in other mempools, a double spender is essentially racing the entire BU network to push his double spend into the miner nodes that now accept the transaction.

It is certainly possible that a well-funded, well-prepared attacker could deploy enough infrastructure to win this race sometimes. But similar attacks against 0-conf are available to well-funded, well-prepared attackers today, since double spends are not propagated by all nodes. And a business is concerned with probabilities, not absolutes: will the profits made from an application that uses 0-conf transactions outweigh the losses to successful cheaters? (This business tradeoff applies to all 0-conf applications, not just long 0-conf chains.) I believe that this technology may enable businesses to profitably deploy applications that benefit from deep unconfirmed chains on the BCH mainnet today.
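The forwarding change described above can be sketched as follows. This is an illustrative model, not BU's actual implementation: a peer advertises an ancestor-depth limit, a too-deep transaction is held back rather than forgotten, and it is re-offered once a block shrinks its unconfirmed ancestry:

```python
# Sketch (hypothetical) of re-forwarding: retry relay when a peer's
# mempool policy would newly accept a previously rejected transaction.

class Peer:
    def __init__(self, max_ancestors):
        self.max_ancestors = max_ancestors   # policy advertised to us
        self.mempool = set()

    def offer(self, txid, unconfirmed_ancestors):
        # Peer rejects txs whose unconfirmed ancestor chain is too deep.
        if unconfirmed_ancestors > self.max_ancestors:
            return False
        self.mempool.add(txid)
        return True

peer = Peer(max_ancestors=2)
depth = {"tx_deep": 5}          # tx with 5 unconfirmed ancestors
pending = set(depth)            # held back for this peer, not dropped

# First relay attempt fails: the chain is too deep for this peer's policy.
assert not peer.offer("tx_deep", depth["tx_deep"])

# A block confirms four of the ancestors; the node re-offers the held tx.
depth["tx_deep"] -= 4
sent = [tx for tx in pending if peer.offer(tx, depth[tx])]
print(sent)   # -> ['tx_deep']
```

The key difference from the old behavior is the `pending` set: the transaction stays eligible for relay instead of being forwarded once and then never again.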
A business could deploy a few strategically placed BU nodes, within the same data centers that also host mining pool nodes. To enable this feature, an operator would configure the unconfirmed-chain limits (these config fields have existed for a long time) and turn on the new intelligent transaction forwarding:
limitancestorsize=<KB of serialized transactions: a transaction plus all its unconfirmed ancestors>
limitdescendantsize=<KB of serialized transactions: a transaction plus all its unconfirmed descendants>
limitancestorcount=<number of allowed ancestors>
limitdescendantcount=<number of allowed descendants>
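As an illustration only (these values are examples I chose for the sketch, not recommendations or BU defaults), a node configured to tolerate deep unconfirmed chains might set:

```
# Example values: allow chains up to 500 transactions deep in each
# direction, and up to 1000 KB of chained transaction data.
limitancestorcount=500
limitdescendantcount=500
limitancestorsize=1000
limitdescendantsize=1000
```

An operator would tune these against the node's memory budget, since every transaction in a permitted chain sits in the mempool until confirmed.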