What are the main technical hurdles to implementing UTXO commitments?

8

Committing to the UTXO set in the block header would enable more secure lightweight clients and would cap the number of blocks that need to be downloaded and validated during the Initial Blockchain Download, which is vitally important for Bitcoin's longevity and scalability. However, I have been told that UTXO commitments are a very technically challenging change to Bitcoin's design.

What are the main technical challenges in implementing UTXO commitments, and what are the proposed solutions for them?

Amin

Posted 2015-09-05T00:39:19.277

Reputation: 1 452

How is this different than checkpoints? CS isn't my forté – Wizard Of Ozzie – 2015-09-06T15:04:35.990

1 – A UTXO commitment is validated by all of the proof of work that was built on top of it. – Amin – 2015-09-08T22:29:12.070

Answers

4

The main bottleneck of committing to a UTXO merkle root is that creating and verifying it is I/O- and CPU-heavy. As of today the serialized UTXO set is around 1 GB, contains almost 34,000,000 entries, and keeps growing. A naive implementation would therefore have to hash at least that amount of data per block, plus the intermediate nodes needed to construct the merkle tree. That is O(n) work per block in the size of the UTXO set, which is generally considered poor performance. Caching optimizations are possible, but they would use more RAM.
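The naive per-block recomputation described above can be sketched as follows. This is a toy merkle root over serialized UTXO entries; the hash convention mimics Bitcoin's transaction merkle trees, but the entry serialization is purely illustrative, not Bitcoin's actual UTXO format:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Build a merkle root, duplicating the last node on odd-sized levels
    (the same convention Bitcoin uses for transaction merkle trees)."""
    if not leaves:
        return sha256d(b"")
    level = [sha256d(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A naive full-set commitment must hash every serialized entry per block;
# with tens of millions of entries this is the O(n) cost described above.
utxos = [f"txid{i}:0:50000".encode() for i in range(8)]  # illustrative entries
root = merkle_root(utxos)
```

Note that changing a single entry forces rehashing only one leaf-to-root path, but a naive implementation that rebuilds the whole tree from scratch every block pays the full O(n) cost each time.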

One possible alternative is to commit to the UTXO differences that each block introduces. This works by committing one merkle root over all the UTXOs spent in the current block and another root listing all the UTXOs it adds. For an SPV client this could serve as a fraud proof, making it possible to show that a specific output was spent at a specific height (although that is already possible using ordinary transactions).
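The per-block diff commitment could be sketched like so. The two-root structure and entry encoding are hypothetical illustrations of the idea, not any actual BIP:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy merkle root, duplicating the last node on odd-sized levels."""
    if not leaves:
        return sha256d(b"")
    level = [sha256d(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def block_utxo_diff_commitments(spent: list[bytes],
                                created: list[bytes]) -> tuple[bytes, bytes]:
    """Commit separately to the UTXOs a block spends and the ones it creates.
    The hashing cost is proportional to the block's own inputs and outputs,
    not to the full UTXO set."""
    return merkle_root(spent), merkle_root(created)

spent = [b"txid_a:1", b"txid_b:0"]    # outputs consumed by this block (illustrative)
created = [b"txid_c:0", b"txid_c:1"]  # outputs added by this block (illustrative)
spent_root, created_root = block_utxo_diff_commitments(spent, created)
```

A merkle branch from `spent_root` to a given outpoint would then prove to an SPV client that the output was spent in that block, without the client holding the full set.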

So to sum up, full UTXO merkle root commitments would be really useful, but they cannot be used today because of poor scaling.

John L. Jegutanis

Posted 2015-09-05T00:39:19.277

Reputation: 601

How long does it take to calculate a new UTXO merkle root? – Amin – 2016-02-02T17:31:06.140

Not sure but you need to hash about 2x34 million entries. You can try the gettxoutsetinfo rpc command to get a very rough idea (the hash_serialized is the hash of the data, not a UTXO merkle tree). – John L. Jegutanis – 2016-02-02T18:43:22.783