Why do bigger blocks make it more expensive to run a full node?



Does it cost more storage? For transaction verification, to what extent do bigger blocks make it more computationally intensive? In terms of computational complexity, is the verification process an O(n) computation or something else?


Posted 2016-02-16T18:38:58.283

Reputation: 151



Short answer: For big players the cost is mostly negligible.

Longer answer (independent of the current block size debate):

Hard Drive Cost: A bigger block takes up more disk space. Right now the block size is limited to 1 MB. If the limit is raised to 2 MB and blocks are always filled to the maximum, you will need roughly twice the disk space to store the same number of blocks as you did at 1 MB.
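To put rough numbers on the storage point, here is a back-of-the-envelope sketch assuming every block is full and the average ~10-minute block interval holds (illustrative arithmetic, not a measurement of the real chain):

```python
# One block every ~10 minutes, on average.
BLOCKS_PER_YEAR = 365 * 24 * 6

def storage_per_year_gb(block_size_mb: float) -> float:
    """Disk space consumed per year at a given block size, in GB,
    assuming every block is filled to the maximum."""
    return block_size_mb * BLOCKS_PER_YEAR / 1024

print(round(storage_per_year_gb(1.0)))  # ~51 GB/year at 1 MB blocks
print(round(storage_per_year_gb(2.0)))  # ~103 GB/year at 2 MB blocks
```

Doubling the block size doubles the worst-case storage growth, which is linear but still modest next to consumer hard drive capacities.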

Network Bandwidth Cost: If the block size is increased, a full node has to relay bigger blocks. Sending and receiving more data raises bandwidth costs as well.
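A similarly rough sketch for relay bandwidth, under the simplifying assumption that a node forwards each full block to a fixed number of peers (modern nodes use compact-block relay, which sends far less than the full block to most peers, so treat this as an upper bound):

```python
BLOCKS_PER_DAY = 24 * 6  # one block every ~10 minutes

def relay_gb_per_day(block_size_mb: float, peers: int = 8) -> float:
    """Upload volume per day if every new block is forwarded in full
    to `peers` connections. Illustrative upper bound only."""
    return block_size_mb * BLOCKS_PER_DAY * peers / 1024

print(relay_gb_per_day(1.0))  # ~1.1 GB/day upload at 1 MB blocks
print(relay_gb_per_day(2.0))  # ~2.25 GB/day at 2 MB blocks
```

As with storage, the cost scales linearly with block size; what differs is that bandwidth is an ongoing operating cost rather than a one-time purchase.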

Real Time Computational Cost: A full node checks proof of work by hashing the 80-byte block header and verifying that the hash is below the target, so that step is independent of block size. What does grow with bigger blocks is the rest of validation: recomputing the transaction merkle root (which hashes every transaction) and, above all, verifying every transaction's signatures.
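The distinction can be made concrete with a minimal sketch using Python's hashlib: the proof-of-work hash covers only a fixed-size header (a dummy 80-byte value here), while the merkle root computation touches every transaction ID and so grows with block size. The `dsha256` and `merkle_root` helpers are illustrative simplifications, not Bitcoin Core's implementation:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Merkle root over a list of 32-byte txids; work grows with
    the number of transactions in the block."""
    level = txids
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd levels
            level = level + [level[-1]]
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Proof-of-work check: hashes only the fixed 80-byte header,
# regardless of how many transactions the block carries.
header = bytes(80)                    # dummy header for illustration
pow_hash = dsha256(header)

# Merkle root: hashes scale with transaction count.
txids = [dsha256(bytes([i])) for i in range(5)]
root = merkle_root(txids)
```

So a 2 MB block costs nothing extra at the proof-of-work step, but roughly doubles the hashing and signature-checking work done during full validation.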

Big O Complexity: SHA-256 runs in time linear in its input, so hashing all transactions for the merkle root is O(n) in block size. In practice signature validation dominates the CPU cost and also scales with the amount of transaction data; see Pieter Wuille's comment below for the finer-grained breakdown, including the quadratic legacy signature-hash cost.


Posted 2016-02-16T18:38:58.283

Reputation: 56

Only the block header is hashed for the nonce puzzle, so increasing the block size can only increase the transaction merkle root computation cost. – HappyFace – 2019-12-18T15:34:07.047

You're forgetting: signature validation (which is currently still the majority of the CPU cost) scales linearly with the number of hashes. Signature hash computation scales O(num_transactions * avg_transaction_size^2). Database lookups/updates scale O(inputs) and O(outputs), and each will get slower over time as the UTXO set grows. – Pieter Wuille – 2016-04-11T06:56:09.163