Every time a new block is added on top of the block chain, miners have to restart their work, because the next block has to contain a proper reference to the previous block.

Let's suppose that for each block there is some nonce such that the resulting hash is sufficiently small (smaller than the target). In general, there are 2^{256} possible hashes. Let the target be *t*. The target can also be understood as the number of acceptable hashes, so on each try there is a probability of *t* / 2^{256} of finding a proper hash, i.e. of finding a block.

The number of tries until a block is found follows a geometric distribution with parameter *p* = *t* / 2^{256}. The expected value of a variable following such a distribution is EX = 1/*p* = 2^{256}/*t*. So each mining pool has to spend 2^{256}/*t* tries on average to find a block.
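The expectation EX = 1/*p* can be checked with a small Monte Carlo sketch. The success probability below is a toy value chosen for speed, not a real Bitcoin target; real values of *t* / 2^{256} are astronomically smaller, but the distribution is the same.

```python
import random

def tries_until_success(p, rng):
    """Count independent Bernoulli(p) tries until the first success.

    This count follows a geometric distribution with mean 1/p, which
    models the number of hash tries a miner needs to find a block."""
    tries = 1
    while rng.random() >= p:
        tries += 1
    return tries

rng = random.Random(42)
p = 1 / 500      # toy per-try success probability (stand-in for t / 2**256)
n = 10_000       # number of simulated blocks
avg = sum(tries_until_success(p, rng) for _ in range(n)) / n
# The sample mean should land close to the theoretical 1/p = 500.
print(avg)
```

Averaged over many simulated blocks, the sample mean of the try counts approaches 1/*p*, matching the formula above.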

How can concurrent mining be efficient, given that each time some pool publishes a new block, all other pools have to restart their work and thus throw away the tries they made on a block that can no longer be used?

*Note: please be a bit detailed. I've already read short explanations like "every try has an equal chance of success", but I can't follow the argument from such short hints.*
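The short hint rests on the memorylessness of the geometric distribution: if *T* is the number of tries until success, then E[*T* − *k* | *T* > *k*] = E[*T*] = 1/*p* for every *k*, so having already made *k* failed tries does not reduce the expected remaining work, and discarding them loses nothing in expectation. A minimal sketch of that conditional expectation, with toy parameter values chosen for speed:

```python
import random

def mean_extra_given_k_failures(p, k, n, rng):
    """Monte Carlo estimate of E[T - k | T > k], where T ~ Geometric(p)
    counts Bernoulli(p) tries until the first success.

    Memorylessness predicts this equals E[T] = 1/p for every k: the k
    failed tries a pool throws away when a rival publishes a block were
    not 'progress', so restarting costs nothing in expectation."""
    samples = []
    while len(samples) < n:
        tries = 1
        while rng.random() >= p:
            tries += 1
        if tries > k:                 # condition on the first k tries failing
            samples.append(tries - k) # extra tries needed after those failures
    return sum(samples) / len(samples)

rng = random.Random(1)
p = 1 / 50    # toy per-try success probability, so 1/p = 50
means = {k: mean_extra_given_k_failures(p, k, 2_000, rng) for k in (0, 25, 100)}
# Each estimate should sit near 50, regardless of how many tries already failed.
print(means)
```

Whether zero, 25, or 100 tries have already failed, the expected number of remaining tries stays at 1/*p*, which is why "every try has an equal chance of success" answers the question.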

Do you not agree that every try has an equal chance of success? Or do you not understand the consequences of that? (Because that is the canonical short answer.) – David Schwartz – 2015-01-22T09:33:04.533

I did not understand that every try has (exactly) an equal chance of success. The discussion below and the question "How can we be sure that a new block will be found?" helped me to clarify that.

– czerny – 2015-01-22T09:40:05.180