
From a high-level point of view, the last few operations of a typical quantum program are measurements.

In most cases, extracting a useful answer requires running the program many times, so that the probability distribution of the output qubits can be estimated with some level of statistical significance.

Even ignoring noise, as the number of output qubits increases, the number of required runs increases too. In particular, for some output probability distributions the number of samples needed to reach sensible statistical power can be very high.
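
To make the scaling concrete, here is a back-of-the-envelope sketch (my own illustration, not from any particular paper; `shots_for_relative_error` is just a name I made up). Estimating an outcome probability $p$ from $N$ shots has standard error $\sqrt{p(1-p)/N}$, so pinning down a probability of order $1/2^n$ to a fixed relative error needs $N$ growing roughly like $2^n$:

```python
import math

def shots_for_relative_error(p, rel_err):
    # Standard error of a binomial estimate of p from N shots is
    # sqrt(p * (1 - p) / N). Solve for the smallest N such that
    # this standard error is at most rel_err * p.
    return math.ceil(p * (1 - p) / (rel_err * p) ** 2)

# An outcome with probability ~ 1/2^n needs roughly 2^n-scaled shot
# counts to be resolved to 10% relative error (illustrative only).
for n in (4, 8, 12):
    p = 1 / 2**n
    print(n, shots_for_relative_error(p, 0.1))
# → 4 1500
# → 8 25500
# → 12 409500
```

This is of course the naive "estimate the full histogram" cost; I understand that useful algorithms are designed so the answer is concentrated on few outcomes, but the general scaling above is what worries me.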

While increasing the number of qubits suggests an exponential growth in computational power, the number of measurements required (each run taking some fixed number of nanoseconds) seems to limit how well quantum computation will scale.

Is this argument sensible, or is there some critical concept that I am missing?

I guess this must be linked somehow to the Cramér–Rao bound. Do you know any "easy" reference/paper that I could go through? – Juan Leni – 2018-09-08T10:02:36.800