Is error correction necessary?



Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out. To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?


Posted 2018-03-14T01:32:52.570

Reputation: 3 075



That doesn't scale well. After a moderately long calculation you're basically left with the maximally mixed state or whatever fixed point your noise has. To scale to arbitrary long calculations you need to correct errors before they become too big.

Here's a short calculation to back up the intuition above. Consider the simple white-noise model (depolarizing noise), $$\rho'(\varepsilon)= (1-\varepsilon)\rho + \varepsilon \frac{\mathbb{I}}{\operatorname{tr} \mathbb{I}},$$ where $\rho$ is the ideal state (standard notation applies). If you concatenate $n$ such noisy processes, the new noise parameter is $\varepsilon'=1-(1-\varepsilon)^n$, which approaches $1$ exponentially fast in the number of gates (or other error sources). If you repeat the experiment $m$ times and assume that the standard error scales as $\frac{1}{\sqrt{m}}$, you see that the number of runs $m$ would have to grow exponentially in the length of your calculation!
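The composition rule $\varepsilon'=1-(1-\varepsilon)^n$ is easy to check numerically. A minimal sketch (the initial state, error rate and gate count here are arbitrary choices for illustration):

```python
import numpy as np

def depolarize(rho, eps):
    """One application of the depolarizing channel."""
    dim = rho.shape[0]
    return (1 - eps) * rho + eps * np.eye(dim) / dim

# Ideal pure state |0><0|
rho_ideal = np.array([[1.0, 0.0], [0.0, 0.0]])

eps, n = 0.05, 40
rho = rho_ideal.copy()
for _ in range(n):
    rho = depolarize(rho, eps)

# Effective noise parameter after n concatenations: 1 - (1 - eps)^n
eps_eff = 1 - (1 - eps) ** n
rho_pred = (1 - eps_eff) * rho_ideal + eps_eff * np.eye(2) / 2
assert np.allclose(rho, rho_pred)
print(eps_eff)  # ~0.871: after only 40 gates the state is mostly depolarized
```

Even a modest 5% error per gate leaves almost no signal after a few dozen gates, which is the "doesn't scale well" in action.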

M. Stern

Posted 2018-03-14T01:32:52.570

Reputation: 1 874


If the error rate were low enough, you could run a computation a hundred times and take the most common answer. For instance, this would work if the error rate were low enough that the expected number of errors per computation was something very small — which means that how well this strategy works would depend on how long and complicated a computation you would like to do.

Once the error rate or the length of your computation becomes sufficiently high, you can no longer have any confidence that the most likely outcome involved zero errors: at a certain point it becomes more likely that you have one, or two, or more errors than that you have zero. In this case, there is nothing to prevent the majority of runs from giving you an incorrect answer. What then?

These issues are not special to quantum computation: they also apply to classical computation. It just happens that almost all of our technology is at a sufficiently advanced state of maturity that these issues do not concern us in practice; there may be a greater chance of your computer being struck by a meteorite mid-computation (or of it running out of battery power, or of you deciding to switch it off) than of a hardware error occurring. What is (temporarily) special about quantum computation is that the technology is not yet mature enough for us to be so relaxed about the possibility of error.

When classical computation was at a stage where error correction was both practical and necessary, we were able to make use of certain mathematical techniques (error correction) which made it possible to suppress the effective error rate and, in principle, to make it as low as we liked. Surprisingly, the same techniques can be used for quantum error correction, with a little extension to accommodate the difference between quantum and classical information. Before the mid-1990s, it was thought that quantum error correction was impossible because of the continuity of the space of quantum states. But it turns out that, by applying classical error correction techniques in the right way to the different ways a qubit could be measured (usually described as "bit" and "phase"), you can in principle suppress many kinds of noise on quantum systems as well. These techniques are not special to qubits, either: the same idea can be used for quantum systems of any finite dimension (though for models such as adiabatic computation, it may then get in the way of actually performing the computation you wish to perform).

At the time I'm writing this, individual qubits are so difficult to build and to marshal that people are hoping to get away with doing proof-of-principle computations without any error correction at all. That's fine, but it will limit how long their computations can be before the number of accumulated errors is large enough that the computation stops being meaningful. There are two solutions: get better at suppressing noise, or apply error correction. Both are good ideas, but it is possible that error correction will be easier to perform in the medium and long term than suppressing sources of noise.

Niel de Beaudrap

Posted 2018-03-14T01:32:52.570

Reputation: 9 858

As a quick correction, modern hardware does suffer from non-negligible error rates, and error-correction methods are used. That said, of course your point about the problems being much worse on current quantum computers holds. – Nat – 2018-03-14T21:12:19.490

@Nat: interesting. I'm vaguely aware that this may currently be the case for GPUs, and (in a context not involving active computation) RAID arrays are an obvious example as well. But could you describe other hardware platforms for which classical computation must rely on error correction during a computation? – Niel de Beaudrap – 2018-03-14T23:52:25.043

Seems like errors are most frequent in networking contexts, followed by disk storage, followed by RAM. Networking protocols and disks routinely implement error-correction tricks. RAM's a mixed bag; server/workstation RAM tends to use error-correcting code (ECC), though consumer RAM often doesn't. Within CPUs, I'd imagine that they have more implementation-specific tactics, but those'd likely be manufacturer secrets. Error rates in CPUs and GPUs become relevant at an observable level in a few cases, e.g. in overclocking and manufacturer core-locking decisions. – Nat – 2018-03-15T00:02:02.183

Actually kinda curious about CPU-type error correction now... I mean, the cache would seem prone to the same issues that normal RAM is (unless somehow buffered with more power or something?), which'd presumably be unacceptable in server/workstation contexts. But at the register level? That'd be something neat to read about; didn't see anything immediately on Google, though I suppose such info'd likely be a trade secret. – Nat – 2018-03-15T00:08:44.610


Now, adding to M. Stern's answer:

The primary reason why error correction is needed for quantum computers is that qubits have a continuum of states (I'm considering only qubit-based quantum computers here, for simplicity).

In a quantum computer, unlike in a classical computer, each bit doesn't exist in only two possible states. For instance, a likely source of error is over-rotation: $\alpha|0\rangle+\beta|1\rangle$ might be supposed to become $\alpha|0\rangle + \beta e^{i\phi}|1\rangle$ but actually becomes $\alpha|0\rangle+\beta e^{i(\phi+\delta)}|1\rangle$. The actual state is close to the correct state but still wrong. If we don't do something about this, the small errors will build up over time and eventually become a big error.
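A rough numerical illustration of how a small, systematic over-rotation compounds (the gate angle, error size and repetition count are arbitrary choices for this sketch):

```python
import numpy as np

def phase_gate(phi):
    """Diagonal phase gate: |1> picks up the phase e^{i*phi}."""
    return np.diag([1.0, np.exp(1j * phi)])

# Intended: apply a phi = pi/8 phase gate N times.
# Actual hardware over-rotates by a tiny delta each time.
phi, delta, N = np.pi / 8, 0.01, 200

psi0 = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
ideal = np.linalg.matrix_power(phase_gate(phi), N) @ psi0
actual = np.linalg.matrix_power(phase_gate(phi + delta), N) @ psi0

fidelity = abs(np.vdot(ideal, actual)) ** 2
# Accumulated phase error is N*delta = 2 rad; fidelity = cos^2(N*delta/2)
assert np.isclose(fidelity, np.cos(N * delta / 2) ** 2)
print(fidelity)  # ~0.29: individually tiny errors have become a big one
```

An error of 0.01 radians per gate is invisible in any single step, yet after 200 gates the state is closer to wrong than right.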

Moreover, quantum states are very delicate, and any interaction with the environment can cause decoherence and collapse of a state like $\alpha|0\rangle+\beta|1\rangle$ to $|0\rangle$ with probability $|\alpha|^2$ or $|1\rangle$ with probability $|\beta|^2$.

In a classical computer, suppose a bit's value is replicated $n$ times as follows:

$$0 \to 00000...\text{n times}$$ and $$1 \to 11111...\text{n times}$$

If, after this step, something like $0001000100$ is produced, the classical computer can correct it to $0000000000$, because the majority of the bits are $0$s and the intended result of the initial operation was most probably the $0$-bit replicated $10$ times.
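For concreteness, the classical majority-vote repair can be sketched in a few lines (a toy illustration, not production error correction):

```python
from collections import Counter

def encode(bit, n=10):
    """Repetition code: copy the bit n times."""
    return [bit] * n

def decode(received):
    """Majority vote: trust the value seen most often."""
    return Counter(received).most_common(1)[0][0]

sent = encode(0)           # 0 -> 0000000000
corrupted = sent.copy()
corrupted[3] = 1           # noise flips two of the copies
corrupted[7] = 1
assert decode(corrupted) == 0   # the majority vote recovers the bit
```

This works because every copy is, by assumption, in one of exactly two states, so "which value occurs most often" is a well-posed question.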

But for qubits, such an error correction method won't work. First, duplicating qubits directly is not possible, due to the no-cloning theorem. Second, even if you could replicate $|\psi\rangle = \alpha |0\rangle +\beta |1\rangle$ ten times, you'd very probably end up with something like $(\alpha|0\rangle + \beta |1\rangle)\otimes (\alpha e^{i\epsilon}|0\rangle + \beta e^{i\epsilon'}|1\rangle)\otimes (\alpha e^{i\epsilon_2}|0\rangle + \beta e^{i\epsilon_2'}|1\rangle)\otimes ...$ i.e. with errors in the phases, where all the qubits end up in different states (due to the errors). That is, the situation is no longer binary. A quantum computer, unlike a classical computer, can no longer say "since the majority of the bits are in the $0$-state, let me convert the rest to $0$!" to correct an error which occurred during the operation, because all $10$ states of the $10$ different qubits might differ from each other after the so-called "replication" operation. The number of such possible errors keeps increasing rapidly as more and more operations are performed on a system of qubits. M. Stern indeed used the right terminology in their answer to your question: "that doesn't scale well".

So you need a different breed of error-correcting techniques to deal with errors occurring during the operation of a quantum computer: ones which can deal not only with bit-flip errors but also with phase-shift errors, and which are resistant to unintentional decoherence. One thing to keep in mind is that most quantum gates will not be "perfect", even though with the right set of "universal quantum gates" you can get arbitrarily close to building any quantum gate which (in theory) performs a unitary transformation.

Niel de Beaudrap mentions that there are clever ways to apply classical error correction techniques in ways such that they can correct many of the errors which occur during quantum operations, which is indeed correct, and is exactly what current day quantum error correcting codes do. I'd like to add the following from Wikipedia, as it might give some clarity about how quantum error correcting codes deal with the problem described above:

Classical error correcting codes use a syndrome measurement to diagnose which error corrupts an encoded state. We then reverse an error by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements. We perform a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. A syndrome measurement can determine whether a qubit has been corrupted, and if so, which one. What is more, the outcome of this operation (the syndrome) tells us not only which physical qubit was affected, but also, in which of several possible ways it was affected. The latter is counter-intuitive at first sight: Since noise is arbitrary, how can the effect of noise be one of only few distinct possibilities? In most codes, the effect is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices X, Z, and Y). The reason is that the measurement of the syndrome has the projective effect of a quantum measurement. So even if the error due to the noise was arbitrary, it can be expressed as a superposition of basis operations—the error basis (which is here given by the Pauli matrices and the identity). The syndrome measurement "forces" the qubit to "decide" for a certain specific "Pauli error" to "have happened", and the syndrome tells us which, so that we can let the same Pauli operator act again on the corrupted qubit to revert the effect of the error.

The syndrome measurement tells us as much as possible about the error that has happened, but nothing at all about the value that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer.
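The quoted idea can be made concrete with a toy simulation of the three-qubit bit-flip code, the simplest quantum code: two parity checks ($Z_0Z_1$ and $Z_1Z_2$) locate a single bit flip without ever revealing the encoded amplitudes. This is my own simplified sketch; it handles bit flips only, not phase errors, which a real code such as Shor's nine-qubit code also covers.

```python
import numpy as np

# Toy three-qubit bit-flip code: a|0> + b|1> is encoded as a|000> + b|111>.
# Convention: qubit 0 is the most significant bit of the basis-state index.
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0b000], encoded[0b111] = a, b

def apply_x(state, qubit):
    """Pauli X (bit flip) on one qubit of a 3-qubit state vector."""
    out = np.zeros_like(state)
    mask = 1 << (2 - qubit)
    for i in range(8):
        out[i ^ mask] = state[i]
    return out

def syndrome(state):
    """Parity checks Z0Z1 and Z1Z2. For a single bit flip, both basis
    states in the superposition share the same parities, so the checks
    locate the flipped qubit without measuring (and hence destroying)
    the encoded amplitudes a and b."""
    support = [i for i in range(8) if abs(state[i]) > 1e-12]
    bits = [(support[0] >> k) & 1 for k in (2, 1, 0)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

corrupted = apply_x(encoded, 1)      # noise flips the middle qubit
s = syndrome(corrupted)
assert s == (1, 1)                   # syndrome (1,1) points at qubit 1
recovered = apply_x(corrupted, 1)    # applying X again undoes the error
assert np.allclose(recovered, encoded)
```

Each of the three possible single-qubit flips produces a distinct syndrome, and the unflipped state produces $(0,0)$, so the correction is unambiguous while the values of $a$ and $b$ stay untouched.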

Note: I haven't given any example of actual quantum error correcting techniques. There are plenty of good textbooks out there which discuss this topic. However, I hope this answer will give the readers a basic idea of why we need error correcting codes in quantum computation.

Recommended Further Readings:

Recommended Video Lecture:

Mini Crash Course: Quantum Error Correction by Ben Reichardt, University of Southern California

Sanchayan Dutta

Posted 2018-03-14T01:32:52.570

Reputation: 14 463

I'm not sure the fact that there is a continuum of states plays any role. Classical computation with bits would also have the same problems if our technology were less mature, and indeed it did suffer meaningfully from noise at various times in its development. In both the classical and quantum case, noise doesn't conveniently average away under normal circumstances. – Niel de Beaudrap – 2018-03-14T09:49:26.307

@NieldeBeaudrap It does play a big role. In classical computation you know beforehand that you have to deal with only two states. Consider an example: in classical computation, if a signal of $5$ mV represents "high" (or the $1$-state) while $0$ mV represents "low" (or the $0$-state), and your operation ends up with something like $0.5$ mV, it would automatically be rounded off to $0$ mV because it is much closer to $0$ mV than to $5$ mV. But in the case of qubits there are an infinite number of possible states, and such rounding off doesn't work. – Sanchayan Dutta – 2018-03-14T09:55:27.327

Of course you're not wrong when you say that even classical computation suffered from the problem of noise. There's a well-established theory of classical error-correcting codes too! However, the situation is much more dire in the case of quantum computation, due to the infinite number of possible states of a single qubit. – Sanchayan Dutta – 2018-03-14T09:58:39.683

The techniques used for quantum error correction do not involve the fact that the state-space is infinite in any way. The arguments you are making seem to draw an analogy between quantum computing and analog computing; while there is a similarity, if it were a sound analogy it would imply that quantum error correction is impossible. In contrast, the state-space of many qubits is also like a probability distribution on bit-strings, of which there is also a continuum; and yet just doing error correction on definite bit-strings suffices to suppress error. – Niel de Beaudrap – 2018-03-14T10:08:16.010

@glS I have removed the first sentence. You're right, I was interpreting computation in an unrelated way. – Sanchayan Dutta – 2018-03-15T12:53:32.850


noise should average itself out.

Noise doesn't perfectly average itself out. That's the Gambler's Fallacy. Even though noise tends to meander back and forth, it still accumulates over time.

For example, if you generate N fair coin flips and sum them up, the expected magnitude of the difference from exactly $N/2$ heads grows like $O(\sqrt N)$. That's quadratically better than the $O(N)$ you expect from a biased coin, but certainly not 0.
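The $O(\sqrt N)$ growth is easy to see empirically. A small simulation (the flip counts and trial count are arbitrary choices):

```python
import random

def deviation(n_flips, trials=2000, seed=0):
    """Average |heads - n/2| over many experiments of n fair coin flips."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        heads = sum(rng.randint(0, 1) for _ in range(n_flips))
        total += abs(heads - n_flips / 2)
    return total / trials

# The typical deviation grows like sqrt(N): quadrupling the number of
# flips roughly doubles it, so the noise accumulates rather than cancels.
for n in (100, 400, 1600):
    print(n, deviation(n))
```

The deviations come out near $0.4\sqrt N$ (about 4, 8, and 16 here): better than linear growth, but nowhere near zero.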

Even worse, in the context of a computation over many qubits the noise doesn't cancel itself nearly as well, because the noise is no longer along a single dimension. In a quantum computer with $Q$ qubits and single-qubit noise, there are $2Q$ dimensions at any given time for the noise to act on (one for each X/Z axis of each qubit). And as you compute with the qubits, these dimensions change to correspond to different subspaces of a $2^Q$ dimensional space. This makes it unlikely for later noise to undo earlier noise, and as a result you're back to $O(N)$ accumulation of noise.
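Even in the most favorable case, noise along a single axis, zero-mean errors still accumulate. The sketch below is my own simplified single-qubit illustration (Gaussian phase kicks of an arbitrary size, 0.02 rad per step): although the kicks average to zero, the infidelity with the ideal state grows roughly linearly in the number of steps.

```python
import numpy as np

rng = np.random.default_rng(1)
psi_ideal = np.array([1.0, 1.0]) / np.sqrt(2)   # the noiseless target state

def mean_infidelity(n_steps, delta=0.02, trials=500):
    """Average infidelity after n_steps zero-mean random phase kicks."""
    total = 0.0
    for _ in range(trials):
        phase = rng.normal(0.0, delta, size=n_steps).sum()  # random walk
        psi = np.array([1.0, np.exp(1j * phase)]) / np.sqrt(2)
        total += 1 - abs(np.vdot(psi_ideal, psi)) ** 2
    return total / trials

# The kicks average to zero, yet the infidelity grows ~ linearly in the
# number of steps: the *phase* random-walks like sqrt(n), and the
# infidelity goes as the square of the phase error.
for n in (100, 200, 400):
    print(n, mean_infidelity(n))
```

With many qubits and changing computational subspaces the cancellation is even weaker, which is the point made above.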

run the operations, say, a hundred times, and pick the average/most common answer?

As computations get larger and longer, the chance of seeing no noise, or of the noise perfectly cancelling out, rapidly becomes so close to 0% that you can't expect to see the correct answer even once, even if you repeated the computation a trillion times.
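A back-of-the-envelope calculation makes this concrete. Assuming independent errors with probability $p$ per gate, the chance of a completely clean $n$-gate run is $(1-p)^n$ (the rates and gate counts below are illustrative choices):

```python
import math

def p_zero_errors(p, n):
    """(1 - p)^n: chance that an n-gate run sees no error at all."""
    return math.exp(n * math.log1p(-p))

p = 1e-3   # an optimistic 0.1% error probability per gate
print(p_zero_errors(p, 100))      # ~0.90: simple repetition still works
print(p_zero_errors(p, 10_000))   # ~4.5e-5: ~20,000 runs per clean answer
# For n = 1,000,000 gates the probability is around 10^-435, far beyond
# what a trillion (10^12) repetitions could ever compensate for:
print(1_000_000 * math.log10(1 - p))  # log10 of the probability, ~ -434.5
```

The crossover from "repetition works" to "repetition is hopeless" is sharp, which is why error correction, not repetition, is the scalable strategy.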

Craig Gidney

Posted 2018-03-14T01:32:52.570

Reputation: 11 207


Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out.

If you built a house or a road, and noise were a variance, a deviation from straightness or from direction, the question would not solely be "how would it look?" but "how would it be?": a superposition of both efficiency and correctness.

If two people calculated the circumference of a golf ball given its diameter, each would get a similar answer, subject to the accuracy of their calculations; if each used several decimal places, it would be 'good enough'.

If two people were provided with identical equipment and ingredients, and given the same recipe for a cake, should we expect identical results?

To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?

That's like spoiling the weighing by tapping your finger on the scale: repeating it won't remove the bias.

If you're at a loud concert and try to communicate with the person next to you, do they understand you the first time, every time?

If you tell a story or spread a rumor, (and some people communicate verbatim, some add their own spin, and others forget parts), when it gets back to you does it average itself out and become essentially (but not identically) the same thing you said? - unlikely.

It's like crinkling up a piece of paper and then flattening it out.

All those analogies were intended to offer simplicity over exactness, you can reread them a few times, average it out, and have the exact answer, or not. ;)

A more technical explanation of why quantum error correction is difficult but necessary can be found on Wikipedia's page "Quantum error correction":

"Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.".

"Classical error correction employs redundancy. " ...

"Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.".


Posted 2018-03-14T01:32:52.570

Reputation: 2 100