How to limit the error probability in large-scale quantum computers

3

I am quite stumped by the fact that the roadmaps for quantum computers as given by IBM, Google, IonQ, etc. seem to imply steady, even exponential, growth in the number of qubits in their machines.

Naively, I would assume that bigger systems are much harder to engineer, especially if one wants to entangle all of their qubits, because only the undisturbed outcome is correct. Given a fidelity where $P[0_i] = x$ means that no error occurs on qubit $i$ with probability $x$, it would seem that an entangled system of size $n$ survives error-free only with probability $P[0_1, ..., 0_n] = x^n$. This success probability decays exponentially in $n$ whenever $x < 1$, which is of course true for all practical systems.
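To make this concrete, here is a minimal numeric sketch of that $x^n$ decay; the per-qubit fidelities below are illustrative assumptions, not figures from any vendor's roadmap.

```python
# Minimal sketch: probability that n qubits all remain error-free
# when each qubit independently avoids error with probability x.
# The fidelity values below are illustrative assumptions only.
def survival_probability(x: float, n: int) -> float:
    """Probability of an error-free run, assuming independent errors."""
    return x ** n

for x in (0.99, 0.999, 0.9999):
    for n in (10, 100, 1000):
        print(f"x = {x}, n = {n:5d}: P[no error] = {survival_probability(x, n):.4f}")
```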

ECCs mitigate this to some extent, but (as far as I can tell) only by a polynomial factor, and they require even more qubits. It looks like building larger systems of the same fidelity becomes exponentially harder, and simply scaling an existing design will result in a system with exponentially lower fidelity. Most interesting algorithmic building blocks, like the QFT, require entanglement between all of the involved qubits. More qubits without the ability to produce larger entangled states seem to offer limited advantage, because that would be practically equivalent to using two computers or executing circuits in sequence.

Optimism about improvements in basic gate fidelity and error correction aside, is there anything I am missing about errors in entangled states that would justify these roadmaps? To state this more precisely: is there any theoretical effect or algorithm that can be exploited so that the error probability in entangled systems grows only polynomially, or at least subexponentially, in the size of the system?

midor

Posted 2021-02-14T18:10:14.250

Reputation: 199

Answers

6

It is true that fidelity decays exponentially in the course of a quantum computation. This is indeed a major limitation of NISQ computers, one that imposes a stringent "depth budget". In order to overcome the decay, we need gates with fidelity so close to one that the decay is negligible over the course of the quantum algorithms we intend to run. As you anticipated in your question, this is achieved using quantum error correction.

However, quantum error correction gives rise to exponential suppression of errors, not polynomial. Moreover, the asymptotic resource cost of achieving this is only polylogarithmic in the size of the computation and the desired accuracy. This enables the creation of logical gates with arbitrarily high fidelity and thus allows us to overcome the exponential decay of fidelity. In practice, a key challenge in building a quantum computer is that while the asymptotic overhead of quantum error correction is modest, the constant factors are fairly high.

Quantitatively, for a given quantum error correcting code there exist constants $c_1$ and $c_2$ and a threshold error rate $\epsilon_T$ such that the logical error rate $\epsilon_L$ is

$$ \epsilon_L \approx c_1 \left(c_2\frac{\epsilon_P}{\epsilon_T}\right)^{\left\lfloor\frac{d+1}{2}\right\rfloor}\tag1 $$

where $\epsilon_P$ is the physical error rate and $d$ is the code distance (cf. equation $(2)$ on p.11 of this paper). Importantly, the resources required to build and operate a quantum computer using a quantum error correcting code of distance $d$ are polynomial in $d$. For example, in the case of the Surface Code, the number of physical qubits required to encode one logical qubit scales as $O(d^2)$. Thus, theoretically, if the physical error rate $\epsilon_P$ can be brought below the threshold, then arbitrarily high logical fidelity should be achievable.
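To illustrate, here is a small numeric sketch of equation $(1)$. The constants $c_1 = c_2 = 1$ and the threshold $\epsilon_T = 10^{-2}$ are illustrative assumptions, not figures for any real code or device; the point is only the exponential suppression as the code distance grows.

```python
# Sketch of equation (1): logical error rate of a distance-d code.
# The constants c1, c2 and the threshold below are illustrative assumptions.
def logical_error_rate(eps_p: float, d: int,
                       eps_t: float = 1e-2, c1: float = 1.0, c2: float = 1.0) -> float:
    return c1 * (c2 * eps_p / eps_t) ** ((d + 1) // 2)

eps_p = 1e-3  # assumed physical error rate, a factor of 10 below threshold
for d in (3, 5, 7, 9, 11):
    n_phys = d ** 2  # surface-code-style overhead: O(d^2) physical qubits per logical qubit
    print(f"d = {d:2d}: eps_L ~ {logical_error_rate(eps_p, d):.1e}, "
          f"physical qubits per logical qubit ~ {n_phys}")
```

Each increase of the distance by two buys another factor of $c_2\epsilon_P/\epsilon_T$ in the logical error rate, while the qubit cost grows only quadratically.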

For a concrete example, suppose we have a computer whose physical error rate is below threshold, $c_2\epsilon_P < \epsilon_T$, and we wish to run an algorithm that consists of $k$ gates on $q$ logical qubits while accepting an overall failure probability no greater than $p_f>0$. In order to meet these requirements, we need $\epsilon_L$ such that

$$ kq\epsilon_L \approx 1 - (1 - \epsilon_L)^{kq} < p_f. $$

Substituting from $(1)$ and taking logarithms, the requirement becomes

$$ \log (kqc_1) + \left\lfloor\frac{d+1}{2}\right\rfloor \log \left(c_2\frac{\epsilon_P}{\epsilon_T}\right) \lt \log p_f. $$

Since the physical error rate is below threshold, $\log \left(c_2\frac{\epsilon_P}{\epsilon_T}\right) \lt 0$, so dividing by it flips the inequality:

$$ \left\lfloor\frac{d+1}{2}\right\rfloor \gt \frac{\log \frac{kqc_1}{p_f}}{\log \frac{\epsilon_T}{c_2\epsilon_P}}. $$

Hence a code distance of

$$ d \approx \frac{2\log \frac{kqc_1}{p_f}}{\log \frac{\epsilon_T}{c_2\epsilon_P}} = O\left(\log \frac{kq}{p_f}\right) $$

suffices.

We see that the asymptotic overhead of quantum error correction is very modest. For example, using the Surface Code the number of physical qubits needed to run an algorithm with the above parameters is $O(q\log^2 \frac{kq}{p_f})$.
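Continuing with the same illustrative constants as above, here is a short sketch of this bound for a hypothetical workload; the values of $k$, $q$ and $p_f$ are assumptions chosen for illustration, not a statement about any particular algorithm or device.

```python
import math

# Sketch: code distance and qubit overhead implied by the bound above.
# All constants and workload parameters are illustrative assumptions.
c1, c2, eps_t = 1.0, 1.0, 1e-2   # assumed code constants and threshold
eps_p = 1e-3                     # assumed physical error rate (below threshold)
k, q, p_f = 10**9, 100, 0.01     # hypothetical gate count, logical qubits, failure budget

# Upper bound on the necessary code distance, d = O(log(kq/p_f)).
d = math.ceil(2 * math.log(k * q * c1 / p_f) / math.log(eps_t / (c2 * eps_p)))
if d % 2 == 0:
    d += 1  # surface-code distances are conventionally odd

logical_error = c1 * (c2 * eps_p / eps_t) ** ((d + 1) // 2)
physical_qubits = q * d ** 2     # O(d^2) physical qubits per logical qubit

print(f"code distance d ~ {d}")
print(f"logical error rate ~ {logical_error:.1e} (per-location budget: {p_f / (k * q):.1e})")
print(f"total physical qubits ~ {physical_qubits}")
```

With these made-up numbers, a distance of a few tens and fewer than $10^5$ physical qubits would cover a billion-gate computation, consistent with the $O(q\log^2 \frac{kq}{p_f})$ scaling quoted above.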

For an informative discussion of exponential suppression of errors in concatenated codes see section 10.6.1 in Nielsen & Chuang, especially the text around equation $(10.113)$ on page 481. For a recent experimental demonstration of exponential suppression of errors, see this paper (disclosure: I'm a co-author).

Adam Zalcman

Posted 2021-02-14T18:10:14.250

Reputation: 4 307

How is it possible to have so many authors? How did you all collaborate? (Sorry, I am not casting a vote on this answer because my poor grasp of quantum computing doesn't allow me to understand it.) – user27286 – 2021-02-14T19:58:23.253

Well, a real quantum computer is a very complex system, so I suppose it's all out of necessity. That said, note that this isn't even close to the scope of collaboration required in certain other physics experiments; see for example this paper. The way it happens in practice is, I suppose, the same as anywhere else: via lots of meetings, emails, talks, delegation, task-group forming, code reviews, document writing, etc.

– Adam Zalcman – 2021-02-14T20:13:06.363

I see. I am new to all of this, so please don't mind if I said something offensive. The paper you mentioned has two groups of people; compared to that, this is fewer. I found it interesting, which is why I asked. – user27286 – 2021-02-14T20:19:22.413

I will have to re-read that section of Nielsen & Chuang to fully understand this, because I don't yet see the connection between the threshold theorem and the increasing number of entangled qubits. It is clear that a fault-tolerant code of arbitrary fidelity can be constructed if a basic level of fidelity can be achieved, but it isn't clear to me how this limits the impact of more qubits adding more chances of interaction with the environment. My question is not about gate fidelity, but about how many qubits have to remain free of error if the entire system is entangled. Or are these equivalent questions? – midor – 2021-02-15T23:05:04.340

Regarding limiting the impact of more qubits increasing the chances of interaction with the environment: this works the same way as in classical ECCs. On one hand, as we increase the number of physical qubits per logical qubit, our scheme benefits from increasing redundancy. On the other hand, the error rate per qubit remains constant. In the classical case, suppose you have noisy bits that spontaneously flip with probability $p$. Encoding one logical bit as a string of $k$ noisy physical bits (known as the repetition code) and decoding by majority vote reduces the error probability to $\approx p^{\lfloor\frac{k+1}{2}\rfloor}$ (up to a combinatorial factor, for odd $k$). – Adam Zalcman – 2021-02-15T23:22:26.820
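A quick Monte Carlo sketch of that repetition-code scaling, with arbitrary illustrative values of $p$ and $k$:

```python
import random

# Sketch: majority-vote decoding of a classical k-bit repetition code.
# The flip probability p and the values of k are illustrative assumptions.
def repetition_failure_rate(p: float, k: int, trials: int = 200_000) -> float:
    """Fraction of trials in which more than half of the k copies flip."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(k))
        if flips > k // 2:  # majority vote decodes incorrectly
            failures += 1
    return failures / trials

p = 0.05
for k in (1, 3, 5, 7):
    print(f"k = {k}: empirical failure rate ~ {repetition_failure_rate(p, k):.2e}, "
          f"p^((k+1)//2) = {p ** ((k + 1) // 2):.2e}")
```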

Regarding the number of qubits that must remain error-free: we assume that noise affects all qubits in the system, as in the classical case. Naturally, not all noisy qubits fail at once. The maximum number of errors that can be corrected depends on the specific code; for example, in the case of the Surface Code with code distance $d$, it is $\lfloor\frac{d-1}{2}\rfloor$. – Adam Zalcman – 2021-02-15T23:26:11.250

Thanks for the detailed explanation! I think I see where I went wrong now: even though an error on a single qubit possibly flips all bits of an entangled state, whereas in a classical system it flips only one, it is still just a single error to correct. Since I can't know what the significance of this bit is, I also can't make any assumptions about either of the systems being "more correct" after the error. So although an error in an entangled system may flip all the bits, it is no worse than any other single-qubit error, and no harder to correct. Is that the right way to think about this? – midor – 2021-02-16T23:05:11.387

Your conclusion that a single-qubit error is no harder to correct on an entangled state than on a product state is valid. However, it is not true that an error on a single qubit flips all bits if the state is entangled. Such an error affects only one qubit of any state, just as in the classical case. For example, a bit flip on the first qubit of the GHZ state $|000\rangle+|111\rangle$ results in $|100\rangle + |011\rangle$. As in the classical case, we see that the first qubit differs from the other two, and by majority voting we flip the first qubit to restore the original state. – Adam Zalcman – 2021-02-16T23:19:44.717
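A minimal statevector sketch (plain numpy, illustration only) of that GHZ example, showing that the flip leaves the first qubit as the minority in both branches:

```python
import numpy as np

# Sketch: a bit flip on one qubit of an entangled (GHZ) state affects only that qubit.
# Plain statevector illustration; qubit 0 is the leftmost bit of the basis labels.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)     # (|000> + |111>)/sqrt(2)

flipped = np.kron(X, np.kron(I2, I2)) @ ghz  # bit flip on the first qubit

for idx in np.flatnonzero(np.abs(flipped) > 1e-12):
    print(f"|{idx:03b}> amplitude {flipped[idx]:+.3f}")
# Prints |011> and |100>: in each branch only the first bit disagrees with the
# other two, so majority voting identifies and undoes the flip.
```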

In fact, entanglement makes things easier for error correction by enabling us to measure observables such as the parity of all qubits without measuring individual qubits. This is important because we want to avoid collapsing the state of individual qubits. – Adam Zalcman – 2021-02-16T23:23:02.863

Are you referring to syndrome measurements, or is this another mechanism? – midor – 2021-02-16T23:39:44.267

Yes, I'm referring to syndrome measurements. – Adam Zalcman – 2021-02-16T23:53:48.303
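To close the loop, here is a minimal numpy sketch (all names and parameters are mine, for illustration only) of such a syndrome measurement: two CNOTs copy the $Z\otimes Z$ parity of two data qubits onto an ancilla, so a bit flip is flagged by the ancilla without collapsing the encoded superposition.

```python
import numpy as np

# Sketch: a Z⊗Z parity (syndrome) measurement via an ancilla, illustration only.
# Data qubits q0, q1 hold a|00> + b|11>; the ancilla q2 starts in |0>.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def cnot(n, c, t):
    """CNOT on n qubits (qubit 0 = most significant bit of the basis index)."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        j = i ^ (1 << (n - 1 - t)) if (i >> (n - 1 - c)) & 1 else i
        U[j, i] = 1
    return U

a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b110] = a, b          # (a|00> + b|11>) ⊗ |0>_ancilla

parity = cnot(3, 1, 2) @ cnot(3, 0, 2)     # copy the parity of q0, q1 onto the ancilla

for label, psi in [("no error", state),
                   ("X error on q0", np.kron(X, np.kron(I2, I2)) @ state)]:
    out = parity @ psi
    p_anc_1 = sum(abs(out[i])**2 for i in range(8) if i & 1)   # ancilla is the last bit
    print(f"{label}: P(ancilla reads 1) = {p_anc_1:.1f}")
```

In the error-free case the ancilla always reads 0 and the data state is untouched; with a bit flip on one data qubit the ancilla reads 1 with certainty, yet the amplitudes $a$ and $b$ are never measured.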