
I am quite stumped by the fact that the roadmaps for quantum computers as given by IBM, Google, IonQ, etc. seem to imply a linear/exponential growth in the size of their quantum computers.

Naively, I would assume that bigger systems are much harder to engineer, especially if one wants to entangle all of the qubits, because only the undisturbed outcome is correct. Given a fidelity where $P[0_i] = x$ means no error occurs on qubit $i$ with probability $x$, it would seem to me that an entangled system of size $n$ would have probability $P[0_1, ..., 0_n] = x^n$ of remaining error-free. The probability of error thus grows exponentially toward 1 whenever $x < 1.0$, which is of course true for all practical systems.
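As a quick numerical sanity check of this scaling argument, here is a toy Python sketch (the helper name `no_error_probability` is just illustrative):

```python
def no_error_probability(x: float, n: int) -> float:
    """Probability that none of n qubits suffers an error,
    assuming independent per-qubit no-error probability x."""
    return x ** n

# Even with 99.9% per-qubit fidelity, 1000 entangled qubits all
# stay error-free only about 37% of the time.
p = no_error_probability(0.999, 1000)
```

This is exactly the $x^n$ decay described above: improving $x$ helps, but for fixed $x < 1$ the survival probability still falls off exponentially in $n$.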

Error-correcting codes (ECCs) mitigate this to some extent, but only by a polynomial factor, and they require even more qubits. It looks like building larger systems of the same fidelity becomes exponentially harder, and simply scaling an existing design will result in a system with exponentially lower fidelity. Most interesting algorithmic building blocks, like the QFT, require entanglement between all of the involved qubits. More qubits without the ability to produce larger entangled states seem to offer limited advantage, because that would be practically equivalent to using two computers or executing in sequence.

Optimism about improvements in basic gate fidelity and error correction aside, is there anything I am missing about errors in entangled states that gives rise to these predictions? To state this more precisely, is there any theoretical effect or algorithm that can be exploited to limit the error probability in entangled systems to be polynomial or at least subexponential in the size of the system?

How is it possible to have so many authors? How did you all collaborate? (Sorry, I am not casting any vote on this answer because my poor grasp of quantum computing doesn't allow me to understand it.) – user27286 – 2021-02-14T19:58:23.253

Well, a real quantum computer is a very complex system, so I suppose it's all out of necessity. That said, note that this isn't even close to the scope of collaboration required in certain other physics experiments, see for example this paper. The way it happens in practice is I suppose as anywhere else: via lots of meetings, emails, talks, delegation, task-group-forming, code reviews, document writing etc.

– Adam Zalcman – 2021-02-14T20:13:06.363

I see. I am new to all these things, so please don't mind if I said something offensive. The paper you mentioned has two groups of people; compared to that, this is fewer. I found it interesting, which is why I mentioned it. – user27286 – 2021-02-14T20:19:22.413

I will have to re-read that section of Nielsen & Chuang to fully understand this, because I don't yet see the connection between the threshold theorem and the increasing number of entangled qubits. It is clear that a fault-tolerant code of arbitrary fidelity can be constructed if a basic level of fidelity can be achieved, but it isn't clear how this would limit the impact of more qubits adding more chances of interaction with the environment. My question is not about gate fidelity, but about how many qubits have to remain free of error if the entire system is entangled. Or are these equivalent questions? – midor – 2021-02-15T23:05:04.340

Regarding limiting the impact of more qubits increasing the chances of interaction with the environment: this works the same way as in classical ECCs. On one hand, as we increase the number of physical qubits per logical qubit, our scheme benefits from increasing redundancy. On the other hand, the error rate per qubit remains constant. In the classical case, suppose you have noisy bits that spontaneously flip with probability $p$. Encoding one logical bit as a series of $k$ noisy physical bits (known as the repetition code) reduces the error probability to $\approx p^{\lfloor\frac{k}{2}\rfloor}$. – Adam Zalcman – 2021-02-15T23:22:26.820

Regarding the number of qubits that must remain error-free: we assume that noise affects all qubits in the system, as in the classical case. Naturally, not all noisy qubits fail all at once. The maximum number of errors that can be corrected depends on the specific code. For example, in the case of the Surface Code it is $\lfloor\frac{d}{2}\rfloor$ where $d$ is the code distance. – Adam Zalcman – 2021-02-15T23:26:11.250
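The repetition-code figure mentioned above can be checked exactly with a few lines of plain Python (a sketch; `repetition_code_failure` is an illustrative name). Majority voting over $k$ copies fails only when more than $\lfloor k/2 \rfloor$ of the bits flip:

```python
from math import comb

def repetition_code_failure(p: float, k: int) -> float:
    """Probability that majority voting over k noisy copies decodes
    wrongly, i.e. more than floor(k/2) of the k bits flip (k odd)."""
    t = k // 2  # number of correctable errors
    return sum(comb(k, j) * p ** j * (1 - p) ** (k - j)
               for j in range(t + 1, k + 1))

# With p = 0.01 and k = 3, the logical error probability drops from
# 1e-2 to 2.98e-4; larger k suppresses it further.
fail3 = repetition_code_failure(0.01, 3)
fail5 = repetition_code_failure(0.01, 5)
```

This illustrates the trade-off in the comment: redundancy suppresses the logical error rate even though each physical bit stays exactly as noisy as before.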

Thanks for the detailed explanation! I think I see where I went wrong now: even though an error on a single qubit possibly flips all bits of an entangled state, whereas in a classical system it flips only one, it is still just a single error to correct. Since I can't know the significance of this bit, I also can't make any assumptions about either of the systems being "more correct" after the error. So although an error in an entangled system may flip all the bits, it is no worse than any other single-qubit error, and no harder to correct. Is that the right way to think about this? – midor – 2021-02-16T23:05:11.387

Your conclusion that a single-qubit error is no harder to correct on an entangled state than on a product state is valid. However, it is not true that an error on a single qubit flips all bits if the state is entangled. Such an error on any state only affects one qubit, just like in the classical case. For example, a bit flip on the first qubit of the GHZ state $|000\rangle+|111\rangle$ results in $|100\rangle + |011\rangle$. As in the classical case, we see that the first qubit is different from the other two, and by majority voting we flip the first qubit to restore the original state. – Adam Zalcman – 2021-02-16T23:19:44.717
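The GHZ example above can be sketched as a toy simulation in plain Python (no quantum library; states are dicts mapping basis strings to amplitudes, and the function names are illustrative):

```python
from collections import Counter

# 3-qubit GHZ state (|000> + |111>) / sqrt(2), as {basis string: amplitude}.
ghz = {"000": 2 ** -0.5, "111": 2 ** -0.5}

def bit_flip(state, i):
    """Apply an X (bit-flip) error to qubit i of every basis branch."""
    out = {}
    for bits, amp in state.items():
        b = list(bits)
        b[i] = "1" if b[i] == "0" else "0"
        out["".join(b)] = amp
    return out

def majority_correct(state):
    """Decode each basis branch by setting every bit to its majority value."""
    out = {}
    for bits, amp in state.items():
        majority = Counter(bits).most_common(1)[0][0]
        out[majority * len(bits)] = amp
    return out

noisy = bit_flip(ghz, 0)            # branches "100" and "011"
restored = majority_correct(noisy)  # branches "000" and "111" again
```

Note that the flip changes exactly one position in each basis branch, and majority voting restores both branches at once, matching the description above.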

In fact, entanglement makes things easier for error correction by enabling us to measure observables such as the parity of all qubits without measuring individual qubits. This is important because we want to avoid collapsing the state of individual qubits. – Adam Zalcman – 2021-02-16T23:23:02.863
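To illustrate this point, here is a toy sketch in plain Python (in a real device the parity would be extracted via an ancilla qubit; this just inspects basis branches). When every branch of the superposition shares the same pairwise parities, the parity outcome is deterministic, so measuring it reveals nothing about individual qubits:

```python
def parity(bits: str, i: int, j: int) -> int:
    """Z_i Z_j parity of one basis string: 0 if bits agree, 1 if they differ."""
    return (int(bits[i]) + int(bits[j])) % 2

def syndrome(state):
    """Parities of adjacent qubit pairs; identical across all branches,
    so extracting them does not collapse the superposition."""
    outcomes = {tuple(parity(b, i, i + 1) for i in range(2)) for b in state}
    assert len(outcomes) == 1, "measuring this parity would collapse the state"
    return outcomes.pop()

ghz = {"000": 2 ** -0.5, "111": 2 ** -0.5}    # clean GHZ state
noisy = {"100": 2 ** -0.5, "011": 2 ** -0.5}  # after X on qubit 0

syndrome(ghz)    # (0, 0): no error detected
syndrome(noisy)  # (1, 0): qubit 0 disagrees with qubit 1 -> flip qubit 0
```

Both branches of the noisy state give the same syndrome, so the error's location is learned while the superposition itself survives intact.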

Are you referring to syndrome measurements, or is this another mechanism? – midor – 2021-02-16T23:39:44.267

Yes, I'm referring to syndrome measurements. – Adam Zalcman – 2021-02-16T23:53:48.303