How does magic state distillation overhead scaling compare to quantum advantages?


I'm interested in the model of quantum computation by magic state injection: we have access to the Clifford gates, a cheap supply of ancilla qubits in the computational basis, and a few expensive-to-distill magic states (usually those that implement the $S$ and $T$ gates). I've found that the best scaling is polylogarithmic in the accuracy $\varepsilon$; specifically, $O(\log^{1.6}(1/\varepsilon))$ is what a 2012 paper offers to get the accuracy we need in the $S$ and $T$ states.
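To get a feel for how mild this overhead is, here is a small illustrative calculation (my own sketch, not taken from the paper; the constant prefactor is unknown, so it is set to 1) evaluating the $O(\log^{1.6}(1/\varepsilon))$ cost for a few target accuracies:

```python
import math

def distillation_overhead(epsilon, gamma=1.6):
    """Illustrative O(log^gamma(1/epsilon)) distillation cost,
    up to an unknown constant prefactor (set to 1 here)."""
    return math.log2(1.0 / epsilon) ** gamma

# Tightening the accuracy by nine orders of magnitude only
# multiplies the cost by a modest factor.
for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"epsilon = {eps:.0e}: relative cost ~ {distillation_overhead(eps):.1f}")
```

Note how slowly the cost grows: going from $\varepsilon = 10^{-3}$ to $\varepsilon = 10^{-12}$ (a thousand-billion-fold improvement in accuracy) raises the relative cost by only about an order of magnitude.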

Is this enough to calculate most of the problems we're interested in? Are there any problems that specifically resist QCSI (Quantum Computation by State Injection) because of high overhead, but are more solvable in other models of computation?

Emily Tyhurst

Posted 2018-03-19T05:50:31.297

Reputation: 945



In the context of scalable quantum computing, the polylog scaling needed for magic state distillation should not be a problem.

Indeed, it is not the only polylog scaling we need to contend with. Approximating a general single-qubit rotation using the $S$ and $T$ gates has a similar polylog cost when using the Solovay-Kitaev algorithm (though this is no longer state-of-the-art). The cost of error correction also scales similarly to that of magic state distillation (MSD). In fact, it has been shown "that magic state factories have space-time costs that scale as a constant factor of surface code costs".
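The point that MSD is not the dominant polylog cost can be made concrete with a rough comparison. As a sketch (my own illustration; the Solovay-Kitaev exponent $c \approx 3.97$ is a commonly quoted value, and all constant prefactors are dropped):

```python
import math

def polylog_cost(epsilon, c):
    """Generic cost of the form log^c(1/epsilon), constants dropped."""
    return math.log2(1.0 / epsilon) ** c

eps = 1e-10
msd = polylog_cost(eps, 1.6)   # MSD exponent quoted in the question
sk = polylog_cost(eps, 3.97)   # a commonly quoted Solovay-Kitaev exponent
print(f"MSD ~ {msd:.0f}, Solovay-Kitaev ~ {sk:.0f} (same epsilon, arbitrary units)")
```

At the same target accuracy, the Solovay-Kitaev synthesis cost dwarfs the distillation cost, which is why MSD overhead alone is unlikely to be the bottleneck.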

Within a scalable and fault-tolerant quantum computer, I see no reason to think that MSD will have a problematic overhead. We may find better methods, such as ways to implement complex error-correcting codes that allow transversal non-Clifford gates. But such codes tend to be worse at error correction, and so carry higher overheads there, which could easily cancel any benefit.

James Wootton

Posted 2018-03-19T05:50:31.297

Reputation: 9 708