
When studying the surface code with phenomenological or circuit-level noise, the syndrome measurement itself is noisy, so one has to repeat it for several rounds before decoding. Throughout the surface-code literature, the measurement is repeated for $d$ rounds, where $d$ is the code distance. But why is that?

There are several reasons why $\Theta(d)$ rounds are necessary and sufficient, and the one that baffles me involves "temporal" or "timelike" logical errors. Specifically, one can imagine a 3D space-time "syndrome graph" or "decoder graph" in which time runs upward. A "temporal" or "timelike" logical error is a path that connects the bottom boundary to the top boundary.
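To make the geometry concrete, here is a minimal pure-Python sketch (my own toy model, not taken from any reference) of a single stabilizer measured for $T$ rounds with no round-0 detector, as after a lattice-surgery merge where the first outcome is random. A column of measurement errors spanning every round fires no detector at all, yet corrupts the reported value:

```python
# Toy model (assumptions mine): phenomenological noise, one stabilizer of a
# repetition/surface code measured for T rounds.  There is no round-0
# detector, as when the stabilizer's first outcome is random (e.g. just
# after a lattice-surgery merge), so the bottom time boundary is open.
T = 7                       # number of measurement rounds
true_value = 0              # the stabilizer's actual eigenvalue, noiseless

# A "timelike" error mechanism: the measurement outcome is flipped in EVERY
# round.  Each individual flip is a weight-1 fault on a vertical edge of
# the space-time syndrome graph; together they form a bottom-to-top path.
outcomes = [true_value ^ 1 for _ in range(T)]

# Detectors compare consecutive rounds: one fires only when two adjacent
# outcomes disagree.  A full bottom-to-top column of flips leaves every
# internal detector silent, so the decoder sees no syndrome at all.
detectors = [outcomes[t] ^ outcomes[t + 1] for t in range(T - 1)]
print(detectors)                    # all zeros: the error is invisible
print(outcomes[-1] != true_value)   # yet the reported value is wrong: True
```

The point of the sketch is that no decoder, however clever, can correct this fault pattern: it is syndrome-free, so it flips whatever logical information the repeated measurement was supposed to extract.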

I find it difficult to appreciate the consequences of such an error, either in a lattice surgery circuit or in an identity circuit without computation. I suspect my issue is that I tend to project such a bottom-to-top path onto the 2D spacelike plane and then simply treat it as a spacelike error.

Specifically, I tend to focus on the post-correction residual data error $R$ and decompose it as $R = L \cdot E$, where $E$ is a minimum-weight operator with the same syndrome as $R$, and $L$ is a logical operator. I care about two probabilities:

- $p$: the probability that $L$ is nontrivial,
- $q$: conditioned on $L$ being trivial, the probability of $E$.

I understand that $p$ and $q$ both depend on the number of measurement rounds. But I don't see why $\Theta(d)$ rounds are necessary and sufficient for (a) $\lim_{d\rightarrow\infty} p = 0$, or (b) $q$ to decay exponentially with $|E|$. (Assume i.i.d. circuit noise.)
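For intuition about the $\Theta(d)$ scaling, here is a hedged toy calculation (the model and names are my own, not from any reference): if each round's reading of a logical parity is flipped independently with probability $p_m$ and the decoder effectively majority-votes over $T$ rounds, then the timelike failure probability is a binomial tail that decays exponentially in $T$. Choosing $T = \Theta(d)$ balances it against the $e^{-\Theta(d)}$ spacelike failure rate, whereas $T = O(1)$ would leave the timelike direction as the weakest link:

```python
import math

def timelike_failure(T, p_m):
    """P[majority vote over T i.i.d. flips with rate p_m is wrong], T odd.

    Exact binomial tail: the vote fails when at least (T+1)/2 of the T
    per-round readings are flipped.
    """
    return sum(math.comb(T, k) * p_m**k * (1 - p_m)**(T - k)
               for k in range((T + 1) // 2, T + 1))

# The tail shrinks exponentially as the number of rounds T grows,
# mirroring how the spacelike logical error rate shrinks with distance d.
p_m = 0.05
for T in (1, 3, 5, 7, 9):
    print(T, timelike_failure(T, p_m))
```

Under this (admittedly crude) model, the timelike "distance" of the space-time block is the number of rounds, which is why matching it to the spacelike distance $d$ is the natural choice.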

Can anyone elaborate on why a bottom-to-top path is an issue, either in a lattice surgery circuit or in an identity circuit without computation?