# Quantum annealing

Quantum annealing is a model of quantum computation which, roughly speaking, generalises the adiabatic model of computation. It has attracted popular — and commercial — attention as a result of D-Wave's work on the subject.

Precisely what quantum annealing *consists* of is not as well-defined as other models of computation, essentially because it is of more interest to quantum technologists than to computer scientists. Broadly speaking, we can say that it is usually considered by people with the motivations of engineers rather than the motivations of mathematicians, so that the subject appears to have many intuitions and rules of thumb but few 'formal' results. In fact, in an answer to my question about quantum annealing, `Andrew O` goes so far as to say that "*quantum annealing can't be defined without considerations of algorithms and hardware*". Nevertheless, "quantum annealing" seems to be well-defined enough to be described as a way of approaching how to solve problems with quantum technologies using specific techniques — and so, despite `Andrew O`'s assessment, I think that it embodies some implicitly defined model of computation. I will attempt to describe that model here.

### Intuition behind the model

Quantum annealing gets its name from a loose analogy to (classical) *simulated annealing*.
They are both presented as means of minimising the energy of a system, expressed in the form of a Hamiltonian:
$$
\begin{aligned}
H_{\rm{classical}} &= \sum_{i,j} J_{ij} s_i s_j \\
H_{\rm{quantum}} &= A(t) \sum_{i,j} J_{ij} \sigma_i^z \sigma_j^z - B(t) \sum_i \sigma_i^x
\end{aligned}
$$
With simulated annealing, one essentially performs a random walk on the possible assignments to the 'local' variables $s_i \in \{-1,+1\}$, but where the probability of actually making a transition depends on

- The difference in 'energy' $\Delta E = E_1 - E_0$ between two 'configurations' (the initial and the final global assignment to the variables $\{s_i\}_{i=1}^n$) before and after each step of the walk;
- A 'temperature' parameter which governs the probability with which the walk is allowed to perform a step in the random walk which has $\Delta E > 0$.

One starts with the system at 'infinite temperature', which is ultimately a fancy way of saying that you allow for all possible transitions, regardless of increases or decreases in energy. You then lower the temperature according to some schedule, so that as time goes on, changes in state which increase the energy become less and less likely (though still possible). The limit is zero temperature, in which any transition which decreases energy is allowed, but any transition which increases energy is simply forbidden.
For any temperature $T > 0$, there will be a stable distribution (a 'thermal state') of assignments, which is the uniform distribution at 'infinite' temperature, and which is more and more weighted on the global minimum energy states as the temperature decreases. If you take long enough to decrease the temperature from infinite to near zero, you should in principle be guaranteed to find a global optimum to the problem of minimising the energy. Thus simulated annealing is an approach to solving optimisation problems.
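To make the procedure above concrete, here is a minimal sketch of simulated annealing in Python. The function names, the linear cooling schedule, and the single-spin-flip moves are illustrative choices rather than a canonical implementation; it uses the standard Metropolis acceptance rule and the spin convention $s_i \in \{-1,+1\}$:

```python
import math
import random

def simulated_anneal(J, n, steps=20000, seed=0):
    """Minimal simulated annealing for an Ising-type energy
    E(s) = sum_{i<j} J[i][j] s[i] s[j], with spins s[i] in {-1, +1}.
    The inverse temperature beta is raised linearly from 0 ('infinite
    temperature': every move is accepted) towards a large value (near
    zero temperature: only energy decreases are accepted)."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(cfg):
        return sum(J[i][j] * cfg[i] * cfg[j]
                   for i in range(n) for j in range(i + 1, n))

    E = energy(s)
    for step in range(steps):
        beta = 10.0 * step / steps          # inverse temperature schedule
        i = rng.randrange(n)
        s[i] = -s[i]                        # propose flipping one spin
        dE = energy(s) - E
        # Metropolis rule: always accept decreases in energy; accept an
        # increase only with probability exp(-beta * dE).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            E = E + dE
        else:
            s[i] = -s[i]                    # reject: undo the flip
    return s, E
```

For instance, for a single ferromagnetic pair `J = [[0, -1], [-1, 0]]`, the anneal should end with both spins aligned, at the minimum energy $-1$.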

Quantum annealing is motivated by generalising the work by Farhi *et al.* on **adiabatic** quantum computation [arXiv:quant-ph/0001106], with the idea of considering what evolution occurs when one **does not necessarily** evolve the Hamiltonian in the adiabatic regime. Similarly to classical annealing, one starts in a configuration in which "classical assignments" to some problem are in a uniform distribution, though this time in coherent superposition instead of a probability distribution: this is achieved for time $t = 0$, for instance, by setting
$$ A(t=0) = 0, \qquad B(t=0) = 1 $$
in which case the uniform superposition $\def\ket#1{\lvert#1\rangle}\ket{\psi_0} \propto \ket{00\cdots00} + \ket{00\cdots01} + \cdots + \ket{11\cdots11}$ is a minimum-energy state of the quantum Hamiltonian. One steers this 'distribution' (*i.e.* the state of the quantum system) to one which is heavily weighted on a low-energy configuration by slowly evolving the system — by slowly changing the field strengths $A(t)$ and $B(t)$ to some final value
$$ A(t_f) = 1, \qquad B(t_f) = 0. $$
Again, if you do this slowly enough, you will succeed with high probability in obtaining such a global minimum.
The *adiabatic regime* describes conditions which are **sufficient** for this to occur, by virtue of remaining in (a state which is very close to) the ground state of the Hamiltonian at all intermediate times. However, it is considered possible that one can evolve the system faster than this and still achieve a high probability of success.
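As an illustration of this evolution (a toy numerical sketch, not a description of any particular hardware), one can integrate the Schrödinger equation for a two-qubit instance with the linear schedules $A(t) = t/t_f$, $B(t) = 1 - t/t_f$, and check that a slow anneal ends almost entirely in the classical minimum-energy assignments while a fast one does not. The step sizes and the choice of a single antiferromagnetic coupling are arbitrary choices for the sake of example:

```python
import numpy as np

# Pauli operators
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Problem Hamiltonian: a single antiferromagnetic coupling J = +1, i.e.
# H_P = Z (x) Z, whose ground states are the classical assignments |01>, |10>.
H_P = np.kron(Z, Z)
# Driver Hamiltonian: H_D = -(X (x) I + I (x) X), whose unique ground state
# is the uniform superposition over all classical assignments.
H_D = -(np.kron(X, I2) + np.kron(I2, X))

def anneal(t_f, dt=0.05):
    """Evolve under H(t) = A(t) H_P + B(t) H_D with A(t) = t/t_f and
    B(t) = 1 - t/t_f, starting from the uniform superposition; return
    the final probability weight on the classical ground states."""
    psi = np.full(4, 0.5, dtype=complex)       # the uniform superposition
    steps = int(round(t_f / dt))
    for k in range(steps):
        a = (k + 0.5) / steps                  # schedule value mid-slice
        H = a * H_P + (1 - a) * H_D
        # exact propagator for this time slice, via eigendecomposition
        w, V = np.linalg.eigh(H)
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    return abs(psi[1])**2 + abs(psi[2])**2     # weight on |01> and |10>
```

A slow anneal such as `anneal(50.0)` ends with nearly all of the weight on the minimum-energy assignments, while a nearly sudden one such as `anneal(0.5)` leaves the state close to the uniform superposition.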

Similarly to adiabatic quantum computing, the way that $A(t)$ and $B(t)$ are defined is often presented as a linear interpolation from $0$ to $1$ (increasing for $A(t)$, and decreasing for $B(t)$). However, also in common with adiabatic computation, $A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic. For instance, D-Wave has considered the advantages of pausing the annealing schedule and of 'backwards anneals'.
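To illustrate (with purely schematic shapes, not D-Wave's actual published schedules), such non-linear variants can be written as simple piecewise functions of the elapsed anneal time, each returning the pair $(A, B)$:

```python
def linear(t, t_f):
    """Standard schedule: A(t) ramps 0 -> 1 while B(t) ramps 1 -> 0."""
    a = t / t_f
    return a, 1.0 - a

def paused(t, t_f, a_pause=0.5):
    """Ramp to a_pause over the first 40% of the schedule, hold it fixed
    for the middle 20% (the 'pause'), then ramp to 1 over the rest."""
    u = t / t_f
    if u < 0.4:
        a = a_pause * u / 0.4
    elif u < 0.6:
        a = a_pause
    else:
        a = a_pause + (1.0 - a_pause) * (u - 0.6) / 0.4
    return a, 1.0 - a

def reverse(t, t_f, a_turn=0.4):
    """'Backwards anneal': start at a = 1 (a classical configuration),
    anneal backwards to a_turn, then forwards again to a = 1."""
    u = t / t_f
    a = 1.0 - (1.0 - a_turn) * (1.0 - abs(2.0 * u - 1.0))
    return a, 1.0 - a
```

The fractions (40%/20%/40%) and turning points here are hypothetical parameters chosen only to show the shapes; real hardware schedules are calibrated curves rather than straight lines.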

'Proper' quantum annealing (so to speak) presupposes that evolution is probably not being done in the adiabatic regime, and allows for the possibility of diabatic transitions, but only asks for a high chance of achieving an optimum — or even more pragmatically still, of achieving a result which would be difficult to find using classical techniques. There are no *formal* results about how quickly you can change your Hamiltonian to achieve this: the subject appears mostly to consist of experimenting with a heuristic to see what works in practice.

### The comparison with classical simulated annealing

Despite the terminology, it is not immediately clear that there is much which quantum annealing has in common with classical annealing.
The main differences between quantum annealing and classical simulated annealing appear to be that:

- In quantum annealing, the state is in some sense ideally a pure state, rather than a mixed state (corresponding to the probability distribution in classical annealing);
- In quantum annealing, the evolution is driven by an explicit change in the Hamiltonian rather than by an external parameter.

It is possible that a change in presentation could make the analogy between quantum annealing and classical annealing tighter. For instance, one could incorporate the temperature parameter into the spin Hamiltonian for classical annealing, by writing
$$\tilde H_{\rm{classical}} = A(t) \sum_{i,j} J_{ij} s_i s_j - B(t) \cdot \textit{const.} $$
where we might choose something like $A(t) = t\big/(t_F - t)$ and $B(t) = t_F - t$ for $t_F > 0$ the length of the anneal schedule. (This is chosen deliberately so that $A(0) = 0$ and $A(t) \to +\infty$ for $t \to t_F$.)
Then, just as a quantum annealing algorithm is governed in principle by the Schrödinger equation for all times, we may consider a classical annealing process which is governed by a diffusion process that is in principle uniform with time, proceeding by small changes in configurations, where the probability of executing a randomly selected change of configuration is governed by
$$ p(x \to y) = \min\Bigl\{ 1,\; \exp\bigl(-\gamma \Delta E_{x\to y}\bigr) \Bigr\} $$
for some constant $\gamma$, where $\Delta E_{x \to y}$ is the energy difference between the initial and final configurations.
The stable distribution of this diffusion for the Hamiltonian at $t=0$ is the uniform distribution, and the stable distribution for the Hamiltonian as $t \to t_F$ is concentrated on the local minima; and as $t$ increases, the probability with which a transition occurs which increases the energy becomes smaller, until as $t \to t_F$ the probability of any increase in energy vanishes (because *any* possible increase becomes infinitely costly).
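A sketch of this reformulated classical anneal (again with illustrative names, single-spin-flip moves, and the divergent schedule $A(t) = t/(t_F - t)$ from above; a toy implementation, not a canonical one):

```python
import math
import random

def rescaled_anneal(J, n, steps=20000, seed=1, gamma=1.0):
    """Classical anneal with the schedule absorbed into the Hamiltonian:
    a proposed move is accepted with probability min(1, exp(-gamma * dE)),
    where dE is the energy difference measured under A(t) * H, and
    A(t) = t / (t_F - t) grows from 0 towards infinity over the schedule."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(cfg):
        return sum(J[i][j] * cfg[i] * cfg[j]
                   for i in range(n) for j in range(i + 1, n))

    E = energy(s)
    for k in range(steps):
        A = k / (steps - k)            # A(t) = t/(t_F - t), stopping short of t_F
        i = rng.randrange(n)
        s[i] = -s[i]                   # propose flipping one spin
        dE = A * (energy(s) - E)       # difference measured under A(t) * H
        # min(1, exp(-gamma * dE)): decreases always accepted; increases
        # accepted with a probability that is suppressed more as A grows.
        if dE <= 0 or rng.random() < math.exp(-gamma * dE):
            E = energy(s)
        else:
            s[i] = -s[i]               # reject: undo the flip
    return s, E
```

At $k = 0$ every move is accepted (the 'infinite temperature' start), and by the final steps $A$ is so large that any uphill move is effectively forbidden, matching the limiting behaviour described above.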

There are still disanalogies to quantum annealing in this — for instance, we achieve the strong suppression of increases in energy as $t \to t_F$ essentially by making the potential wells infinitely deep (which is not a very physical thing to do) — but this does illustrate something of a commonality between the two models, with the main distinction being not so much the evolution of the Hamiltonian as it is the difference between diffusion and Schrödinger dynamics. This suggests that there may be a sharper way to compare the two models theoretically: by describing the difference between classical and quantum annealing as being analogous to the difference between random walks and quantum walks. A common idiom in describing quantum annealing is to speak of 'tunnelling' through energy barriers — this is certainly pertinent to how people consider quantum walks: consider for instance the work by Farhi *et al.* on continuous-time quantum speed-ups for evaluating NAND circuits, and more directly foundational work by Wong on quantum walks on the line tunnelling through potential barriers. Some work has been done by Chancellor [arXiv:1606.06800] on considering quantum annealing in terms of quantum walks, though it appears that there is room for a more formal and complete account.

On a purely operational level, it appears that quantum annealing gives a *performance* advantage over classical annealing (see for example these slides on the difference in performance between quantum vs. classical annealing, from Troyer's group at ETH, ca. 2014).

### Quantum annealing as a phenomenon, as opposed to a computational model

Because quantum annealing is more studied by technologists, they focus on the concept of *realising quantum annealing as an effect* rather than defining the model in terms of general principles. (A rough analogy would be studying the unitary circuit model only inasmuch as it represents a means of achieving the 'effects' of eigenvalue estimation or amplitude amplification.)

Therefore, whether something counts as "quantum annealing" is described by at least some people as being hardware-dependent, and even input-dependent: depending, for instance, on the layout of the qubits and the noise levels of the machine. It seems that even trying to approach the adiabatic regime will prevent you from achieving quantum annealing, because the idea of what quantum annealing even consists of includes the idea that noise (such as decoherence) will prevent annealing from being realised: as a *computational effect*, as opposed to a *computational model*, quantum annealing essentially requires that the annealing schedule is shorter than the decoherence time of the quantum system.

Some people occasionally describe noise as being somehow essential to the process of quantum annealing. For instance, Boixo *et al.* [arXiv:1304.4595] write

> Unlike adiabatic quantum computing[, quantum annealing] is a positive temperature method involving an open quantum system coupled to a thermal bath.

It might perhaps be accurate to describe it as being an inevitable feature of systems in which one will perform annealing (just because noise is an inevitable feature of a system in which you will do quantum information processing of *any* kind): as `Andrew O` writes, "*in reality no baths really help quantum annealing*". It is possible that a dissipative process can help quantum annealing by helping the system build population on lower-energy states (as suggested by work by Amin *et al.*, [arXiv:cond-mat/0609332]), but this seems essentially to be a classical effect, and would inherently require a quiet low-temperature environment rather than 'the presence of noise'.

### The bottom line

It might be said — in particular by those who study it — that quantum annealing is an effect, rather than a model of computation. A "quantum annealer" would then be best understood as "a machine which realises the effect of quantum annealing", rather than a machine which attempts to embody a model of computation which is known as '*quantum annealing*'. However, the same might be said of adiabatic quantum computation, which is — in my opinion correctly — described as a model of computation in its own right.

Perhaps it would be fair to describe quantum annealing as an approach to realising a very general *heuristic*, and that there is an implicit model of computation which could be characterised as the conditions under which we could expect this heuristic to be successful. If we consider quantum annealing this way, it would be a model which includes the adiabatic regime (with zero-noise) as a special case, but it may in principle be more general.
