Can quantum computing contribute to the development of artificial intelligence?



I am interested in how quantum computing can contribute to the development of artificial intelligence. I did some searching but could not find much. Does somebody have an idea (or speculations)?

jennifer ruurs

Posted 2019-10-01T13:29:22.533

Reputation: 203

Here is a list of technical resources on quantum machine learning; are you looking for answers more on the technical side or the high-level nontechnical side? For example, less about how quantum ML works and more about how it might affect the field of ML. – ahelwer – 2019-10-01T13:40:11.353

@ahelwer Thank you for the list (which is a great starting point!); I am interested in the high-level nontechnical side – jennifer ruurs – 2019-10-01T14:30:20.563


See also "To what extent can quantum computers help to develop Artificial Intelligence?" over on ai.SE

– glS – 2019-10-08T14:32:49.923



Much of the research on quantum algorithms that may have applications to AI is centered on quantum machine learning (QML).

While I'd argue there are quite a few hypothetical reasons that QML could benefit machine learning at some point in the future, QML research is in its infancy relative to classical machine learning research, and its practical benefits aren't yet clear. Here's a broad overview of some general themes that appear to be emerging:

  1. We know that QML can provide algorithmic speedups. Much of the work in QML has been based on the HHL algorithm (also known as the quantum linear systems algorithm). The speedups of QML algorithms based on HHL were originally thought to be exponential, though this claim was somewhat controversial and came with some quite severe caveats (see this paper by Aaronson and another by Childs). Now, however, the speedups are generally believed to be polynomial in many cases, both because of the assumptions made in developing many QML algorithms and because of dequantization arguments (see here and here). For a nice overview, see this Quanta article. Also note that some of the caveats discussed in the links above have been addressed by refinements to the HHL algorithm.
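For context, HHL targets the linear-systems problem: given $A$ and $b$, prepare a quantum state whose amplitudes are proportional to the solution $x$ of $Ax = b$. A classical NumPy sketch, purely to pin down the problem statement (the values are made up for illustration):

```python
import numpy as np

# The problem HHL targets: solve A x = b for x.
# HHL prepares a quantum state |x> proportional to the solution vector,
# not the classical vector itself.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # HHL's caveats include A being well-conditioned
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)          # classical dense solve, roughly O(n^3)
x_state = x / np.linalg.norm(x)    # what the amplitudes of |x> would be proportional to

print(x)
print(x_state)
```

Reading the full classical vector back out of the quantum state generally costs enough to erase the hypothetical speedup, which is one of the caveats discussed in the links above.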

  2. It's unclear to what extent quantum information enhances or limits model representations. When developing a machine learning model, one of the central goals is to generate a learned representation (e.g. the parameters of a model) that enables us to make accurate predictions. An interesting question in this respect is whether using qubits to generate a learned representation offers any advantage over the standard (classical) representations currently being generated. Intuitively, quantum representations may have advantages for certain problems that inherently involve quantum effects, such as molecular simulation. As far as I know, whether such advantages actually exist and extend to problems that are purely classical is an open question.

  3. The linearity of quantum mechanics presents a challenge to developing certain kinds of QML approaches. Many classical machine learning algorithms rely heavily on non-linear functions (think of the sigmoid or hyperbolic tangent activation functions sometimes used in neural networks). Quantum mechanics is necessarily linear (if I remember correctly, in his book Quantum Computing Since Democritus, Aaronson notes that if quantum mechanics were non-linear, $\mathbf{NP}$-complete problems could be solved in polynomial time). Given that most contemporary AI is powered by 'deep learning' approaches (mostly meaning neural networks), it's unclear to what extent our current classical approaches can simply be translated into some quantum version. I suspect this problem is not insurmountable and that we may yet find clever ways to build QML algorithms that are both efficient and retain many of the benefits of existing approaches that exploit non-linearities (along with other benefits, perhaps). Already, quantum machine learning research has revealed new insights into a variety of machine learning problems; the Quanta article above provides one clear example.
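To make the linearity point concrete, here is a small NumPy sketch (illustrative only, not a QML algorithm) checking that a unitary gate such as the Hadamard acts linearly on state vectors, while a sigmoid activation does not:

```python
import numpy as np

# Single-qubit Hadamard gate: unitary, hence linear.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

a = np.array([1.0, 0.0])   # |0>
b = np.array([0.0, 1.0])   # |1>

# Linearity: U(a + b) == U(a) + U(b) holds for any unitary U.
print(np.allclose(H @ (a + b), H @ a + H @ b))                   # True

# The sigmoid used in classical neural nets fails this additivity test.
print(np.allclose(sigmoid(a + b), sigmoid(a) + sigmoid(b)))      # False
```

Any candidate quantum analogue of an activation function has to work around this constraint, e.g. via measurement or other effectively non-linear tricks.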

  4. Like their classical counterparts, QML algorithms will require regularization. It's been argued that some existing QML algorithms would massively overfit the data, severely limiting generalizability (for example, Peter Wittek notes the issue here while discussing quantum support vector machines). In classical machine learning, to ensure the ability of a model to generalize, we usually make use of some kind of regularization technique. I haven't yet seen any research around a QML approach to regularization; perhaps someone else can comment on whether such approaches have been proposed.
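For a sense of what a QML analogue would need to reproduce, here is a minimal classical sketch of L2 (ridge) regularization, where a penalty on the weight norm curbs overfitting. This is entirely standard classical machinery on made-up data, not a quantum proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny noisy linear dataset: y = 2x + noise.
X = rng.normal(size=(20, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=20)

# Degree-7 polynomial features give an unregularized fit room to overfit.
Phi = np.column_stack([X[:, 0] ** k for k in range(1, 8)])

def ridge_fit(Phi, y, lam):
    # Closed-form ridge regression: w = (Phi^T Phi + lam I)^{-1} Phi^T y
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

w_unreg = ridge_fit(Phi, y, lam=0.0)
w_reg = ridge_fit(Phi, y, lam=1.0)

# The L2 penalty shrinks the weight vector, trading training fit
# for better generalization.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))   # True
```

Whether and how a comparable penalty can be imposed inside a quantum algorithm is, as noted above, an open question.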

There's certainly more that could be discussed in relation to this question and, to be transparent, I wouldn't count myself an expert. I hope more answers roll in but, until then, perhaps the information above provides some context for what I'd suggest the answer to your question is: we don't know, yet.


Posted 2019-10-01T13:29:22.533

Reputation: 663

On #3, I'm not sure that matters much to QML algorithms. The non-linearity in many neural networks is only there for the shape of the function, and can be heavily approximated, to the point where your activation function is actually a function of 8-bit values. You have a lot of leeway when it comes to these functions. – whn – 2019-10-07T21:15:16.057

Yes, I suspected approximation would probably be sufficient, but thought it worth noting above. Along these lines, many of the activation functions widely used in contemporary image-processing NNs (e.g. ReLU and its siblings) are piecewise linear. – Greenstick – 2019-10-07T22:03:42.803

My understanding has progressed since I originally wrote; I'll post an update in the near future. – Greenstick – 2019-11-19T00:01:20.640