
One of the biggest drawbacks of Bayesian learning compared to deep learning is runtime: applying Bayes' theorem requires knowledge of how the data is distributed, and this usually involves either expensive integrals or some sampling mechanism (with the corresponding drawbacks).
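To make the cost concrete, here is a minimal sketch (with a made-up Gaussian model and data) of why sampling enters the picture: the evidence integral in Bayes' theorem rarely has a closed form, so posterior quantities are estimated from Monte Carlo samples, here via self-normalized importance sampling with the prior as proposal.

```python
import random
import math

random.seed(0)

data = [1.2, 0.9, 1.1, 1.4]  # hypothetical observations

def likelihood(theta):
    # Gaussian likelihood with unit variance and mean theta
    return math.exp(-0.5 * sum((x - theta) ** 2 for x in data))

# Prior: standard normal; draw proposal samples from it
samples = [random.gauss(0.0, 1.0) for _ in range(50_000)]
weights = [likelihood(t) for t in samples]

# Posterior mean ~= sum(w_i * theta_i) / sum(w_i)
# (the normalizing evidence p(D) cancels in this ratio)
post_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)
```

For this conjugate normal-normal toy model the exact posterior mean is known (0.92), but in general one must pay for many such samples, which is the runtime drawback mentioned above.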

Since, at the end of the day, this is all about propagating distributions, and that is (as far as I understand) the nature of quantum computing, is there a way to perform these computations efficiently on a quantum computer? If so, what limitations apply?

**Edit** (*directly related links*):

There hasn't been a lot of work on this (that I know of). For Bayesian networks, there is 1404.0055, in which the authors use a variation of Grover search to obtain a quadratic speed-up. On the related topic of Markov models there are also a few things; see the references on the wiki and 1611.08104. I am not qualified enough to build an answer out of these, though. – glS – 2018-03-31T08:02:59.150

@glS Just wanted to tell you about HC's answer; it looks really interesting (in case you didn't know about that paper). Thanks a lot for your references and brief explanations too; if you want to elaborate an answer, I will be glad to upvote it. – fr_andres – 2018-04-01T21:46:26.500
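The quadratic speed-up via Grover search mentioned in the comment above can be illustrated with a small classical simulation of amplitude amplification (a toy setup with arbitrary parameters, not the construction from 1404.0055): starting from a uniform superposition over N items, roughly (π/4)√N oracle-plus-diffusion rounds drive the marked item's probability close to 1, versus the ~N/2 queries a classical search needs on average.

```python
import numpy as np

n = 8                       # "qubits"; N = 2^n items (arbitrary size)
N = 2 ** n
marked = 42                 # index of the marked item (arbitrary choice)

state = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) sqrt(N) rounds
for _ in range(iterations):
    state[marked] *= -1.0               # oracle: flip the marked amplitude
    state = 2.0 * state.mean() - state  # diffusion: inversion about the mean

success_prob = state[marked] ** 2       # near 1 after ~sqrt(N) rounds
```

Here only 13 rounds (for N = 256) are needed instead of ~128 expected classical queries, which is the kind of quadratic gain the cited paper transfers to inference in Bayesian networks.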