Regarding "Classification with Quantum Neural Networks on near term processors" (which you can find here), there are still a few things that do not entirely make sense to me.

First of all, why is the architecture called a "neural network"? As far as I know, the fundamental unit of a classical neural network is the perceptron, yet Farhi and Neven never mention any form of quantum perceptron in their paper.

Secondly, I do not understand the figure describing the architecture (Figure 1 in the paper). If you look closely, there is some cross wiring, etc. (I have to admit the figure looks like a neural network, but it doesn't make sense to me.) I was able to implement the subset parity problem using Qiskit (since I do understand the math behind the model), and the circuit I got doesn't look like the one in the figure.
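For what it's worth, the subset parity labels can be reproduced exactly by a very small circuit: a CNOT from each data qubit in the subset into a readout qubit. Below is a minimal NumPy statevector sketch of that idea — it is not the paper's exact gate set, and the subset `S` and qubit count are made up for illustration.

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply CNOT (control -> target) to an n-qubit statevector (qubit j = bit j)."""
    new = np.zeros_like(state)
    for i in range(2 ** n):
        new[i ^ (1 << target) if (i >> control) & 1 else i] += state[i]
    return new

def subset_parity_circuit(z, S, n_data):
    """Parity of the bits of z indexed by S, computed by CNOTs into a readout qubit."""
    n = n_data + 1                             # data qubits 0..n_data-1, readout = qubit n_data
    state = np.zeros(2 ** n); state[z] = 1.0   # prepare |z> (x) |0>
    for j in S:
        state = cnot(state, j, n_data, n)      # XOR bit j into the readout qubit
    return (int(np.argmax(np.abs(state))) >> n_data) & 1

# Check against the classical definition for every input string (S chosen arbitrarily)
S, n_data = [0, 2], 3
for z in range(2 ** n_data):
    assert subset_parity_circuit(z, S, n_data) == bin(z & 0b101).count("1") % 2
```

This obviously won't reproduce Figure 1 either; it is only meant to show that the labeling function itself is cheap to realise on a statevector.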

I hope you can clarify this for me.

Thank you in advance!

The transfer function of each perceptron is a non-linear function that takes a linear combination of inputs and produces an output. In the simplest case we try to approximate a function using a linear combination of functions (perceptrons). In a QNN the transfer functions are already defined: they are our unitary rotations. Arbitrary unitary operators can be decomposed into a sequence of controlled rotations, so in a QNN we aim to approximate the target function, an arbitrary unitary, by searching for the right sequence of rotations; in this case the parameters to adjust are the angles. – Sam Palmer – 2020-05-27T18:07:31.700
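As a concrete single-qubit instance of the decomposition point above: the Hadamard gate is reproduced, up to a global phase, by just two rotations, Ry(π/2)·Rz(π), so the adjustable parameters really are only angles. A minimal NumPy check (a sketch of the idea, not anything from the paper):

```python
import numpy as np

def ry(t):
    """Rotation about Y by angle t."""
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]], dtype=complex)

def rz(t):
    """Rotation about Z by angle t."""
    return np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # target unitary
U = ry(np.pi/2) @ rz(np.pi)                    # two rotations, angles as parameters

# Fidelity up to global phase: |tr(U^dag H)| / 2 equals 1 iff U = e^{i phi} H
fidelity = abs(np.trace(U.conj().T @ H)) / 2
assert np.isclose(fidelity, 1.0)
```

A training loop would simply treat the two angles as free parameters and maximise this fidelity instead of fixing them by hand.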

To answer your question more directly: to me the definition of a neural network, now more of a buzzword, is the application of a parametrisable combination of functions used to approximate a target function, whether this is 'wires and weights' or rotations. – Sam Palmer – 2020-05-27T18:20:30.350

@SamPalmer but, if I'm reading the paper correctly, there is a crucial difference in that there is no nonlinearity in these "QNNs". In NNs the nonlinearity is pivotal, lest the whole function be linear and unable to capture complex patterns. There is nothing of the sort in the QNN (nor could there be, unless measurements are introduced). Their circuit is trained to reproduce unitary relations between input and output states. Tbh the only thing I see in common with NNs is the use of a parametrisation in terms of a sequence of trained maps. – glS – 2020-05-27T20:22:21.510
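The linearity point above is easy to check numerically: unitary evolution acts linearly on the state vector, while measurement probabilities are quadratic in the amplitudes, which is exactly where a nonlinearity can enter once measurements are introduced. A small NumPy illustration (the gate and amplitudes are chosen arbitrarily):

```python
import numpy as np

def rx(t):
    """Single-qubit rotation about X, a stand-in for any parametrised gate."""
    return np.array([[np.cos(t/2), -1j*np.sin(t/2)],
                     [-1j*np.sin(t/2), np.cos(t/2)]])

x = np.array([1, 0], dtype=complex)   # |0>
y = np.array([0, 1], dtype=complex)   # |1>
a, b = 0.6, 0.8                       # amplitudes with |a|^2 + |b|^2 = 1
U = rx(0.7)                           # arbitrary angle

# Unitary evolution is linear in the state vector:
assert np.allclose(U @ (a*x + b*y), a*(U @ x) + b*(U @ y))

# ...but the probability of an outcome is quadratic in the amplitudes,
# so it does NOT respect superposition:
p0 = lambda s: abs(s[0])**2           # probability of measuring |0>
assert not np.isclose(p0(a*x + b*y), a*p0(x) + b*p0(y))
```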

Whether that's enough to justify the terminology is a matter of opinion. IMHO it is a bit of a stretch, but others will disagree. Regardless, I don't see how this question can be answered objectively in the absence of a strict definition of what a "QNN" should be (which there isn't). Also, @Skyris, each post should contain a single question; you can create different posts to ask different questions. – glS – 2020-05-27T20:26:25.857

I agree, but these days which is going to keep you more relevant in searches and citations: "quantum neural network" (sounds pretty awesome) or "universal unitary approximator circuit"? Don't get me started on the overuse and hype of 'data science' buzzwords! That said, there are flavours of neural networks, and the qualifier "quantum" somewhat protects the term from being read as a direct implementation of a classical NN. I'm not well versed in QNNs, but it may be that this is the closest approximator to a NN that you can achieve on a quantum processor, which somewhat validates usage of the term QNN. – Sam Palmer – 2020-05-28T01:53:21.307