What's the relationship between output of qubit measurements and classification of data in Quantum Machine Learning?



I'm training a model in Q# which has more than 2 features.
I have trouble understanding the following things:

  • How is the data classified based on the qubit states?

For example: if I have only 2 features (and want to classify the input as class A or class B), then only a single qubit would be used.
After measuring, if the qubit turns out to be in the 0 state then it's class A, and if it's in the 1 state it's class B (or vice versa).

But now, if I have, say, 4 features, then 2 qubits would be used and I could have 4 possible outcomes:

$$ |00\rangle,\quad |01\rangle,\quad |10\rangle,\quad |11\rangle. $$

So how would the data be classified based on these states?

  • Also, am I correct in thinking that all the qubits are measured, or is only 1 qubit measured and its result used to assign class A/B?

Shreyas Pradhan

Posted 2020-06-21T09:42:44.493

Reputation: 83



Only one qubit is measured; the frequency of the 0/1 results across many repeated measurements (rather than a single measurement outcome) is used together with the bias to assign the class.

If you dig into the source code of the QML library, the measurement is performed in the `EstimateClassificationProbability` operation, which measures the last qubit of the register.
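To make the idea concrete, here is a minimal Python sketch of the decision rule described above: estimate the probability that the last qubit measures 1 by running the circuit many times, then combine that probability with the trained bias to pick a class. The `circuit` stub, the shot count, and the exact form of the bias comparison are assumptions for illustration, not the actual Q# library code.

```python
import random

def estimate_classification_probability(measure_once, n_shots=1000):
    # Estimate P(last qubit = 1) from the frequency of 1s over many shots.
    ones = sum(measure_once() for _ in range(n_shots))
    return ones / n_shots

def inferred_label(bias, probability):
    # Assumed decision rule (sketch): class 1 if the estimated
    # probability shifted by the trained bias exceeds 1/2.
    return 1 if probability + bias > 0.5 else 0

# Hypothetical circuit stub: stands in for running the quantum circuit
# and measuring the last qubit, which here yields 1 with probability 0.7.
random.seed(0)
circuit = lambda: 1 if random.random() < 0.7 else 0

p = estimate_classification_probability(circuit, n_shots=2000)
label = inferred_label(bias=0.0, probability=p)
```

Note that a single shot would only give one random bit; it is the estimated probability over many shots, compared against the bias threshold, that determines the class.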

Mariia Mykhailova

Posted 2020-06-21T09:42:44.493

Reputation: 6 616