Suppose a function $f\colon {\mathbb F_2}^n \to {\mathbb F_2}^n$ has the following curious property: There exists $s \in \{0,1\}^n$ such that $f(x) = f(y)$ if and only if $x + y = s$. If $s = 0$ is the only solution, this means $f$ is 1-to-1; otherwise there is a nonzero $s$ such that $f(x) = f(x + s)$ for all $x$, which, because $2 = 0$, means $f$ is 2-to-1.
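To make the setup concrete, here is a minimal Python sketch (the function name and table representation are mine, not from Simon's paper) that builds such a function as a lookup table, either 1-to-1 or 2-to-1 with a hidden nonzero shift $s$:

```python
import random

def make_simon_function(n, two_to_one):
    """Build a random function on {0,1}^n (as a lookup table) with the
    property above: either 1-to-1 (s = 0) or 2-to-1 with hidden shift s."""
    domain = list(range(2 ** n))
    outputs = domain[:]
    random.shuffle(outputs)
    if not two_to_one:
        return dict(zip(domain, outputs)), 0   # a random permutation; s = 0
    s = random.randrange(1, 2 ** n)            # uniform random nonzero shift
    table, fresh = {}, iter(outputs)
    for x in domain:
        if x not in table:
            v = next(fresh)
            table[x] = v                       # x and x + s (bitwise XOR)
            table[x ^ s] = v                   # share one fresh output value
    return table, s

f, s = make_simon_function(4, two_to_one=True)
assert all(f[x] == f[x ^ s] for x in range(16))   # constant on each pair {x, x+s}
assert len(set(f.values())) == 8                  # exactly 2^(n-1) distinct outputs
```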

What is the cost, on a classical or quantum computer, of distinguishing a uniform random 1-to-1 function from a uniform random 2-to-1 function satisfying this property with any prescribed probability of success, if each option (1-to-1 or 2-to-1) is equally likely?

*I.e.,* I secretly flip a fair coin; if I get heads I hand you a black-box (classical or quantum, resp.) circuit for a uniform random 1-to-1 function $f$, whereas if I get tails I hand you a black-box circuit for a uniform random 2-to-1 function $f$. How much do you have to pay to attain a prescribed probability $p$ of correctly telling whether I got heads or tails?

This is the scenario of Simon's algorithm. It has esoteric applications in nonsensical cryptanalysis,^{*} and it was an early instrument in studying the complexity classes BQP and BPP and an early inspiration for Shor's algorithm.

Simon presented a quantum algorithm (§3.1, p. 7) that costs $O(n + |f|)$ qubits and expected $O(n \cdot T_f(n) + G(n))$ time for probability near 1 of success, where $T_f(n)$ is the time to compute a *superposition* of values of $f$ on an input of size $n$ and where $G(n)$ is the time to solve an $n \times n$ system of linear equations in $\mathbb F_2$.
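Concretely, each run of the quantum subroutine yields a uniform random $y$ with $y \cdot s = 0 \pmod 2$; after about $n$ runs, the classical $G(n)$ step recovers $s$ by Gaussian elimination over $\mathbb F_2$. Below is a minimal sketch of that classical step alone (the function name and bit-packing convention are mine), assuming the collected rows have rank $n$ or $n - 1$, as they do in Simon's setting:

```python
def recover_shift(rows, n):
    """Gaussian elimination over F_2.  `rows` are n-bit integers y satisfying
    y . s = 0 (mod 2).  Returns the unique nonzero s if the rows have rank
    n - 1, or 0 if they have full rank n (the 1-to-1 case)."""
    pivot = {}                                  # leading-bit index -> row
    for y in rows:
        for i in reversed(range(n)):
            if not (y >> i) & 1:
                continue
            if i in pivot:
                y ^= pivot[i]                   # eliminate the leading bit
            else:
                pivot[i] = y                    # new pivot row
                break
    if len(pivot) == n:
        return 0                                # full rank: only s = 0 works
    free = next(i for i in range(n) if i not in pivot)
    s = 1 << free                               # set the free variable to 1
    for i in sorted(pivot):                     # back-substitute, low bits first
        if bin(pivot[i] & s).count("1") % 2:    # force pivot[i] . s = 0 (mod 2)
            s |= 1 << i
    return s
```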

Simon further sketched a proof (Theorem 3.1, p. 9) that a classical algorithm evaluating $f$ at no more than $2^{n/4}$ distinct *discrete* values cannot guess the coin with advantage better than $2^{-n/2}$ over a uniform random guess.
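For contrast, the obvious classical strategy is a birthday-style collision search; the sketch below (names mine) has one-sided error and needs on the order of $2^{n/2}$ queries for constant success probability, which is why a budget of only $2^{n/4}$ queries, as in Simon's bound, leaves negligible advantage:

```python
import random

def classical_distinguish(f, n, queries):
    """Query f at `queries` distinct random points; answer 'tails' (2-to-1)
    iff a collision appears.  Never errs on a 1-to-1 input; on a 2-to-1
    input it succeeds only once two queries land on some pair {x, x + s}."""
    seen = set()
    for x in random.sample(range(2 ** n), queries):
        v = f(x)
        if v in seen:
            return "tails"     # collision found: f cannot be 1-to-1
        seen.add(v)
    return "heads"             # no collision seen: guess 1-to-1
```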

In some sense, this answers your question positively: a quantum computation requiring a *linear* number of evaluations of a random function on a *quantum superposition* of inputs can attain much better success probability than a classical computation requiring an *exponential* (in the input size) number of evaluations of a random function on *discrete* inputs. But in another sense it doesn't answer your question at all, because it could be that for *every particular* function $f$ there is a faster way to compute the search.

The Deutsch–Jozsa algorithm serves as a similar illustration for a slightly different artificial problem to study different complexity classes, P and EQP, figuring out the details of which is left as an exercise for the reader.

_{* Simon's is nonsensical for cryptanalysis because only an inconceivably confused idiot would feed their secret key into the adversary's quantum circuit to use on a quantum superposition of inputs, but for some reason it makes a splash every time someone publishes a new paper on using Simon's algorithm to break idiots' keys with imaginary hardware, which is how all these attacks work. Exception: It is possible that this might break white-box cryptography, but the security story for white-box cryptography even against classical adversaries is not promising.}

I would say the answer is no if you restrict the problems to be decision problems, because there are sampling problems (e.g. BosonSampling and IQP) for which an exponential quantum advantage has been shown (or rather, proven under strong assumptions). There may be others that I don't know. – glS – 2018-03-15T14:49:59.640

Note that there are already many subexponential-cost classical algorithms for factoring. (There is a substantial gap between polynomial and exponential costs.) – Squeamish Ossifrage – 2018-03-15T22:19:11.213

As heather says, this is currently not known, since the limits of classical (and quantum) computers are not known. The criteria you set forth in your question ultimately require the answerer to go even beyond proving the relationship between P and NP. I'd suggest you reword your question to ask for other likely examples (as well as factoring). – Toby Hawkins – 2018-03-15T23:18:33.303

@SqueamishOssifrage As I mentioned in a comment on an answer to a different question, I'm going to paraphrase Nathan Wiebe and say that while Shor's algorithm may not 'follow the letter' of exponential speed-up, it follows the spirit – Mithrandir24601 – 2018-03-15T23:19:33.550

The *practical* consequences of a quantum speedup, *e.g.* for whether Shor's algorithm can *actually* outperform the classical GNFS, are also not necessarily implied by *asymptotic* relations of the growth curves of the costs. See this answer for a bit more about the asymptotic *vs.* concrete setting, and why questions around P = NP are a bit of a red herring for cryptography and practical performance comparisons. – Squeamish Ossifrage – 2018-03-15T23:28:13.910

@TobyHawkins I'm not sure what you mean by that. I obviously do not expect any answerer to prove anything. And as I mentioned in the previous comment, I do think that there are known examples of proven (under strong complexity-theoretic assumptions) exponential quantum advantage. I'll maybe attempt to answer the question myself, but to my understanding there are no such known examples for decision problems; as soon as one considers other kinds of problems, like sampling problems, there are known results. – glS – 2018-03-16T01:41:28.533

@SqueamishOssifrage Exactly. I'd like to add that equating membership of P with 'efficient' is more wishful thinking by computer scientists than absolute truth. The idea is that, once a problem has been shown to lie in P, even if the exponent is something ghastly like $O(n^{1235436546})$, improvements will eventually shave it down to something similar to $O(n^3)$, a bit closer to the cosy conditional lower bounds. To their credit, this has usually happened in the past. But there is no guarantee, and as for practicality, there even exist 'linear' algorithms that are considered 'unimplementable'. – Discrete lizard – 2018-03-19T08:49:51.193