The term *quantum supremacy*, as introduced by Preskill in 2012 (1203.5813), can be defined by the following sentence:

> We therefore hope to hasten the onset of the era of quantum supremacy, when we will be able to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers.

Or, as Wikipedia rephrases it, *quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot*.

It should be noted that this is *not* a precise definition in the mathematical sense. What you can make precise statements on is how the complexity of a given problem scales with the dimension of the input (say, the number of qubits to be simulated, if one is dealing with a simulation problem).
Then, if it turns out that quantum mechanics allows solving the same problem more efficiently (and, crucially, you are able to prove it), there is room for a quantum device to demonstrate (or rather, provide evidence towards) *quantum supremacy* (or *quantum advantage*, or whatever you prefer to call it; see for example the discussion in the comments here).
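As a rough illustration of what "scaling with the number of qubits" means in practice, here is a back-of-the-envelope sketch (assuming a brute-force statevector simulation with double-precision complex amplitudes, i.e. 16 bytes per amplitude; these are my own illustrative figures, not from any of the cited papers):

```python
# Memory needed to store the full state vector of an n-qubit system:
# 2**n complex amplitudes, each taking 16 bytes in double precision.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:.0f} GiB")
```

Already at around 50 qubits the memory alone reaches the petabyte range, which is why brute-force classical simulation hits a wall at a few tens of qubits.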

So, in light of the above, *when exactly can one claim to have reached the quantum supremacy regime*? At the end of the day, there is no single *magic number* that takes you from the "classically simulatable regime" to the "quantum supremacy regime". This is more of a continuous transition, in which one gathers more and more evidence towards the statement that quantum mechanics can do better than classical physics (and, in the process, evidence against the Extended Church-Turing thesis).

On the one hand, there are regimes which obviously fall into the "quantum supremacy regime". This is when you manage to solve, with a quantum device, a problem that you just *cannot* solve with a classical device. For example, if you manage to factor a huge number that would take the age of the universe with any classical device (and assuming someone managed to *prove* that factoring is indeed classically hard, which is far from a given), then it seems hard to refute that quantum mechanics does indeed allow solving some problems more efficiently than classical devices.
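To get a feeling for the size of that gap, one can compare the known asymptotic costs. A sketch only: `gnfs_ops` is the standard heuristic complexity of the general number field sieve, the best known classical factoring algorithm, with all constant factors ignored, and `shor_gates` is just the textbook $O(n^3)$ gate-count scaling for Shor's algorithm; neither is a precise operation count.

```python
import math

def gnfs_ops(bits: int) -> float:
    """Heuristic operation count of the general number field sieve
    for factoring a bits-bit integer (constant factors ignored)."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_gates(bits: int) -> float:
    """Textbook O(n^3) gate-count scaling for Shor's algorithm."""
    return float(bits) ** 3

for b in (512, 1024, 2048):
    print(f"{b} bits: GNFS ~10^{math.log10(gnfs_ops(b)):.0f} ops, "
          f"Shor ~10^{math.log10(shor_gates(b)):.0f} gates")
```

Note that this quantifies the gap between the *best known* algorithms on each side; as stressed above, it proves nothing about classical hardness.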

But the above is not a good way to think of quantum supremacy, mostly because one of the main points of quantum supremacy is to serve as an intermediate step before being able to solve practical problems with quantum computers. Indeed, in the quest for quantum supremacy, one relaxes the requirement of solving *useful* problems and just tries to establish the point that, at least for *some* tasks, quantum mechanics does indeed provide an advantage.

When you do this and ask for the *simplest possible device that can demonstrate quantum supremacy*, things start to get tricky. You want to find the threshold above which quantum devices are *better* than classical ones, but this amounts to *comparing two radically different kinds of devices, running radically different kinds of algorithms*.
There is no easy (known?) way to do this.
For example, do you take into account how expensive it was to build the two different devices? And what about comparing a general purpose classical device with a special purpose quantum one? Is that fair?
What about *validating* the output of the quantum device, is that required? Also, how strict do you require your complexity results to be?
A reasonable list of criteria for a quantum supremacy experiment, as proposed by Harrow and Montanaro (nature23458, paywalled), is$^1$:

- A well-defined computational problem.
- A quantum algorithm solving the problem that can run on near-term hardware capable of dealing with noise and imperfections.
- An amount of computational resources (time/space) allowed to any classical competitor.
- A small number of well-justified complexity-theoretic assumptions.
- A verification method that can efficiently distinguish the performance of the quantum algorithm from that of any classical competitor using the allowed resources.

To better understand the issue, one may have a look at the discussions around D-Wave's 2015 claim of a "$10^8$ speedup" with their device (which holds only under appropriate comparisons).
See for example the discussion in this blog post by Scott Aaronson and the references therein (and, of course, the original paper by Denchev et al. (1512.02206)).

Also regarding the exact thresholds separating the "classical" from the "quantum supremacy" regime, one may have a look at the discussions around the number of photons required to claim quantum supremacy in a boson sampling experiment.
The reported number was initially between 20 and 30 (Aaronson 2010, Preskill 2012, Bentivegna et al. 2015, among others), then briefly went as low as seven (Latmiral et al. 2016), and then up again to around 50 (Neville et al. 2017; you may have a look at the brief discussion of this result here).
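The reason photon number matters so much is that the classical hardness of boson sampling traces back to computing matrix permanents, which (unlike determinants) are #P-hard in general; even the best known exact methods take time exponential in the matrix size. A minimal sketch of the straightforward variant of Ryser's formula (not the Gray-code version used in serious benchmarks):

```python
from itertools import combinations

def permanent(a):
    """Matrix permanent via Ryser's formula, O(2**n * n**2) time:
    exponential in the matrix size n, which in boson sampling
    grows with the number of photons."""
    n = len(a)
    total = 0.0
    # Ryser: perm(A) = sum over nonempty column subsets S of
    # (-1)**(n - |S|) * prod_i sum_{j in S} a[i][j]
    for size in range(1, n + 1):
        sign = (-1) ** (n - size)
        for cols in combinations(range(n), size):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return total

# Sanity check: the permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent([[1.0] * 3 for _ in range(3)]))  # -> 6.0
```

The Gray-code formulation of Ryser brings this down to $O(2^n n)$, but the exponential wall remains, which is precisely what the photon-number thresholds above are probing.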

There are *many* other similar examples that I didn't mention here. For example, there is the whole discussion around quantum advantage via IQP circuits, or around the number of qubits necessary before a device can no longer be simulated classically (Neill et al. 2017, Pednault et al. 2017, and other discussions of these results).
Another nice review I didn't include above is this Lund et al. 2017 paper.

$^1$ I'm using here the rephrasing of the criteria as given in Calude and Calude (1712.01356).

As a footnote, with respect to your question "Does it have to be the same algorithm?": a quantum computer can only achieve an advantage over a classical computer by using a radically *different* algorithm. The reason is simple: quantum computers would not achieve an advantage by performing operations *more quickly* (certainly not in their current state of development, and possibly not ever) but by performing *fewer* operations, which do not correspond to sensible operations that a conventional computer could be made to do. – Niel de Beaudrap – 2018-03-13

So, just to make sure: with Google's announcement of the 72-qubit Bristlecone chip, and the largest number of qubits simulated (to my knowledge) being 56 qubits, we could reach that as soon as Google has proven their chip? – blalasaadri – 2018-03-13

Provided that the qubits in the Google chip are stable enough, and the error rates in the operations low enough, that one could perform enough operations to do something which is difficult to simulate classically before the memory decoheres, then yes, that *could* be the first "quantum ascendancy" event. In principle, it makes a lot of sense to talk about the ascendancy of any given architecture, of which Google's Bristlecone is one example. But as a piece of historical trivia, it would be interesting to note who was first to the mark, and Google may end up being the first. – Niel de Beaudrap – 2018-03-13