Photonic Huge Quantum Advantage ???

This is a quick and preliminary post about a very recent announcement in a Science Magazine paper: Quantum computational advantage using photons, by a group of researchers led by Jianwei Pan and Chao-Yang Lu. (Most of the researchers are from USTC in Hefei, China.)

The paper announces achieving quantum advantage (aka “quantum supremacy”) using a photonic implementation of BosonSampling. (I heard about it from a SO post that contains further links.) The claimed advantage is huge, and clearly I will have to look carefully at the results, the data, and the interpretation. The idea that this could be done was raised ten years ago by Aaronson and Arkhipov, and we discussed it here and in several other posts, along with the idea that it cannot be done.

Boson Sampling was studied in a 2014 paper by Guy Kindler and me, Gaussian Noise Sensitivity and BosonSampling. Our paper, and the connection it makes with noise sensitivity and the Fourier description, is the basis for my argument against quantum computers. Of course, a demonstration of a huge quantum advantage, as claimed in the new paper, would, if valid, refute my theory.

The crux of the matter is whether the statistical performance of the photonic samples produced in the experiment could be achieved by classical sampling. (This is referred to as “spoofing.”) My paper with Guy proposes a very simple way to try to do it, based on the low-degree Hermite-Fourier truncation of the Boson Sampling distribution.

The easiest way to implement it is as follows: given an n by m matrix, draw at random an n by n minor M (allowing repeated columns, with the appropriate weights for the repetitions), compute the degree-k approximation X to |permanent(M)|^2 (based on formula (8) from our paper), and then keep the sample with probability proportional to the value of X and toss it away otherwise. This may work even for the degree-2 truncation. (Rather than the straight truncation we can also try the “Beckner-noise” version, but I don’t think this will make much difference.)
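Schematically, the rejection-sampling skeleton looks like the rough sketch below (this is not code from our paper: the column weighting is only schematic, truncated_perm_sq is a placeholder for the degree-k Hermite-Fourier truncation of formula (8), and the exact permanent via Ryser's formula stands in for it so that the skeleton runs, which of course forgoes the computational savings the truncation is meant to provide).

    import numpy as np
    from itertools import combinations

    def permanent(M):
        # Exact permanent via Ryser's formula, O(2^n poly(n)); it stands in
        # here for the efficiently computable degree-k truncation.
        n = M.shape[0]
        total = 0.0 + 0.0j
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                total += (-1) ** r * np.prod(M[:, list(cols)].sum(axis=1))
        return (-1) ** n * total

    def truncated_perm_sq(M, k):
        # Placeholder for the degree-k Hermite-Fourier truncation of
        # |permanent(M)|^2 (formula (8) in the paper with Kindler).
        # The parameter k is unused in this stand-in.
        return abs(permanent(M)) ** 2

    def spoof_sample(A, k, bound, rng=None):
        # A is the n-by-m matrix of the experiment.  Repeatedly pick n of its
        # m columns with repetition (the proper weights for repeated columns
        # are only schematic here), compute the approximation X for the
        # resulting n-by-n minor, and keep it with probability X / bound,
        # where bound is an upper estimate used to normalize the acceptance.
        if rng is None:
            rng = np.random.default_rng()
        n, m = A.shape
        while True:
            cols = np.sort(rng.integers(0, m, size=n))
            X = truncated_perm_sq(A[:, cols], k)
            if rng.random() < min(X / bound, 1.0):
                return cols

For the actual test one would, of course, plug in the degree-2 (or higher) truncation in place of the exact permanent and compare the statistics of the resulting samples with those of the photonic samples.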

Since my paper with Guy Kindler offers an “off the shelf” classical algorithm that may “spoof” the claims, I propose to test it. (And I am a little surprised that this has not been tried already.)

 


14 Responses to Photonic Huge Quantum Advantage ???

  1. Jon Awbrey says:

    In the U.S. we have a method of electing Presidents called BozoSampling. Sometimes it works, sometimes not, but I don’t see any guarantee of supremacy to it.

  2. How do you reconcile this result with your impossibility theorem? Does it boil down to verifying that the results obtained by the new quantum computer are in fact correct, through classical means? But since the latter would take forever, does that mean it’s impossible to know who is right?

    • Gil Kalai says:

      As I wrote in the post, I think it is plausible that the approximation described in my paper with Guy Kindler will also pass the statistical tests of the paper, and this would be quite easy to verify.

      • Thanks! That’s indeed an obvious thing to try, given your nice result. And I now understand that these verifications of quantum experiments are merely based on some partial statistics, which is probably the best one could hope for.

  3. Andy L says:

    Would it be possible to design a statistical test that would distinguish between this spoofed result and a correct result (by testing in the classical domain), and apply that to the system in the paper?

    This critique, as I understand it, applies only if the experimental apparatus is not accurately sampling from the distribution, but is instead sampling from a less exotic distribution, or from a computationally feasible noisy variant of the distribution.

    If the experimental apparatus is correctly sampling from the underlying distribution, then would you agree that the impossibility result in Aaronson & Arkhipov (2010) would mean that this does in fact demonstrate quantum supremacy?

    • Gil Kalai says:

      Dear Andy,

      “If the experimental apparatus is correctly sampling from the underlying distribution, then would you agree that the impossibility result in Aaronson & Arkhipov (2010) would mean that this does in fact demonstrate quantum supremacy?”

      Of course I agree!

      • Eviatar Yadai says:

        May I ask a similar question about the Google experiment?
        Can we define a test that can be verified on a classical computer? (For simplicity, let's assume a supercomputer is not available.)
        Suppose, for example, that Google publishes a circuit C on ~30 qubits whose linear cross-entropy test Google computes in less than a minute, while classical verification takes a full month.
        Would such an improvement by orders of magnitude be enough for a demonstration of quantum supremacy?
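        (Just to fix what I mean by the test: my understanding of the statistic Google reports, written out schematically below; this is not their code, and ideal_probs stands for the ideal output probabilities that the classical computer would spend the month computing.)

            import numpy as np

            def linear_xeb(ideal_probs, samples, n_qubits):
                # Linear cross-entropy benchmark: 2^n times the average ideal
                # probability of the observed bitstrings, minus 1.  It is close
                # to 1 for perfect sampling from the circuit and close to 0 for
                # uniformly random bitstrings.
                p = np.array([ideal_probs[s] for s in samples])
                return (2 ** n_qubits) * p.mean() - 1.0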

      • Gil Kalai says:

        Eviatar, I would certainly regard a convincing experiment in the 30-40 qubit range as a big step toward “quantum advantage”.

        However, as you may know, there is also the issue of the running time of the classical algorithm. Let's talk about Google's full 53-qubit experiment. There are by now classical algorithms (by a team from IBM: Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, and Robert Wisnieff) that give the full list of probabilities six orders of magnitude quicker than the classical running time estimated by Google. (These new algorithms have not yet been implemented.)

        On top of that, there are three potential sources of further improvement:

        a) It is generally assumed that to reach fidelity t you only need a fraction t of the classical running time. This means that we can expect an improvement by roughly three more orders of magnitude over the IBM algorithm (Google's reported fidelity was roughly 0.002), although we do not know how to achieve it; see the back-of-the-envelope sketch at the end of this comment. (Actually, maybe some ideas from my paper with Guy Kindler could be useful for this, as I commented in the discussions related to Google's experiment.)

        b) Producing the probabilities (exact or approximate) not for all 2^53 bitstrings but, say, for 20 million bitstrings may also save running time. (But, as far as I know, it is not known how to use such an idea to improve the overall running time.)

        c) There are computational-complexity reasons to regard (exact or approximate) sampling tasks as easier than computing (exact or approximate) probabilities. We do not know whether this applies to classical hardness in our case, and, as far as I know, we do not know how to use it algorithmically.

        Having said all that, a reliable, reproducible, well-documented and successful random circuit sampling experiment for 30-40 qubits would certainly go against my theory and predictions and would go a long way toward convincing me that quantum advantage might be possible.
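        The back-of-the-envelope arithmetic behind (a), using the publicly reported figures (Google's 10,000-year estimate, IBM's 2.5-day estimate, and a fidelity of about 0.002) only as rough orientation:

            # Rough orientation only; the inputs are the publicly reported estimates.
            google_estimate_days = 10_000 * 365   # Google's ~10,000-year classical estimate
            ibm_estimate_days = 2.5               # IBM (Pednault et al.), not yet implemented
            fidelity = 0.002                      # roughly the fidelity Google reported

            ibm_speedup = google_estimate_days / ibm_estimate_days   # ~1.5e6, about six orders
            fidelity_factor = 1 / fidelity                           # ~500, about three more orders
            print(f"combined potential speedup: ~{ibm_speedup * fidelity_factor:.1e}")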

      • Eviatar Yadai says:

        Gil, thank you for your response and for your willingness to test your theory against defined and realistic tests.

        Points a, b and c (as I understand them) are conditional on future research.
        I hope there will be progress, but I think that, as opposed to questions of hardness etc., we need definitive answers to factual questions, such as whether we have a physical device that can compute faster than classical hardware.
        For me, the existence of such a device would be marvelous (the first examples of this kind I encountered were the hypothetical TWIRL and TWINKLE devices), and it deserves as realistic an answer as we can give. If research improves classical performance, I see no option other than repeating the tests.

        As for the IBM result, it is a more subtle subject because it depends on the availability of supercomputers. It would be welcome if that happened, but it seems reasonable that supercomputers have other priorities, and we can take that into account. It seems unreasonable to me to refute Google's results based on supercomputers while ruling out the option of verifying their results with supercomputers. Symmetry should be preserved.

        Anyway, I hope there will be verification of results at the border of the classical regime, as you suggest. Could you elaborate on why such verification hasn't occurred? Does Google respond to that challenge (formally or informally)?

  4. Pingback: Photonic Quantum Profit? - JellyEnt

  5. Quentin F says:

    Dear Gil Kalai,

    The situation here seems reminiscent of the Google quantum advantage claim. It seems the claim here is made against a noiseless classical simulation and does not take into account noisy simulations. I also did not see any evidence that their sampling was robust. This brings to mind your remarks on their earlier 14-qubit experiment, namely that the fidelity then was compatible with the approximation you propose in your paper with Kindler.

    Indeed, isn’t it commonly believed that robust noiseless Boson Sampling would require quantum fault tolerance for large n? The fidelity here seems to be only about 99%, a level Waintal et al. recently showed is very much within reach of classical computers for other quantum simulations. As you just highlighted in your discussion with Eviatar Yadai and Andy L, reaching the fidelities actually obtained by the experiments, as opposed to computing the noiseless distributions, might be another boon for classical simulation.

    Conversely, is there any reason to believe those qubits (Pan et al.) will not be chaotic, i.e., depend on an exponential input describing the noise? As far as I remember, *universal* linear photonic QC is only at 2 qubits (https://www.nature.com/articles/s41566-018-0236-y) because resource scaling is indeed exponential in the number of components above a certain error threshold, but I could be wrong on this.

    Scott Aaronson has apologized for his initial hostility and appears to have conceded in this latest “skirmish”: in his most recent blog post, he acknowledges that “there’s a decent chance that people will succeed in simulating the new experiment classically” and that “Gil put his finger precisely on a key issue”, and indeed S. Boixo seems to agree with you about classical simulation. From what I could gather, it seems noise due to photon loss and other factors in Gaussian Boson Sampling might land it back in the classical distributions you describe (https://arxiv.org/abs/1905.12075 investigates this).

    In said blog post, your algorithm with Kindler is seen as the best candidate for achieving a classical simulation. However, the authors have shown that the 2-mode distribution is indeed insufficient to spoof their results, and they are awaiting results for larger values of k. Was that surprising? k=2 seems low, and if the cost of getting a correlation of k/n with the ideal distribution grows only like n^k, does this mean you expect this experiment will asymptotically be simulable by using larger values of k? Getting fidelities of 99% should be very tractable. Since Scott acknowledges the experiment might be classically simulable, even if your current algorithm is insufficient, do you expect that a refinement of your algorithm could eventually achieve it regardless? (Indeed, see arXiv:2010.15595 for an efficient classical algorithm recently found for a different but related problem.)

    Parenthetically, as pointed out in other comments there has not been a (reliable) verification for either this experiment or Google’s at the border of the classical regime. On this point, I second the concerns of Eviatar Yadai: do we have any idea why Google and others have not done this yet? It should be very easy and would massively support their argument. This absence seems a bit suspicious.

    ***

    Relatedly, this got me thinking about the rest of your conjectures. I apologize if this is a naive question, I just wanted to make sure I understood them correctly.

    Correct me if I’m wrong, but the crux of your argument against quantum advantage seems to be that (A) “Even with low noise levels, the outcomes can be approximated by low-degree polynomials and simulated classically.” (B) “Pushing the noise even lower, to the levels that supremacy and error correction demand, is not necessarily impossible per se, but the samples will be very chaotic”, and then a separate prong is (C) “Chaotic samples are unable to demonstrate quantum advantage” (cf. our previous conversation on your blog).

    Perhaps (C) could be amended to (C2) “Chaotic samples are unable to demonstrate error correction and/or useful quantum algorithms”? It seems to me that if the Google and these more recent claims stand, it does not necessarily refute your conjectures outright, because the chaotic samples could have some low correlation with the noiseless distribution, so for specifically chosen experiments they can effectively ‘simulate themselves’, and more specifically ‘do a chaotic simulation of themselves’, better than a classical computer. However, their lack of robustness would still doom any practical applications (such as Fault Tolerant QC). I believe this is Robert Alicki’s position (i.e., a fundamental conflict between stability and reversibility of information processing, see https://doi.org/10.1142/S2010194514603536). Relatedly, the challenges quantum chaos poses to error correction were investigated 20 years ago (https://arxiv.org/abs/quant-ph/0012119). I didn’t quite understand their conclusion, but it seems chaos has an impact on the complexity of the code? I couldn’t find more recent clear papers dealing with this besides yours and R. Alicki’s contributions. I think I gathered from all this that chaos kills useful quantum algorithms? Anyone more knowledgeable in physics, please feel free to correct me.

    It seems that the fact that Google’s sample was chaotic and absolutely did not fit the conventional noise model has flown under the radar of the QC community, while it should be a huge wake-up call. (Maybe Martinis is referring to this issue when he talks about qubit killers: https://twitter.com/Moor_Quantum/status/1314260754439245827)

    So, perhaps (A) and (B) are still correct, which those two recent experiments seem to support, but (C) was a tad too enthusiastic? In that case, perhaps a good compromise would be Proposition (D) ‘Quantum advantage is possible only when chaos is irrelevant’. Proposition (D) would still prevent any *useful* QC application, notably FTQC, I believe.

    ***

    Well it seems I got a bit carried away. I apologize for the long post… to make up for it, here is a summary:

    TL;DR: Since Scott Aaronson admits this Boson Sampling experiment is likely classically spoofable (pending confirmation, of course) and your previous analysis of Google’s data shows that massive chaos appears even with only 12 qubits, the whole quantum advantage question looks to me to be resolving in favor of your conjectures instead.
    Relatedly, maybe your conjectures would not forbid quantum advantage in certain very narrow senses, but in the physical world the quantum chaos you demonstrated dooms practical applications of QC regardless?

    Best regards,

    Quentin

  6. Pingback: The Argument Against Quantum Computers – A Very Short Introduction | Combinatorics and more

  7. Pingback: Greatest Hits 2015-2022, Part II | Combinatorics and more

  8. Pingback: My Notices AMS Paper on Quantum Computers – Eight Years Later, a Lecture by Dorit Aharonov, and a Toast to Michael Ben-Or | Combinatorics and more
