Science magazine has an article by Adrian Cho, “Ordinary computers can beat Google’s quantum computer after all,” about the remarkable progress in classical simulations of sampling tasks like those that led to Google’s 2019 announcement of “quantum supremacy.” I reported on the breakthrough paper of Feng Pan and Pan Zhang in this post, and there are updates in the post itself and in the comment section about a variety of related works by several groups of researchers.
The very short summary is that classical algorithms are by now ten orders of magnitude faster than those considered in the Google paper, and hence the speed-up is ten orders of magnitude lower than Google’s fantastic claims. (The Google paper claimed that its ultimate task, which required 300 seconds on the quantum computer, would require 10,000 years on a powerful supercomputer; with the new algorithms the task can be done in a matter of seconds.)
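To make the orders-of-magnitude bookkeeping concrete, here is a back-of-the-envelope calculation (my own illustration; the 10^10 factor is the rough improvement quoted above, not a precise benchmark):

```python
import math

# Google's 2019 comparison: 300 quantum seconds vs. an estimated
# 10,000 classical years.
SECONDS_PER_YEAR = 365 * 24 * 3600              # ~3.15e7

claimed_classical = 10_000 * SECONDS_PER_YEAR   # ~3.15e11 seconds
quantum_time = 300                              # seconds

claimed_speedup = claimed_classical / quantum_time  # ~1e9, nine orders

# If the new classical algorithms are ~1e10 times faster than those
# assumed in the Google paper, the classical running time drops to seconds:
new_classical = claimed_classical / 1e10

print(f"claimed speed-up: ~10^{round(math.log10(claimed_speedup))}")
print(f"new classical time: ~{new_classical:.0f} seconds")
```

So the claimed quantum advantage of roughly nine orders of magnitude is wiped out once the classical baseline improves by ten.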
Also regarding the Google supremacy paper: my paper with Yosi Rinott and Tomer Shoham, “Statistical Aspects of the Quantum Supremacy Demonstration,” has just appeared in Statistical Science. (Click on the link for the journal version.) The Google 2019 paper, and NISQ experiments more generally, raise various interesting statistical issues. (In addition, it is important to double-check various statistical claims of the paper.) One of our findings is that there is a large gap between the empirical distribution and the Google noise model. I hope to devote a future post to our paper and to some further research we have been doing.
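Much of the statistical analysis revolves around the fidelity estimator used in the Google paper, the linear cross-entropy benchmark F_XEB = 2^n ⟨p(x_i)⟩ − 1, where the p(x_i) are the ideal-circuit probabilities of the sampled bitstrings. Here is a minimal sketch of how this estimator separates ideal from uniformly random samples; the toy Porter–Thomas data and the function name are my own illustration, not Google’s data:

```python
import numpy as np

def linear_xeb(ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: F_XEB = 2^n * mean(p(x_i)) - 1.

    For Porter-Thomas ideal probabilities, F_XEB is near 1 for samples
    drawn from the ideal distribution and near 0 for uniform samples.
    """
    return (2 ** n_qubits) * np.mean(ideal_probs) - 1

# Toy example with n = 10 qubits (hypothetical numbers):
rng = np.random.default_rng(0)
n = 10
N = 2 ** n

# Porter-Thomas-style ideal distribution: exponential weights, normalized.
p = rng.exponential(1.0, size=N)
p /= p.sum()

uniform_samples = rng.integers(0, N, size=50_000)  # "fully noisy" sampler
ideal_samples = rng.choice(N, size=50_000, p=p)    # "noiseless" sampler

print(linear_xeb(p[uniform_samples], n))  # close to 0
print(linear_xeb(p[ideal_samples], n))    # close to 1
```

In the experiment only a noisy estimate of this quantity is available, and the question of how well the assumed noise model fits the empirical distribution of the p(x_i) is exactly where statistical scrutiny comes in.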
The leaking of the Google paper on September 23, 2019 led to huge media and academic attention and many very enthusiastic reactions. I also wrote a few critical posts here on the blog about Google’s claims.
Here is a figure with the price of bitcoin around the time of Google’s (unintended) quantum supremacy announcement.
Update: There is a recent post on Shtetl-Optimized with Scott Aaronson’s take on the supremacy situation. Overall, our assessments are not very far apart. I don’t understand this claim: “If the experimentalists care enough, they could easily regain the quantum lead, at least for a couple more years, by (say) repeating random circuit sampling with 72 qubits rather than 53-60, and hopefully circuit depth of 30-40 rather than just 20-25.” In my view the most crucial task is to try to repeat, and to improve some aspects of, the Google experiment even for 20–40 qubits. (In any case, nothing is going to be easy.)