Correspondence with Daniel and Matthias

(March 2014) Following the new paper by Vinci, Albash, Mishra, Warburton, and Lidar, I had another round of interesting exchanges with Daniel and Matthias regarding the old and new papers that we discussed in the Shtetl-optimized thread.

I mentioned the tension between two methodologies. From the point of view of statistical hypothesis testing, the sequence of papers looks (certainly in hindsight) like a paper trying to establish a connection between cancer and smoking based on two smokers and one nonsmoker, a reply analyzing another nonsmoker, a reply to the reply with a different analysis of this nonsmoker, another reply to the first paper with yet another nonsmoker, and a reply to the second reply with more analysis of this new nonsmoker. Seen through this analogy, this line of research does not look reasonable.
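To make the analogy concrete, here is a small sketch (the counts are invented purely for illustration) of how little a standard significance test can conclude from such a tiny sample:

```python
# Illustration only: a Fisher exact test on an invented 2x2 table with just
# four "subjects", mirroring the two-smokers-one-nonsmoker analogy.
from scipy.stats import fisher_exact

#            cancer  no cancer
table = [[2, 0],   # smokers
         [0, 2]]   # nonsmokers

_, p_value = fisher_exact(table, alternative="two-sided")
print(f"p-value = {p_value:.3f}")
# Even this "perfect" split of four observations gives p ≈ 0.33,
# so no significant association can be claimed either way.
```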

However, when we regard the D-Wave device as an unknown physical phenomenon and use standard scientific methodology (Daniel referred to this as Popperian, but it applies equally well to other approaches in the philosophy of science), we simply progress by gradually testing proposed models one by one.

I was curious whether the second approach is in line with common statistical reasoning in experimental physics.

Here is a brief summary of the responses and the subsequent correspondence.

The usual procedure for describing an experiment is to come up with the most natural model that may describe the experimental results and then compare it to the experimental measurements. If discrepancies are found, one needs either to modify the model or to explain them through deficiencies of the experiment. By performing various experiments, one then accumulates evidence that guides one’s understanding of the experimental system.

Applied to the D-Wave device: the device was built to implement a quantum transverse-field Ising model, and therefore this is the most natural model to check against. What the Boixo et al. paper showed is that the behavior of the device is fully consistent with that model. This provided evidence that the machine performs quantum annealing and that its behavior is consistent with what they set out to build.
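For reference, the time-dependent transverse-field Ising Hamiltonian that such a device is designed to realize has, up to sign and normalization conventions, the standard form

$$ H(t) \;=\; -A(t)\sum_i \sigma^x_i \;+\; B(t)\Big(\sum_i h_i\,\sigma^z_i \;+\; \sum_{i<j} J_{ij}\,\sigma^z_i\sigma^z_j\Big), $$

where the transverse-field schedule A(t) is turned off and the problem schedule B(t) is turned on during the anneal, and the h_i and J_{ij} encode the Ising instance.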

This is, of course, no proof of anything (since experiments can never provide proof) and no statement about any large-scale quantum effects.

The Shin et al. paper shows that the behavior of our simulated quantum annealer can be described by a (finely tuned) mean-field version of the model, which contains fully coherent qubits but no entanglement. This result does not invalidate the Boixo et al. result, but it demonstrates that a quantum annealer for a random Ising spin glass, operating at the relatively high temperatures of the D-Wave device, might not profit from entanglement in its performance. This adds to the Ronnow et al. result about “no speedup” on the one hand and to the evidence for 8-qubit entanglement on the other.
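Roughly, and as I understand the Shin et al. construction, each qubit is replaced by a classical planar rotor with angle θ_i, so that the quantum Hamiltonian above is replaced by a classical energy of the form

$$ E(t) \;=\; -A(t)\sum_i \sin\theta_i \;+\; B(t)\Big(\sum_i h_i\cos\theta_i \;+\; \sum_{i<j} J_{ij}\cos\theta_i\cos\theta_j\Big), $$

evolved by Monte Carlo updates at the device temperature; the rotors mimic single-qubit coherence but carry no entanglement.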

I raised again the possibility (which I regard as very plausible) that statistical tests of this type, regarding input/output behavior, cannot say anything about the question of whether the D-Wave machine implements a quantum evolution based on the transverse-field Ising model (or some variant of it) or rather some classical version of it.

The responses emphasized that this is not a back-and-forth of various models for a black box, but hypothesis testing based on what the machine was designed to implement. As for the latest paper (Vinci et al.), given that (as shown by Shin et al.) a mean-field model describes the performance at 108 qubits, one can wonder whether this is all the machine does. The new paper (Vinci et al.) shows that the Shin et al. model fails to agree with the new experiments, and that the machine is more complex than Shin et al.’s simple model.
