Quantum computing: achievable reality or unrealistic dream

[Images: Michel Dyakonov’s view on QC, and my view (based on Michel’s drawing*)]

Update:

[Image: Alexander Vlasov’s view (based on Michel and Konstantin’s drawing)]

It has been a while since I devoted a post to quantum computation. Meanwhile, we had a cozy, almost private, easy-going, and very interesting discussion thread on my previous post, from March 2014 (which featured my Simons Institute videotaped lectures I and II).

What can we learn from a failure of quantum computers?

Last week we had a workshop on “Quantum computing: achievable reality or unrealistic dream.” This was a joint venture of the American Physical Society and the Racah Institute of Physics here at HUJI, organized by Professor Miron Ya. Amusia, and it featured me and Nadav Katz as the main speakers. Here are the slides of my lecture: What can we learn from a failure of quantum computers.


Noise Sensitivity and BosonSampling

Earlier, I gave a lecture in our CS colloquium about recent work with Guy Kindler on the noise sensitivity of BosonSampling. We show that for a constant level of noise, noisy BosonSampling can be approximated by bounded-depth computation, and that the correlation between the noisy outcome and the noiseless outcome tends to zero if the noise level is ω(1/n), where n is the number of bosons. Here is the paper Gaussian noise sensitivity and BosonSampling, the videotaped lecture Complexity and sensitivity of noisy BosonSampling, and the slides of the lecture.
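To get a feel for the kind of decay involved, here is a toy Monte Carlo sketch (my illustration, not code from the paper; the function names are made up). It models Gaussian noise by mixing the input matrix with a fresh Gaussian matrix and estimates the correlation between the noiseless and the noisy permanents. In this toy model the amplitude correlation decays like (1-\epsilon)^{n/2}, so any noise level \epsilon = \omega(1/n) drives it to zero as n grows.

```python
import itertools
import numpy as np

def permanent(a):
    """Permanent of a square matrix via Ryser's formula; fine for small n."""
    n = a.shape[0]
    total = 0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            total += (-1) ** r * np.prod(a[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def noisy_correlation(n=6, eps=0.2, trials=2000, seed=0):
    """Estimate |corr(per(X), per(Y))| for Y = sqrt(1-eps)*X + sqrt(eps)*X',
    with X, X' independent complex Gaussian matrices (a toy noise model)."""
    rng = np.random.default_rng(seed)
    gauss = lambda: (rng.standard_normal((n, n))
                     + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    v, w = [], []
    for _ in range(trials):
        x, xp = gauss(), gauss()
        v.append(permanent(x))
        w.append(permanent(np.sqrt(1 - eps) * x + np.sqrt(eps) * xp))
    v, w = np.array(v), np.array(w)
    return abs(np.vdot(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w))

# empirical correlation vs. (1-eps)^(n/2) for n=6, eps=0.2
print(noisy_correlation(), (1 - 0.2) ** 3)
```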

Contagious error sources would need time travel to prevent quantum computation

On the positive side, Greg Kuperberg and I wrote a paper Contagious error sources would need time travel to prevent quantum computation, showing that for a large class of correlated noise, (teleportation-based) quantum fault-tolerance works! Greg and I have had a decade-long email discussion (over 2000 emails) regarding quantum computers; this work grew from our 2009 discussion (about my “smoothed Lindblad evolution” model) and relies heavily on ideas of Manny Knill.

Nadav Katz: Quantum information science – the state of the art

Some years ago, two brilliant experimentalists, Hagai Eisenberg and Nadav Katz, joined the already strong, mainly theoretical, quantum information group here at HUJI. Nadav Katz gave the second lecture in the workshop, and here are the slides of Nadav’s lecture: Quantum information science – the state of the art.


Experimental progress toward stable encoded qubits

Also very much on the positive side, Nadav mentioned remarkable recent progress by the Martinis group: certain encoded states, based on nine physical qubits, that are an order of magnitude (a factor of 8.4, to be precise) more stable than the “raw” qubits used for creating them!!

Here is a link to the paper:  State preservation by repetitive error detection in a superconducting quantum circuit, by J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O’Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and John M. Martinis.

Update: Further comments on a Shtetl-Optimized post (especially a comment by Graeme Smith) help to place the new achievement of the Martinis group within the seven milestones toward quantum computers from a 2012 Science paper by Schoelkopf and Devoret, which originated in David DiVincenzo’s 2000 paper “The physical implementation of quantum computation”. (You can also watch these milestones here.)

The new achievement of a very robust realization of certain encoded states can be seen as reaching milestone 3.5. The difference between the 3.5th milestone and the 4th milestone plays a central role in the seventh post of my 2012 debate with Aram Harrow, in connection with a conjecture I made in the first post (“Conjecture 1”). Aram made the point that classical error-correction can lead to very stable encoded qubits in certain states (which is essentially the 3.5th milestone). I gave a formal description of the conjecture, which essentially asserts that the 4th milestone, namely insisting that encoded qubits allow arbitrary superpositions, cannot be reached. As I said many times (see, for example, the discussion in my Simons Institute videotaped lecture II), a convincing demonstration of the 4th milestone, namely an implementation of quantum error-correction with encoded qubits which are substantially more stable than the raw qubits (and allow arbitrary superpositions of the encoded qubit), will disprove my conjectures. Such stable encoded qubits are expected from implementations of the distance-5 surface code. So we are 0.5 milestones away :)

I will be impressed to see even a realization of a distance-3 (or distance-5) surface code that gives good-quality encoded qubits, even if the encoded qubits have a quality somewhat worse than that of the raw qubits used for the encoding. These experiments, including those already carried out, also give various other opportunities to test my conjectures.
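Aram’s point above can be seen in a deliberately classical toy simulation (my sketch, not the experiment’s analysis; the names are mine): a repetition code stores a bit far more stably than a raw bit under independent bit-flip noise, which is the mechanism behind very stable encoded states of a particular kind; it offers no protection against phase errors, and hence nothing for arbitrary superpositions.

```python
import numpy as np

def logical_flip_rate(p, n=3, trials=200_000, seed=1):
    """Monte Carlo estimate of the logical error rate of an n-bit repetition
    code under independent bit flips with probability p, decoded by majority
    vote. For n=3 the exact rate is 3p^2 - 2p^3, well below p when p is small."""
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, n)) < p
    return (flips.sum(axis=1) > n // 2).mean()

for p in (0.05, 0.1, 0.2):
    print(f"p={p}: encoded {logical_flip_rate(p):.4f}, exact {3*p**2 - 2*p**3:.4f}")
```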

Peter Shor’s challenge #1 and my predictions from the failure of quantum computation

My lecture on predictions from the failure of QC is based on two lengthy recent comments (first, second) regarding predictions from the failure of quantum computers. In April 2014, Peter Shor challenged me with the following:

Navier-Stokes Fluid Computers

[Image: smart fluid (a magneto-rheological suspension)]

Terry Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an averaged three-dimensional Navier-Stokes equation.

The paper proves a remarkable negative answer to the regularity conjecture for a certain variant of the NS equations; namely (or, perhaps more precisely), the main theorem demonstrates finite-time blowup for an averaged Navier-Stokes equation. (This already suffices to show that certain approaches to a positive answer for the real problem are not viable.) The introduction ends with the following words.

“This suggests an ambitious (but not obviously impossible) program (in both senses of the word) to achieve the same effect for the true Navier-Stokes equations, thus obtaining a negative answer to Conjecture 1.1 (the regularity conjecture for the 3D NS equations)… Somewhat analogously to how a quantum computer can be constructed from the laws of quantum mechanics [Here Tao links to Benioff’s 1982 paper: “Quantum mechanical Hamiltonian models of Turing machines”], or a Turing machine can be constructed from cellular automata such as “Conway’s Game of Life”, one could hope to design logic gates entirely out of ideal fluid (perhaps by using suitably shaped vortex sheets to simulate the various types of physical materials one would use in a mechanical computer). If these gates were sufficiently “Turing complete”, and also “noise-tolerant”, one could then hope to combine enough of these gates together to “program” a von Neumann machine consisting of ideal fluid that, when it runs, behaves qualitatively like the blowup solution used to establish Theorem 1.4. [The paper’s main theorem] Note that such replicators, as well as the related concept of a universal constructor, have been built within cellular automata such as the “Game of Life.”

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice. The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties. In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful. (Emphasis mine.)

Interesting idea!

And what Tao does goes well beyond an idea: he essentially implements this program for a close relative of the NS equations! I am not sure whether universal computing is established for these systems, but the proof of the finite-time blowup theorem certainly uses some computational-looking gadgets and also, as Terry explains, some form of fault-tolerance.
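For the cellular-automaton side of Tao’s analogy, here is the most basic ingredient in runnable form (an illustration only; building actual gates takes far more machinery): a minimal Game of Life stepper with a glider, the moving pattern that serves as the “signal” out of which the Life universality constructions assemble logic gates. The missing fluid analogue is precisely such a robust, gate-friendly signal.

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a 2D 0/1 array, periodic boundary."""
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

grid = np.zeros((12, 12), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[r, c] = 1

for _ in range(4):           # after 4 steps the glider reappears, shifted
    grid = life_step(grid)   # one cell diagonally: a self-propagating signal
print(grid)
```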

Somewhat related ideas (unsupported by any results, of course) appeared in the seventh post, “Quantum repetition”, of my debate with Aram Harrow on quantum computing. (See, e.g., this remark, and this one, and this one.) The thread also contains interesting links, e.g., to Andy Yao’s paper “Classical physics and the Church-Turing Thesis.” In addition to the interesting question:

Does the NS equation in three dimensions support universal (classical) computation,

we can also ask what happens in two dimensions:

Can the NS equations in two dimensions be approximated at every scale by bounded-depth circuits?

One general question suggested there was the following: “It can be of interest (and perhaps harder compared to the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.”

Another interesting comment by Arie Israel is: “I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’ and those ideas date back at least to the sixties.”


Update: Here is the first paragraph from a nice article by Erica Klarreich in Quanta Magazine, entitled A Fluid New Path in Grand Math Challenge, about this development:

In Dr. Seuss’s book “The Cat in the Hat Comes Back,” the Cat makes a stain he can’t clean up, so he calls upon the help of Little Cat A, a smaller, perfect replica of the Cat who has been hiding under the Cat’s hat. Little Cat A then calls forth Little Cat B, an even smaller replica hidden under Little Cat A’s hat. Each cat in turn lifts his hat to reveal a smaller cat who possesses all the energy and good cheer of the original Cat, just crammed into a tinier package. Finally, Little Cat Z, who is too small to see, unleashes a VOOM like a giant explosion of energy, and the stain disappears.

And here is a follow-up post on Tao’s blog (and a few more: II, III), and a post on Shtetl-Optimized.

The flip side

Update (June 14): It is worth noting that while the purpose of Tao’s program is to show finite-time blowup for the 3D Navier-Stokes equations, these lines of ideas can (as is often the case) potentially be useful also toward a positive solution of the regularity conjecture. Specifically, one can try to show that the 3D Navier-Stokes equations do not support universal classical computation, and, even more specifically, do not support classical fault-tolerance and error correction. Here too an analogy with quantum computation can be useful: it is expected that for adiabatic processes computation requires a vanishing “spectral gap,” and that gapped evolutions with local Hamiltonians support only bounded-depth computation. Something analogous may apply to NS equations in bounded dimensions.

There are many caveats, of course: the quantum results are not proved for D>1, the NS equations are non-linear (which weakens the analogy), and showing that the evolution does not support computation does not imply, as far as we know, regularity.

Three more remarks: 1) On the technical level, an important relevant tool for the results on gapped systems with local Hamiltonians is the Lieb-Robinson inequality (see, e.g., this review paper). 2) For classical evolutions, a repetition mechanism (or the “majority function”) seems crucial for robust computation, and it will be interesting specifically to test whether 3D Navier-Stokes supports it. 3) If computation is not possible beyond bounded depth, this fact may lead to additional conserved quantities for NS, beyond the classical ones. (One more, June 28): It looks to me that the crucial question is whether the NS equations support only bounded computation or not, so this distinction captures places where circuit complexity gives clear mathematical distinctions.
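Regarding remark 1, the Lieb-Robinson inequality can be stated (in one standard form, quoted from memory, with constants suppressed) as follows: for observables A and B supported on regions X and Y, evolving under a Hamiltonian with short-range interactions,

\| [A(t),B] \| \le C \, \|A\| \, \|B\| \, e^{-\mu (d(X,Y)-v|t|)},

so information propagates with an effective velocity v, and outside this “light cone” the evolution does essentially nothing; this locality is what the bounded-depth statements for gapped local Hamiltonians lean on.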

Polymath 8 – a Success!

[Image: Yitang Zhang]

Update (July 22, ’14): The polymath8b paper “Variants of the Selberg sieve, and bounded intervals containing many primes” is now on the arXiv. See also this post on Terry Tao’s blog. Since the last update, we also had here at HUJI a beautiful learning seminar on small gaps between primes. James Maynard gave a series of three lectures, and additional lectures were given by Zeev Rudnick and Tamar Ziegler.

Update (Jan 9, ’14, corrected Jan 10): Polymath8b has just led to impressive progress: Goldston, Pintz, and Yıldırım showed that, conditioned on the Elliott-Halberstam conjecture (EHC), there are infinitely many pairs of primes with gap at most 16. Maynard improved this to 12. Polymath8b has just improved it further to 8, based on a generalized form of the EHC (proposed in 1986 by Bombieri, Friedlander, and Iwaniec). [Further update: 6, and there are reasons to suspect that further improvement requires a major breakthrough, namely getting over the “parity problem”.] The unconditional bound for gaps now stands at 270.

Update: A paper by James Maynard entitled “Small gaps between primes” proves that for every k there are infinitely many intervals of length f(k), each containing at least k primes. He also reduced the bound on gaps between infinitely many pairs of primes to 600. The method is also (said to be) much simpler. Amazing! Similar results were obtained independently by Terry Tao.

Terry Tao launched a follow-up project, polymath8b, to improve the bounds for gaps between primes based on Maynard’s results.

Update: Here is the paper: A new bound for gaps between primes, by D. H. J. Polymath.

Zhang’s breakthrough and Polymath8

The main objectives of the polymath8 project, initiated by Terry Tao back in June, were “to understand the recent breakthrough paper of Yitang Zhang establishing an infinite number of prime gaps bounded by a fixed constant {H}, and then to lower that value of {H} as much as possible.”

Polymath8 was a remarkable success! Within two months, the best value of H, which was 70,000,000 in Zhang’s proof, was reduced to 5,414. Moreover, the polymath setting looked advantageous for this project compared to traditional ways of doing mathematics.

The polymath project gave an opportunity to a number of researchers to understand Zhang’s proof and the earlier breakthrough by Daniel Goldston, János Pintz, and Cem Yıldırım. It also gave an opportunity to a larger number of mathematicians to get some feeling for the mathematics involved.

The story

Twin primes are a pair of primes p and p+2. The ancient twin prime conjecture asserts that there are infinitely many twin primes. The prime number theorem asserts that there are (asymptotically) n/\log n primes smaller than a positive integer n, and this implies that we can find arbitrarily large pairs of consecutive primes p and q such that q-p is at most \log p. Until a few years ago nothing asymptotically better was known. Goldston, Pintz, and Yıldırım (GPY) showed in 2005 that there are infinitely many pairs of primes p and q such that q-p is O(\sqrt {\log p}). A crucial idea was to derive information on gaps between primes from the distribution of primes in arithmetic progressions. GPY showed that, conditioned on the Elliott-Halberstam conjecture (EHC), there are infinitely many pairs of primes with bounded gaps (going all the way down to 16, depending on a certain parameter in the conjecture, but not to 2). Yitang Zhang did not prove the EHC, but, based on further understanding of the situation, found a way to shortcut the conjecture and to prove unconditionally that there are infinitely many pairs of primes with bounded gaps!
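As a quick numerical companion to this paragraph (a throwaway sketch; the range is chosen arbitrarily), a sieve of Eratosthenes exhibits the prime number theorem’s count n/\log n, the average gap \log n, and the abundance of twin primes in small ranges:

```python
import numpy as np

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False
    return np.flatnonzero(sieve)

N = 10**6
ps = primes_up_to(N)
gaps = np.diff(ps)
print("pi(N):", len(ps), " N/log N:", int(N / np.log(N)))   # prime number theorem
print("average gap:", round(gaps.mean(), 2), " log N:", round(np.log(N), 2))
print("twin prime pairs below N (gap 2):", int((gaps == 2).sum()))
```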

Here is a very nice 2007 survey article by Kannan Soundararajan on this general area of research and the GPY breakthrough. (One thing I recently learned is that Soundararajan is called by friends and colleagues “Sound”.) This article starts with a very thoughtful and endearing answer to the question: “Why do we care at all? After all primes were meant to be multiplied, not subtracted (or added).”

Here is a short list of thoughts (things I learned, things I wish to understand better…) from following (from a distance) Polymath8 and related Internet activity.

1) How does information on primes in arithmetic progressions lead to information on gaps between primes?

I do not really understand why information on primes in arithmetic progressions, e.g., the Elliott-Halberstam conjecture, leads to the conclusion regarding primes with bounded gaps. I would be very happy to get a feeling for it.

2) The three-primes barrier.

Already GPY tried to extend their methods to show the existence of three primes in a bounded interval of integers. So far, it is not known how to show that intervals of the form [n, n+o(\log n)] contain triples of primes infinitely often. Perhaps to actually solve the twin prime conjecture we will need a breakthrough for triples of primes, but maybe not. See also this MO question asked by Noam Elkies.

Update: Here is another interesting MO question, Quantitative lower bounds related to Zhang’s theorem on bounded gaps, asked by Eric Naslund. Eric asks: what can one say, based on Zhang’s work, about the smallest value of a pair of primes a distance k apart?

3) Cauchy-Schwarz everywhere.

This may sound silly, but the way the Cauchy-Schwarz (C-S) inequality is used again and again makes you wonder why C-S is so useful, and why it is mainly C-S that is so useful.
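For reference, here is the inequality itself, in the shape it typically takes in these arguments:

\left( \sum_{n \le x} a_n b_n \right)^2 \le \left( \sum_{n \le x} a_n^2 \right) \left( \sum_{n \le x} b_n^2 \right).

Roughly speaking (my gloss), a typical sieve-theoretic application takes a_n to detect primes and b_n to be a smoother sieve weight, trading one sum that cannot be evaluated directly for two that can.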

4) Can detailed statistical understanding of primes in sets other than APs be useful?

In recent years there was much activity (and I was also interested) in Möbius randomness and analogs of the prime number theorem for various more complicated subsets of integers (e.g., subsets defined by various properties of the digital expansion). Can understanding of this kind also be used for the prime-gaps questions?

5) Usefulness of Deligne’s work on the Riemann hypothesis for function fields for questions in analytic number theory.

I knew, of course, that Deligne famously proved analogs of the Riemann hypothesis for function fields in great generality, but I was not aware that these results have applications to “ordinary” analytic number theory. Again, this is something I would be happy to know a little more about. There is a nice recent post on the Riemann hypothesis in various settings on “What’s new”.

6) Parity problem. (Added Nov 27) There is a “parity problem” which seems to be a difficult obstacle to getting the gap down to two (and to various related goals). Terry Tao wrote about it in 2007 in this post. In polymath8b VII an attempt to cross the “parity barrier” was made, but (as people expected) it turned out that the parity barrier indeed shows up and causes this attempt to fail. (Update July 14:) This is further explained in this new post on Tao’s blog.

7) (Added Nov 27) One thing I am curious about is the following. Consider a random subset of primes (taking every prime with probability p, independently; say p=1/2), and consider only the integers composed of these primes. I think it is known that this system of “integers” satisfies (almost surely) the PNT but not at all the RH. We can consider the properties BV (Bombieri-Vinogradov), or more generally EH(θ), and the quantities H_m. For such systems, does BV typically hold, or is it rare like the RH? Does Maynard’s implication apply in this generality? Nicely, here we can hope even for infinitely many consecutive primes. Update: after thinking about it further, and after a little discussion over at polymath8b, it looks like current sieve methods, and some of the statements involved, rely very strongly on both the multiplicative and additive structure of the integers and do not allow extensions to other systems of “integers.”

Update (August 23): Before moving to small gaps, Sound’s 2007 survey briefly describes the situation for large gaps. The Cramer probabilistic heuristic suggests that there are consecutive primes in [1,n] which are c(\log n)^2 apart, but not C(\log n)^2 apart, where c and C are some small and large positive constants. It follows from the prime number theorem that there is a gap of at least \log n. There were a few improvements in the 1930s, ending with a remarkable result by Rankin, who showed that there is a gap as large as \log n \cdot \log \log n \cdot \log \log \log \log n \cdot (\log \log \log n)^{-2}. Last week Kevin Ford, Ben Green, Sergei Konyagin, and Terry Tao, and independently James Maynard, were able to improve Rankin’s estimate by a function that goes to infinity with n. See this post on “What’s new.”
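A tiny empirical check of these two benchmarks (an illustration only; small ranges say nothing about the asymptotics): the largest gap below N sits between the prime-number-theorem bound \log N and Cramer’s (\log N)^2.

```python
import numpy as np

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n (as in the earlier sketch)."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False
    return np.flatnonzero(sieve)

# Largest prime gap below N vs. log N (PNT) and (log N)^2 (Cramer heuristic).
for N in (10**4, 10**5, 10**6):
    gap = int(np.diff(primes_up_to(N)).max())
    print(N, "max gap:", gap, " log N:", round(np.log(N), 1),
          " (log N)^2:", round(np.log(N) ** 2, 1))
```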

Combinatorics, Mathematics, Academics, Polemics, …

1. About:

My name is Gil Kalai and I am a mathematician working mainly in the field of combinatorics. Within combinatorics, I work mainly on geometric combinatorics, the study of convex polytopes and related objects, and the analysis of Boolean functions and related matters. I am a professor at the Institute of Mathematics at the Hebrew University of Jerusalem, and I also have a long-term visiting position at the departments of Computer Science and Mathematics at Yale University, New Haven.

Gosset polytope: a hand drawing by Peter McMullen of the plane projection of the 8-dimensional 4-simplicial 4-simple Gosset polytope.