Quantum computing: achievable reality or unrealistic dream

[Image: Michel Dyakonov's view on QC] [Image: My view (based on Michel's drawing*)]

Update:

[Image: Alexander Vlasov's view (based on Michel and Konstantin's drawing)]

It has been a while since I devoted a post to quantum computation. Meanwhile, we had a cozy, almost private, easy-going, and very interesting discussion thread on my previous, March 2014 post (which featured my videotaped Simons Institute lectures (I, II)).

What can we learn from a failure of quantum computers?

Last week we had a workshop on “Quantum computing: achievable reality or unrealistic dream.” This was a joint venture of the American Physical Society and the Racah Institute of Physics here at HUJI, organized by Professor Miron Ya. Amusia, and it featured me and Nadav Katz as the main speakers. Here are the slides of my lecture: What can we learn from a failure of quantum computers.

[Image: pred1]

Noise Sensitivity and BosonSampling

Earlier, I gave a lecture in our CS colloquium about a recent work with Guy Kindler on noise sensitivity of BosonSampling. We show that for a constant level of noise, noisy BosonSampling can be approximated by bounded-depth computation, and that the correlation between the noisy outcome and the noiseless outcome tends to zero if the noise level is ω(1/n), where n is the number of bosons. Here are the paper Gaussian noise sensitivity and BosonSampling, the videotaped lecture Complexity and sensitivity of noisy BosonSampling, and the slides of the lecture.
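To get a feel for the ω(1/n) threshold, here is a toy Monte Carlo sketch of my own (not a computation from the paper). For an n × n matrix X with i.i.d. standard complex Gaussian entries and a noisy copy Y = √(1−ε)·X + √ε·E, where E is a fresh Gaussian matrix, a direct computation gives correlation (1−ε)^(n/2) between Per(X) and Per(Y), so any noise level well above 1/n wipes out the correlation as n grows:

```python
import itertools
import math
import random

def permanent(mat):
    # Brute-force permanent: sum over all permutations (fine for small n).
    n = len(mat)
    total = 0
    for perm in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= mat[i][perm[i]]
        total += prod
    return total

def gaussian_matrix(n, rng):
    # i.i.d. standard complex Gaussian entries with E|z|^2 = 1.
    return [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) / math.sqrt(2)
             for _ in range(n)] for _ in range(n)]

def estimated_correlation(n, eps, samples, rng):
    # Monte Carlo estimate of the correlation between Per(X) and Per(Y),
    # where Y = sqrt(1-eps)*X + sqrt(eps)*E with E a fresh Gaussian matrix.
    num, d1, d2 = 0j, 0.0, 0.0
    a, b = math.sqrt(1 - eps), math.sqrt(eps)
    for _ in range(samples):
        X = gaussian_matrix(n, rng)
        E = gaussian_matrix(n, rng)
        Y = [[a * X[i][j] + b * E[i][j] for j in range(n)] for i in range(n)]
        pX, pY = permanent(X), permanent(Y)
        num += pX * pY.conjugate()
        d1 += abs(pX) ** 2
        d2 += abs(pY) ** 2
    return (num / math.sqrt(d1 * d2)).real

rng = random.Random(0)
for eps in (0.0, 0.1, 0.5):
    est = estimated_correlation(3, eps, 4000, rng)
    print(f"eps={eps}: estimated {est:.3f}, predicted {(1 - eps) ** 1.5:.3f}")
```

With ε = c/n the predicted correlation tends to e^(−c/2), a constant; for ε much larger than 1/n it tends to zero, which is the flavor of the noise-sensitivity statement above.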

Contagious error sources would need time travel to prevent quantum computation

On the positive side, Greg Kuperberg and I wrote a paper  Contagious error sources would need time travel to prevent quantum computation  showing that for a large class of correlated noise, (teleportation-based) quantum fault-tolerance works! Greg and I have had a decade-long email discussion (over 2000 emails) regarding quantum computers, and this work grew from our 2009 discussion (about my “smoothed Lindblad evolution” model), and heavily relies on  ideas of Manny Knill.

Nadav Katz: Quantum information science – the state of the art

Some years ago, two brilliant experimentalists, Hagai Eisenberg and Nadav Katz,  joined  the already strong, mainly theoretical, quantum information group here at HUJI.  Nadav Katz gave the second lecture in the workshop, and here are the slides of Nadav’s  lecture: Quantum information science – the state of the art.

[Image: nadavtalk]

Experimental progress toward stable encoded qubits

Also very much on the positive side, Nadav mentioned remarkable recent progress by the Martinis group showing certain encoded states, based on 9 physical qubits, which are an order of magnitude (a factor of 8.4, to be precise) more stable than the “raw” qubits used for creating them!

Here is a link to the paper:  State preservation by repetitive error detection in a superconducting quantum circuit, by J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O’Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and John M. Martinis.

Update: Further comments on a Shtetl-Optimized post (especially a comment by Graeme Smith) help to place the new achievement of the Martinis group within the seven milestones toward quantum computers from a 2012 Science paper by Schoelkopf and Devoret, which originated in David DiVincenzo’s 2000 paper “The physical implementation of quantum computation”. (You can also watch these milestones here.)

The new achievement of a very robust realization of certain encoded states can be seen as reaching milestone 3.5. The difference between the 3.5th milestone and the 4th milestone plays a central role in the seventh post of my 2012 debate with Aram Harrow, in connection with a conjecture I made in the first post (“Conjecture 1”). Aram made the point that classical error-correction can lead to very stable encoded qubits in certain states (which is essentially the 3.5th milestone). I gave a formal description of the conjecture, which essentially asserts that the 4th milestone, namely the requirement that encoded qubits allow arbitrary superpositions, cannot be reached. As I have said many times (see, for example, the discussion in my 2012 Simons Institute videotaped lecture 2), a convincing demonstration of the 4th milestone, namely an implementation of quantum error-correction with encoded qubits that are substantially more stable than the raw qubits (and allow arbitrary superpositions of the encoded qubit), will disprove my conjectures. Such stable encoded qubits are expected from implementations of the distance-5 surface code. So we are 0.5 milestones away :)

I will be impressed to see even a realization of a distance-3 (or distance-5) surface code that gives good-quality encoded qubits, even if the encoded qubits have a quality somewhat worse than that of the raw qubits used for the encoding. These experiments, including those already carried out, also give various other opportunities to test my conjectures.
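A toy classical calculation (my own illustration, not from the Kelly et al. paper) shows why protecting certain states, as in the repetition-code experiment, is the easier part: a distance-n repetition code suppresses bit-flip errors rapidly under majority vote. The physical error rate p below is a hypothetical figure, not the experiment's. A repetition code protects only classical codewords; protecting arbitrary superpositions (milestone 4) requires a genuine quantum code such as the surface code.

```python
from math import comb

def logical_error_rate(n, p):
    # Probability that a majority vote over n independent copies fails,
    # i.e. more than half of the n bits flip (n odd), each with probability p.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.01  # hypothetical physical bit-flip rate, for illustration only
for n in (1, 3, 5, 9):
    print(f"n={n}: logical error rate {logical_error_rate(n, p):.2e}")
```

Each increase in code distance buys roughly another factor of p in suppression, which is the classical mechanism behind the stability of the encoded states.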

Peter Shor’s challenge #1 and my predictions from the failure of quantum computation

My lecture on predictions from the failure of QC is based on two lengthy recent comments (first, second) regarding predictions from the failure of quantum computers. In April 2014, Peter Shor challenged me with the following: Continue reading

Answer to Test Your Intuition (22)

[Image: bundling-res]

 

Indeed, most people got it right! Bundling sometimes increases revenues, sometimes keeps revenues the same, and sometimes decreases revenues. In fact, this is an interesting issue which has been the subject of recent research efforts. So here are a few examples, as told in the introduction to a recent paper, Approximate Revenue Maximization with Multiple Items, by Sergiu Hart and Noam Nisan:

Example 1: Consider the distribution taking values 1 and 2, each with probability 1/2. Let us first look at selling a single item optimally: the seller can either price it at 1, selling always and getting a revenue of 1, or price the item at 2, selling with probability 1/2 and still obtaining an expected revenue of 1; so the optimal revenue for a single item is 1. Now consider the following mechanism for selling both items: bundle them together, and sell the bundle for price 3. The probability that the sum of the buyer’s values for the two items is at least 3 is 3/4, and so the revenue is 3 · 3/4 = 2.25, larger than the 2 obtained by selling them separately.

Example 2: For the distribution taking values 0 and 1, each with probability 1/2, selling the bundle can yield at most a revenue of 3/4, and this is less than twice the single-item revenue of 1/2.
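The arithmetic in Examples 1 and 2 can be checked by brute force. Here is a small sketch (my own check, not code from the paper) for two i.i.d. items and an additive buyer; for posted prices it suffices to consider prices equal to attainable values or sums:

```python
from fractions import Fraction
from itertools import product

def best_single_item_revenue(values, probs):
    # Best posted price for one item; WLOG the price is one of the values.
    return max(v * sum(p for w, p in zip(values, probs) if w >= v)
               for v in values)

def best_bundle_revenue(values, probs):
    # Two i.i.d. items sold only as a bundle; WLOG the price is an attainable sum.
    types = [(v1 + v2, p1 * p2)
             for (v1, p1), (v2, p2) in product(list(zip(values, probs)), repeat=2)]
    sums = {s for s, _ in types}
    return max(s * sum(p for t, p in types if t >= s) for s in sums)

half = Fraction(1, 2)
# Example 1: values 1, 2, each with probability 1/2
print(2 * best_single_item_revenue([1, 2], [half, half]))  # selling separately
print(best_bundle_revenue([1, 2], [half, half]))           # bundling
# Example 2: values 0, 1, each with probability 1/2
print(2 * best_single_item_revenue([0, 1], [half, half]))
print(best_bundle_revenue([0, 1], [half, half]))
```

The enumeration recovers 9/4 versus 2 for Example 1, and 3/4 versus 1 for Example 2.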

Example 3 (and 4): In some other cases neither selling separately nor bundling is optimal. For the distribution that takes values 0, 1 and 2, each with probability 1/3, the unique optimal auction turns out to offer the buyer the choice between any single item at price 2 and the bundle of both items at a “discount” price of 3. This auction gets a revenue of 13/9, which is larger than the revenue of 4/3 obtained from either selling the two items separately or selling them as a single bundle. (A similar situation happens for the uniform distribution on [0, 1], for which neither bundling nor selling separately is optimal (Alejandro M. Manelli and Daniel R. Vincent [2006]).)

Example 5: In yet other cases the optimal mechanism is not even deterministic and must offer lotteries for the items. This happens in the following example from a 2011 paper, “Revenue Maximization in Two Dimensions,” by Sergiu Hart and Phil Reny: let F be the distribution which takes values 1, 2 and 4, with probabilities 1/6, 1/2, 1/3, respectively. It turns out that the unique optimal mechanism offers the buyer the choice between buying any one good with probability 1/2 for a price of 1, and buying the bundle of both goods (surely) for a price of 4; any deterministic mechanism has a strictly lower revenue. See also Hart’s presentation “Two (!) good to be true”. Update: See also this paper by Hart and Nisan: How Good Are Simple Mechanisms for Selling Multiple Goods?
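The revenue of the Hart–Reny lottery mechanism can likewise be computed by enumerating the nine buyer types, now with fractional allocation probabilities for a risk-neutral buyer. This is my own evaluation of the stated menu under seller-favorable tie-breaking; the optimality claim itself is the theorem in their paper:

```python
from fractions import Fraction
from itertools import product

def menu_revenue(menu, values, probs):
    # Additive, risk-neutral buyer with i.i.d. item values; expected utility of
    # an entry ((q1, q2), t) is q1*v1 + q2*v2 - t. Ties go to the seller.
    outside = ((Fraction(0), Fraction(0)), Fraction(0))
    rev = Fraction(0)
    for (v1, p1), (v2, p2) in product(list(zip(values, probs)), repeat=2):
        (q1, q2), price = max(
            list(menu) + [outside],
            key=lambda e: (e[0][0] * v1 + e[0][1] * v2 - e[1], e[1]),
        )
        rev += p1 * p2 * price
    return rev

values = [1, 2, 4]
probs = [Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)]
half, one = Fraction(1, 2), Fraction(1)
# Either good with probability 1/2 at price 1, or the bundle surely at price 4.
lottery_menu = [((half, 0), Fraction(1)),
                ((0, half), Fraction(1)),
                ((one, one), Fraction(4))]
print(menu_revenue(lottery_menu, values, probs))
```

The lottery entries act as a cheap screen for low-value buyers while the high-value types still take the full bundle, which is how the randomization raises revenue.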

Update: See also Andy Yao’s recent paper An n-to-1 Bidder Reduction for Multi-item Auctions and its Applications. The paper is relevant both to the issue of bundling and to the issue of using randomized mechanisms for auctions. (Test your intuition (21).)