The Quantum Debate is Over! (and other Updates)

Quid est noster computationis mundus? (What is our computational world?)

Nine months after it started (much longer than expected), after eight posts on GLL (many more than planned), and after almost a thousand comments of overall good quality from quite a few participants, my scientific debate with Aram Harrow regarding quantum fault tolerance is essentially over. Some good food for thought, I hope. As always, there is more to be said on the matter itself, on the debate, and on other “meta” matters, but it is also useful to put it in the background for a while, to continue to think about quantum fault tolerance at the slow pace and in the solitude I am used to, and also to move forward on other fronts, which were perhaps a little neglected.

Here are the links to the eight posts. My initial post, “Perpetual Motion of The 21st Century?”, was followed by three posts by Aram: the first, “Flying Machines of the 21st Century?”, the second, “Nature does not conspire”, and the third, “The Quantum super-PAC.” We then had two additional posts, “Quantum refutations and reproofs” and “Can you hear the shape of a quantum computer?” Finally, we had two concluding posts: “Quantum repetition” and “Quantum supremacy or quantum control?”

EDP 22-27

We had six new posts on the Erdős Discrepancy Problem on Gowers’s blog (here is the link to the last one, EDP27). Tim contributed a large number of comments, and it was interesting to follow his line of thought. Other participants also contributed a few comments. One nice surprise for me was that the behavior of the hereditary discrepancy for homogeneous arithmetic progressions in {1,2,…,n} was found by Alexander Nikolov and Kunal Talwar. See the post From discrepancy to privacy and back and the paper. Noga Alon and I showed that it is {\tilde{\Omega}(\sqrt{\log n})} and at most {n^{O(\frac{1}{\log\log n})}}, and to my surprise Alexander and Kunal showed that the upper bound gives the correct behavior. Their argument relies on a connection with differential privacy.
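As a toy illustration of the objects involved (my own sketch, unrelated to the Nikolov–Talwar proof): for small n one can compute by brute force the minimum, over ±1 sequences, of the discrepancy over homogeneous arithmetic progressions. (The hereditary discrepancy further maximizes over restrictions to subsets; the plain version below is already enough to experiment with.)

```python
# Brute-force HAP discrepancy for small n (illustrative sketch only).
from itertools import product

def hap_discrepancy(signs, n):
    """Max |signs[d] + signs[2d] + ... + signs[kd]| over all d and all kd <= n."""
    best = 0
    for d in range(1, n + 1):
        partial = 0
        for m in range(d, n + 1, d):
            partial += signs[m]
            best = max(best, abs(partial))
    return best

def min_hap_discrepancy(n):
    """Minimum HAP discrepancy over all 2^n sign sequences (exponential search)."""
    return min(
        hap_discrepancy({i + 1: b for i, b in enumerate(bits)}, n)
        for bits in product((-1, 1), repeat=n)
    )

if __name__ == "__main__":
    for n in (5, 11, 12):
        print(n, min_hap_discrepancy(n))
```

For instance, the search confirms the classical fact that discrepancy 1 is achievable up to n = 11 but not at n = 12.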

Möbius randomness and computation

After the AC^0 prime number theorem was proved by Ben Green, and the Möbius randomness of all Walsh functions and monotone Boolean functions was proved by Jean Bourgain (see this MO question for details), the next logical step is low-degree polynomials over Z/2Z. (The Walsh functions are the degree-1 polynomials.) The simplest case, offered to me by Bourgain, is that of the Rudin–Shapiro sequence. (But for an ACC(2) result via the Razborov–Smolensky theorem we will need to go all the way to polynomials of polylogarithmic degree.) I asked this over MathOverflow. After three months of no activity I offered a bounty of 300 of my own MO reputation points. Subsequently, Terry Tao and Ben Green discussed some avenues, and eventually Tao solved the problem (and earned the 300 reputation points). Here is a very recent post on Möbius randomness on Terry Tao’s blog.
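To see what the statement says numerically, here is a small self-contained sketch (my own, with an arbitrary cutoff N) that estimates the normalized correlation \frac{1}{N}\sum_{n\le N}\mu(n)r(n), where r(n) is the Rudin–Shapiro sequence; Tao’s result implies this quantity tends to 0.

```python
# Estimate the Mobius / Rudin-Shapiro correlation (illustrative sketch only).

def mobius_sieve(N):
    """mu(n) for n = 0..N (entry 0 unused), via a simple prime sieve."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

def rudin_shapiro(n):
    """r(n) = (-1)^(number of pairs of consecutive 1s in the binary expansion of n)."""
    count = 0
    while n:
        if (n & 3) == 3:  # the two lowest bits are both 1
            count += 1
        n >>= 1
    return -1 if count % 2 else 1

if __name__ == "__main__":
    N = 10**6
    mu = mobius_sieve(N)
    total = 0
    for n in range(1, N + 1):
        total += mu[n] * rudin_shapiro(n)
        if n in (10**4, 10**5, 10**6):
            print(n, total / n)  # should shrink toward 0 as n grows
```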

Influences on large sets

In my post Nati’s Influence I mentioned two old conjectures (Conjecture 1, due to Benny Chor, and Conjecture 2) about the influence of large sets on Boolean functions. During Jeff Kahn’s visit to Israel we managed to disprove them both. The disproof is inspired by an old construction of Ajtai and Linial.

Tel Aviv, New Haven, Jerusalem

Last year we lived in Tel Aviv, which was a wonderful experience: especially walking on the beach every day and feeling the different atmosphere of the city. It is so different from my Jerusalem, and still the people speak fluent Hebrew. I am now in New Haven. It is getting cold and the fall colors are turning beautiful. It also feels like home after all these years. And next week I return to my difficult and beautiful (and beloved) Jerusalem.

The Quantum Fault-Tolerance Debate Updates

In a couple of days, we will resume the debate between Aram Harrow and me regarding the possibility of universal quantum computers and quantum fault tolerance. The debate takes place on the GLL (Gödel’s Lost Letter and P=NP) blog.

The Debate

Where were we?

My initial post “Perpetual Motion of The 21st Century?” presented my conjectures regarding how noisy quantum computers and noisy quantum evolutions really behave.

Aram’s first post was entitled “Flying Machines of the 21st Century?” It mainly dealt with the question: how is it possible that quantum fault tolerance is impossible (or really, really hard) while classical fault tolerance is possible (and quite easy)? Aram claimed that my conjectures, if true, would exclude classical computers as well.

Aram’s second post, entitled “Nature does not conspire,” dealt mainly with correlated errors. Aram claimed that it is unreasonable to assume errors as strongly correlated as my conjectures imply, and that the conjectured relation between the signal and the noise is in tension with the linearity of quantum mechanics.

Aram’s third post, “The Quantum super-PAC,” raised two interesting thought experiments and also discussed intermediate models.

Each post ended with a small rejoinder and included a short description of the earlier discussion. The discussion was quite extensive and very interesting.

What’s next

Aram and Steve Flammia wrote an interesting manuscript with appealing counterexamples to my Conjecture C. Our next planned post (it has now appeared) will discuss this matter.

Next, I will conclude with a post discussing Aram’s two main points from his first and second posts and some related issues which I find important.

These posts are mostly written, but since Aram was busy with pressing deadlines we waited several weeks before posting them. I also enjoyed the break, as the extensive discussion period was quite tiring.

A very short introduction to my position/approach

1) The crux of the matter is noise


A Discussion and a Debate

Heavier-than-air flight of the 21st century?

The very first post on this blog entitled “Combinatorics, Mathematics, Academics, Polemics, …” asked the question “Are mathematical debates possible?” We also had posts devoted to debates and to controversies.

A few days ago, the first post in a discussion between Aram Harrow, a brilliant computer scientist and quantum information researcher (and a decorated debater), and myself on quantum error correction was launched on Dick Lipton and Ken Regan’s big-league blog, Gödel’s Lost Letter and P=NP.

The central question we would like to discuss is:

Are universal quantum computers, based on quantum error correction, possible?

In preparation for the public posts, Ken, Aram, Dick, and I have been having a very fruitful and interesting email discussion on this and related matters, and the post itself has already led to very interesting comments. Ken is doing marvels in editing what we write.

Dick opened the post by talking about perpetual motion machines, which is ingenious because it challenges both sides of the discussion. Perpetual motion turned out to be impossible: will quantum computers enjoy the same fate? On the other hand (and closer to the issue at hand), an argument against quantum mechanics based on the impossibility of perpetual motion, made by none other than Einstein, turned out to be false: are skeptical ideas about quantum computers just as bogus? (The answer could be yes to both questions.) Some people claimed that heavier-than-air flight might be a better analogy. Sure, perhaps it is better.

But, of course, analogies with many human endeavors can be made, and of these endeavors, some went one way, some went the other, and for some we do not know yet.

Although this event is declared a debate, I would like to think of it as a discussion. On the time scale of such a discussion, what we can hope for is to better understand each other’s positions and, no less important, to better understand our own positions. (Maybe I will comment here about some meta aspects of this developing discussion/debate.)

A real debate

A real emerging debate is whether we (scientists) should boycott Elsevier. I tend to be against such an action, and especially against including refereeing papers for journals published by Elsevier as part of the boycott. Generally speaking, I liked Gowers’s critical post on Elsevier, but the winds of war and the associated rhetoric are not to my liking. The universities are quite powerful, and they deal, negotiate, and struggle with scientific publishers, and other similar bodies, on a regular basis. I tend to think that the community of scientists should not be part of such struggles and that such involvement will harm the community and science. This is a real debate! But it looks almost over. Many scientists joined the boycott and not many opposing opinions were voiced. It looks like we will have a little war and see some action. Exciting, as ever.

Aaronson and Arkhipov’s Result on Hierarchy Collapse


Scott Aaronson gave a thought-provoking lecture in our Theory seminar three weeks ago. (Actually, this was eleven months ago.) The slides are here. The lecture discussed two results regarding the computational power of quantum computers. One result, from this paper, gives oracle evidence that there are problems in BQP outside the polynomial hierarchy. The method is based on “magnification” of results on bounded-depth circuits. (It is related to the Linial–Nisan conjecture.)

The second result, which we are going to discuss in this post (along with some of my provoked thoughts), is a recent result of Scott Aaronson and Alex Arkhipov which asserts that if the power of quantum computers can be simulated by digital computers then the polynomial hierarchy collapses. More precisely, their result asserts that if sampling the probability distributions created by quantum computers (abbreviated as QSAMPLE) is in P, then the polynomial hierarchy collapses to its third level.

The beautiful and important paper of Aaronson and Arkhipov is now out. Its main point of view is related to what I describe in the sixth section about “photon machines.” Update: Let me mention related independent results by Michael J. Bremner, Richard Jozsa, and Dan J. Shepherd in the paper entitled “Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy.”

Here is the plan for this post:

1) The polynomial hierarchy and results about hierarchy collapse

2) The Aaronson Arkhipov (AA) theorem and its proof

3) Two problems posed by AA

Those are: does P=BQP already lead to a collapse of the polynomial hierarchy? And does APPROXIMATE-QSAMPLE being in P already lead to a collapse?

4) Does fault tolerance allow QSAMPLE (on the nose)? (Answer: yes)

5) Is there a quantum analog to Aaronson and Arkhipov’s result and what is the computational power of quantum computers?

6) Two competing scenarios

7) Aaronson and Arkhipov’s photon machines and the complexity class BOSONSAMPLING.


When Noise Accumulates

I wrote a short paper entitled “When Noise Accumulates” that contains the main conceptual points (described rather formally) of my work regarding noisy quantum computers. Here is the paper. (Update: Here is a new version, Dec 2010.) The exciting new Innovations in Computer Science conference in Beijing seemed tailor-made for this kind of work, but the paper did not make it that far. Let me quote the first few paragraphs. As always, remarks are welcome!

From the introduction: Quantum computers were proposed by Feynman and others and formally described by Deutsch, who also suggested that they can outperform classical computers. The idea was that since computations in quantum physics require an exponential number of steps on digital computers, computers based on quantum physics may outperform classical computers. Spectacular support for this idea came with Shor’s theorem, which asserts that factoring is in BQP (the complexity class associated with quantum computers).

The feasibility of computationally superior quantum computers is one of the most fascinating and clear-cut scientific problems of our time. The main concern regarding quantum-computer feasibility is that quantum systems are inherently noisy. (This concern was put forward in the mid-90s by Landauer, Unruh, and others.) 

The theory of quantum error correction and fault-tolerant quantum computation (FTQC) and, in particular, the threshold theorem which asserts that under certain conditions FTQC is possible, provides strong support for the possibility of building quantum computers. 

However, as far as we know, quantum error correction and quantum fault tolerance (and the highly entangled quantum states that enable them) are not experienced in natural quantum processes. It is therefore not clear if computationally superior quantum computation is necessary to describe natural quantum processes.

We will try to address two closely related questions. The first is: what are the properties of quantum processes that do not exhibit quantum fault tolerance, and how can such processes be formally modeled? The second is: what kinds of noise models cause quantum error correction and FTQC to fail?

A main point we would like to make is that it is possible that there is a systematic relation between the noise and the intended state of a quantum computer. Such a systematic relation does not violate the linearity of quantum mechanics, and it is expected to occur in processes that do not exhibit fault tolerance.

Let me give an example: suppose that we want to simulate on a noisy quantum computer a certain bosonic state. The standard view of noisy quantum computers asserts that under certain conditions this can be done up to some error that is described in terms of the computational basis. In contrast, the type of noise we expect amounts to having a mixed state between the intended bosonic state and other bosonic states (which represent the noise).
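Schematically (the notation here is mine, not taken from the paper), with |\psi\rangle the intended bosonic state and p a small noise rate, the two pictures are:

standard noise: \rho = (1-p)|\psi\rangle\langle\psi| + p\sum_i q_i E_i|\psi\rangle\langle\psi|E_i^{\dagger}, where the E_i are errors described in the computational (qubit) basis;

expected noise: \rho = (1-p)|\psi\rangle\langle\psi| + p\sum_j r_j |\phi_j\rangle\langle\phi_j|, where the |\phi_j\rangle are other bosonic states.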

Criticism: A criticism expressed by several readers of an early version of this paper is that no attempt is made to motivate the conjectures from a physical point of view and that the suggestions seem “unphysical.” What can justify the assumption that a given error lasts for a constant fraction of the entire length of the process? If a noisy quantum computer in a highly entangled state has correlated noise between faraway qubits, as we suggest, wouldn’t it allow signaling faster than the speed of light?

I had a somewhat mixed reaction to this criticism. On the one hand, I think it is important, and may be fruitful, to examine various models of noise while putting the physics aside. Nevertheless, I added a brief discussion of some physical aspects. Here it is:


Detrimental Noise

 “Imagine there’s no heaven, it’s easy(?) if you try,”   

John Lennon 

 

Disclaimer: It is a reasonable belief (look here, and here), and an extremely reasonable working assumption (look here), that computationally superior quantum computers can be built.

(This post and the draft will be freely updated.) I am struggling to meet the deadline of a week ago for a chapter regarding adversarial noise models for quantum error correction. (Update Nov 20: here is the draft; comments are welcome. Update April 23: Here is the arxived paper; comments are welcome!) My hypothetical model is called “detrimental.” (This is one reason substantial math postings have been a bit slow recently, but I hope a few will come soon.) This project has been quite central to my research over the last three years, and it often feels like running, over my head, after my own tail that might not be there. So this effort may well be a CDM (“career damaging move”), but I like it nevertheless. It is related to various exciting conceptual and foundational issues.

I do occasionally have a sense of progress (often followed by a backtrack), and for this chapter, rather than describing detrimental noise by various (counterintuitive) properties as I always did, I think I have an honest definition of detrimental noise. Let me tell you about it. (Here is a recent useful guide: a zoo of quantum algorithms, compiled by Stephen Jordan.)

Detrimental noise

Consider a quantum memory with n qubits in a state \rho_0. Suppose that \rho_0 is a tensor product state. The noise affecting the memory in a short time interval can be described by a quantum operation E_0. Let’s suppose that E_0 acts independently on different qubits and that, for qubit i, with some small probability p_i, E_0 changes its state to the maximum entropy state \tau_i.

This is a very simple form of noise that can be regarded as basic to understanding the standard models of noise as well as of detrimental noise.

In the standard model of noise, E_0 describes the noise of the quantum memory regardless of the state \rho stored in the memory. This is a quite natural and indeed expected form of noise.

Detrimental noise corresponds to a scenario in which, when the quantum memory is in a state \rho with \rho = U\rho_0, the noise E will be U E_0 U^{-1}. Such noise is the effect of first applying E_0 to \rho_0 and then applying U to the outcome noiselessly.

Of course, in reality we cannot perform U instantly and noiselessly, and the most we can hope for is that \rho will be the result of a process. The conjecture is that a noisy process leading to \rho will be subject to noise of the form we have just described. A weaker conjecture is that detrimental noise is present in every natural noisy quantum process. I also conjecture that the damaging effects of detrimental noise cannot be canceled or healed by other components of the overall noise.

When we model a noisy quantum system, either by the qubits/gates description or in other ways, we make a distinction between “fresh” errors, which are introduced in a single computer cycle (or infinitesimally, when the evolution is described by a continuous model), and the errors that accumulate along the process. The basic insight of fault-tolerant quantum computing is that if the fresh errors are standard and sufficiently small, then we can make sure that the accumulated errors are as well. The conjecture applies to fresh errors.
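To make the definition concrete, here is a small numerical sketch (my own illustration with made-up parameters, not code from the chapter). It realizes E_0 as independent single-qubit depolarizing noise, takes U to be the standard circuit preparing the two-qubit cat state from |00\rangle, and compares the standard noisy state E_0(\rho) with the detrimental one U E_0(\rho_0) U^{-1}:

```python
import numpy as np

# Pauli matrices and the identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, qubit, p):
    """Apply single-qubit depolarizing noise with rate p to one qubit of a 2-qubit state."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        K = np.kron(P, I2) if qubit == 0 else np.kron(I2, P)
        out = out + (p / 3) * (K @ rho @ K.conj().T)
    return out

def E0(rho, p=0.1):
    """The basic noise E_0: independent depolarizing noise on each qubit."""
    return depolarize(depolarize(rho, 0, p), 1, p)

# U prepares the cat state (|00> + |11>)/sqrt(2) from |00>: Hadamard, then CNOT.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
U = CNOT @ np.kron(H, I2)

rho0 = np.diag([1, 0, 0, 0]).astype(complex)  # the product state |00><00|
rho = U @ rho0 @ U.conj().T                   # the cat state

standard = E0(rho)                        # standard noise: E_0 regardless of the state
detrimental = U @ E0(rho0) @ U.conj().T   # detrimental noise: U E_0 U^{-1} applied to rho

print("distance between the two noisy states:", np.linalg.norm(standard - detrimental))
```

The printed distance is nonzero: the conjugated noise carries the correlations that U creates, whereas the standard noise is oblivious to them.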

(Updated: Nov 19; sorry guys, the blue part was over-simplified and incorrect, but an emergency quantifier replacement seems to have helped; it looks OK now.) The definition of detrimental noise for general quantum systems that we propose is as follows:

A detrimental noise of a quantum system at a state \rho commutes with some non-identity quantum operation which stabilizes \rho.

Note that this description, just like the standard model of noise, does not specify a single noise operation but rather gives an envelope for a family of noise operations.

In the standard model of noise, the envelope {\cal D}_\rho of noise operations when the computer is in state \rho does not depend on \rho. For detrimental noise there is a systematic relation between the envelope of noise operations {\cal D}_\rho and the state \rho of the computer. Namely,

 {\cal D}_{U\rho} = U {\cal D}_\rho U^{-1}.

Why is it detrimental?

Detrimental noise leads to highly correlated errors when the state of the quantum memory is highly entangled. This is quite bad for quantum error correction, but an even more devastating property of detrimental noise is that the notion of “expected number of qubit errors” becomes sharply different from the rate of noise as measured by fidelity or trace distance. Since conjugation by a unitary operator preserves the fidelity metric, the expected number of qubit errors increases linearly with the number of qubits for highly entangled states.
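Here is a back-of-the-envelope illustration (my own, not from the paper). Let U prepare the n-qubit cat state by a Hadamard followed by a chain of CNOTs:

U = \mathrm{CNOT}_{1\to n}\cdots \mathrm{CNOT}_{1\to 2}\,(H\otimes I\otimes\cdots\otimes I), \qquad U|0\cdots 0\rangle = \tfrac{1}{\sqrt{2}}(|0\cdots 0\rangle+|1\cdots 1\rangle).

A single-qubit error Y_1, one component of depolarizing noise on the first qubit, conjugates to

U Y_1 U^{-1} = -\,Y_1 X_2\cdots X_n,

an error touching all n qubits of the cat state. Conjugation leaves the fidelity and trace-distance cost of the noise unchanged, yet the expected number of qubits hit grows from 1 to n.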

Here is another little thing from my paper that I’d like to try on you:

A riddle: Can noise remember the future?

Suppose we plan a process and carry it out up to a small amount of errors. Can there be a systematic relation between the errors at some time and the planned process at a later time?