Tag Archives: Quantum computers

A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers

I gathered a few of the comments made by participants in my lecture “Why quantum computers cannot work and how,” along with a few of my answers. Here they are, together with some of the lecture’s slides. Here is the link to the full presentation.

1) Getting started


Continue reading

Meeting with Aram Harrow, and my Lecture on Why Quantum Computers Cannot Work.

Last Friday, I gave a lecture at the quantum information seminar at MIT entitled “Why quantum computers cannot work and how.” It was a nice event with lovely participation during the talk, and a continued discussion after it. Many very interesting and useful remarks were made. Here are the slides. (The abstract can be found on this page.)

After an almost yearlong debate with Aram Harrow, I finally met Aram in person, and we had a nice time together during my visit.


Aram is so nice that, had it been up to me, I would certainly have made quantum computers possible :) (But this is not up to us, and all we can do is try to explore whether quantum computers are possible or not.)

We talked about quite a few topics, starting with various exotic models of noise that treat classical and quantum information differently, the relevance of locally correctable codes and their quantum counterparts, the sum-of-squares/Lasserre hierarchy, unique games and hypercontractivity, my smoothed Lindblad evolutions, NMR and spin-echo, and quantum annealing and stoquasticity; works by Mossbüer, Rekha Thomas, and Monique Laurent were mentioned.

More

I just returned last night from Yale after a very fruitful visit. Here is a picture of a snowcar, decorated with car mirrors, from the great blizzard.


Symplectic Geometry, Quantization, and Quantum Noise

Over the last two meetings of our HU quantum computation seminar we heard two talks about symplectic geometry and its relations to quantum mechanics and quantum noise.

Yael Karshon: Manifolds, symplectic manifolds, Newtonian mechanics, quantization, and the non-squeezing theorem.


The first informal lecture, by Yael Karshon, gave an introduction to the notions of manifolds and symplectic manifolds. Yael described the connection with Newtonian physics, gave the example of the symplectic structure on phase spaces coming from physics, and talked about dynamics and physical laws. She also gave other examples of symplectic manifolds. The 2-dimensional sphere S^2 is an important example, and complex projective manifolds are also symplectic.
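For readers who want the formal definitions (a standard textbook summary, not part of Yael’s talk): a symplectic manifold is a pair (M,\omega) where \omega is a closed, non-degenerate differential 2-form. On \mathbb{R}^{2n}, with coordinates x_1,\dots,x_n,y_1,\dots,y_n, the standard symplectic form is

\[
\omega_0 \;=\; \sum_{i=1}^{n} dx_i \wedge dy_i ,
\]

and on S^2 any area form serves as a symplectic form, which is how the sphere enters the examples below.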

Quantization

Now, if symplectic geometry describes Newtonian mechanics, how should we move on to describe quantum mechanics? Yael talked a little about quantization, described an impossibility theorem by Groenewold and Van Hove for certain general types of quantization, and mentioned a few ways around it. (More details at the end of this post.)

Non-squeezing, symplectic camels, and uncertainty.

Yael concluded with Gromov’s non-squeezing theorem. This theorem asserts that a ball of radius R cannot be “squeezed” by a symplectic map into a cylinder whose base is a circle of radius r, for R > r. Yael mentioned that Gromov’s non-squeezing theorem can be regarded as a classical manifestation of the uncertainty principle from quantum mechanics. (The Wikipedia article gives a nice description of the theorem, mentions the metaphor of passing a symplectic camel through the eye of a needle, and notes the connection with the uncertainty principle.)
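For concreteness, here is one standard formulation (my wording, not Yael’s slides): with \omega_0 as above, write B^{2n}(R) for the open ball of radius R in \mathbb{R}^{2n} and Z^{2n}(r)=\{(x,y): x_1^2+y_1^2<r^2\} for the cylinder over a disc of radius r. Gromov’s theorem says that a symplectic embedding

\[
B^{2n}(R)\;\hookrightarrow\; Z^{2n}(r) \qquad \text{exists only if } R\le r,
\]

even though volume-preserving embeddings exist for every R (the cylinder has infinite volume).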

Leonid Polterovich: Geometry of Poisson Brackets and Quantum Unsharpness


A week later Leonid Polterovich gave a lecture entitled “Geometry of Poisson brackets and quantum unsharpness.” (Here are the slides. Update: here is a video of a related lecture at QStart.) Leonid reviewed the notion of symplectic manifolds, and gave a vivid explanation, based on areas of parallelograms, of how to think about a symplectic form. Then he defined a displaceable set (a notion introduced by Helmut Hofer) in a symplectic manifold. This is a set X such that you can find a Hamiltonian diffeomorphism so that the image of X is disjoint from X. On S^2 small discs are displaceable but the equator is not!

Poisson brackets and the fiber theorem

Another crucial ingredient of the lecture was the notion of Poisson brackets, which measure the non-commutativity of Hamiltonian flows. Leonid continued to describe the Polterovich-Entov “non-displaceable fiber theorem,” which asserts that if you have a map from a compact symplectic manifold to R^n, described by n real functions whose pairwise Poisson brackets vanish, then there is a point whose pre-image is not displaceable.
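For completeness, in canonical coordinates the Poisson bracket is given by the standard formula

\[
\{f,g\} \;=\; \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\frac{\partial g}{\partial y_i}-\frac{\partial f}{\partial y_i}\frac{\partial g}{\partial x_i}\right),
\]

and the vanishing of \{f,g\} guarantees that the Hamiltonian flows generated by f and g commute.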

Partitions of unity, classical registration, and cell phones.

A partition of unity is a collection of non-negative real-valued functions that sum up to 1. Suppose we are given an open cover \cal U of M by n sets and ask whether there exists a partition of unity 1=f_1+f_2+\dots+f_n such that f_i is supported on U_i and all Poisson brackets {f_i,f_j} vanish. Next comes a theorem of Entov, Polterovich, and Zapolsky which asserts that this is not possible for covers by small sets, namely, if all the sets are displaceable. In fact, the theorem gives a quantitative relation between how small the sets in the cover are and a “measure of non-commutativity” for any partition of unity that respects this open cover. Both the proof and the interpretations of this theorem have strong relations to quantum mechanics. This was the next topic in Leonid’s lecture. Leonid gave us a choice between the ideas of QM playing a role in the proof, and the QM interpretation of the result and its connections to quantum noise. In the end he talked about both topics.

Before going quantum, note that if the sets in the cover represent areas covered by antennas for cell phones, you can ask how to register cell phones to antennas, and regard the partition of unity as a probabilistic recipe: a phone at a point x is registered to antenna i with probability f_i(x).
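Here is a toy sketch of such a probabilistic registration rule (entirely mine, not from the lecture; the coverage functions are made up):

```python
import random

def make_partition_of_unity(coverages):
    """Normalize non-negative coverage functions g_i pointwise so that the
    resulting f_i(x) = g_i(x) / sum_j g_j(x) form a partition of unity
    (we assume the total coverage is positive at every queried point)."""
    def f(i, x):
        total = sum(g(x) for g in coverages)
        return coverages[i](x) / total
    return f

def register(x, coverages, f):
    """Register a phone at location x to antenna i with probability f_i(x)."""
    weights = [f(i, x) for i in range(len(coverages))]
    return random.choices(range(len(coverages)), weights=weights)[0]

# Toy example: three antennas on the real line with overlapping coverage.
coverages = [
    lambda x: max(0.0, 1.0 - abs(x - 0.0)),  # antenna 0, centered at 0
    lambda x: max(0.0, 1.0 - abs(x - 1.0)),  # antenna 1, centered at 1
    lambda x: 0.1,                           # a weak antenna covering everything
]
f = make_partition_of_unity(coverages)
print(register(0.3, coverages, f))  # usually 0, sometimes 1 or 2
```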

Quantum mechanics and the correspondence principle

Enter quantum mechanics. Here is a dictionary:

[Slide: Leonid’s dictionary between classical and quantum notions]

The correspondence principle in physics states that the behavior of systems described by quantum mechanics reproduces classical physics in the appropriate limit (large quantum numbers, or Planck’s constant tending to zero).

POVMs and projector-valued POVMs (von Neumann observables).

Leonid talked about two notions of observables. The more general one is based on positive operator-valued measures (briefly, POVMs). The more special one is von Neumann observables, which are the special case of projector-valued POVMs. POVMs on the quantum side are analogs of partitions of unity on the classical side, and they lead to a sort of “quantum registration.” (This is the familiar probabilistic aspect of quantum measurement.)
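To fix notation (a standard summary, not Leonid’s exact formulation): a discrete POVM on a Hilbert space H is a collection of positive semi-definite operators F_1,\dots,F_n with

\[
F_i \succeq 0, \qquad \sum_{i=1}^{n} F_i = I, \qquad \Pr[\text{outcome } i \mid \rho] = \mathrm{tr}(\rho F_i),
\]

and it is projector-valued (a von Neumann observable) when each F_i is an orthogonal projection, so that F_iF_j=\delta_{ij}F_i.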

Quantum noise, non-commutativity, and the unsharpness principle

Following Ozawa and Busch-Heinonen-Lahti, Leonid described a notion of “systematic quantum noise” and described a lower bound, referred to as an “unsharpness principle” (Janssens 2006, Miyadera-Imai 2008, and Polterovich 2012), on the systematic noise in terms of the non-commutativity of the components of the POVM. Quantum noise, in this context, refers only to measurements, and systematic quantum noise describes the extent to which POVMs are not von Neumann observables.

An unsharpness principle for Berezin-Toeplitz quantization

Leonid next described the unsharpness principle based on the Berezin-Toeplitz quantization of a symplectic manifold. In this case we can talk about a POVM which corresponds to an open cover of the manifold by small sets. The conclusion is that these POVMs cannot be too close to projector-valued POVMs. The proof relies on the theorem of Entov-Polterovich-Zapolsky, the (difficult) correspondence principles for the quantization, and the linear-algebraic unsharpness principle for POVMs. (I will give some more details below.)

Links to Leonid’s papers

Here are links to Leonid’s papers on quantum noise:  Quantum unsharpness and symplectic rigidity and Symplectic geometry of quantum noise.

Grete Hermann


Leonid briefly mentioned in his lecture the name of Grete Hermann. She was a German mathematician and philosopher, famous for her early work on the foundations of quantum mechanics and her early criticism of the no-hidden-variables theorem of John von Neumann. Here is an English translation of her 1935 paper: The foundations of quantum mechanics in the philosophy of nature. At age 26 Hermann finished her Ph.D. in mathematics with Emmy Noether. In the 1930s she participated in an underground movement against the Nazis, and she lived in exile from 1936 until 1946.

Symplecticism and skepticism: possible relations with my work

As some of you may know, I am quite interested in quantum noise and quantum fault-tolerance. My work is about properties and models of quantum noise which do not enable quantum fault-tolerance, which is believed to be crucial for building quantum computers. (Here are slides from a recent talk.) Ultimately, the hope is to use the theory of quantum error-correction and quantum fault-tolerance to describe principles of quantum noise which do not enable computationally superior devices based on quantum physics.

Leonid and I were surprised to discover that we are both interested in quantum noise, and some of the statements we made look linguistically similar. For example, we both talk about a certain “hard-core” noisy kernel that cannot be avoided, and relate it to measures of non-commutativity. Leonid talks about “smearing” while I talk about “smoothing,” etc.

Yet, there are also clear differences. The unsharpness principle talks about noisy measurements and not about noisy approximations to unitary evolutions or to pure states which are what my work is mainly about.

Yet again, there may be ways to reconcile these differences. Ideal quantum computer evolutions induce non-unitary operations on parts of the quantum memory, even when it comes to two qubits. (And I am quite interested in the case of two qubits.) Besides that, quantum fault-tolerance is based fundamentally on non-unitary evolutions. Another interesting question is to see whether my “smoothed Lindblad evolutions,” which are obtained from an arbitrary noisy evolution by adding a certain “smoothing” in time, satisfy the unsharpness principle. And yet another interesting question is to see whether physics models of noisy quantum computers, like the ones proposed by Aharonov-Kitaev-Preskill and by Preskill, obey the unsharpness principle.

It will be interesting to explore connections between Leonid’s work and mine. The notion of a quantum computer abstracts away the geometry and much of the physics and this is also the case for models of noisy quantum computers – both the standard ones and mine. The same is true for classical computers – classical computers can be used to describe classical physics but the laws of physics do not emerge from Turing machines or Boolean circuits. In the quantum and classical cases alike, more structure is needed.  So in order to check my conjectures on the behavior of noisy quantum systems we either need to study very specific proposed implementations of quantum computers or other specific noisy quantum systems, or to consider some middle ground that manifests the physics. Symplectic geometry can give a useful such middle ground.

Other relations with quantum information and quantum computation

Yet again once more, it looks like the connection of symplectic geometry, quantization and noise with the theory of quantum information and computation need not be restricted to my little skeptical corner. Useful ideas and examples can flow in both directions and involve mainstream quantum computation. The symplectic angle is certainly a good thing to be aware of and explore (and for me it will probably be a slow process).

A few more details on quantization in symplectic geometry as explained by Yael Karshon.

The challenge is to associate to every symplectic manifold M a Hilbert space H, and to every real-valued smooth function on M a self-adjoint operator on H, such that

  1.  Linear combinations of functions go to linear combinations of operators,
  2.  Poisson brackets of functions go to  i  times  the commutator of operators,
  3.  the function  1  goes to the identity operator,  and
  4.  a “complete” family of functions passes to a “complete” family of operators.

The axioms (1)-(4) are attributed to Dirac. In (4), a family of functions is “complete” if it separates points almost everywhere and a family of operators is “complete” if it acts irreducibly.  (There may be variations in these definitions. Some people require more, e.g., that the powers of a function be mapped to the powers of the corresponding operator.)
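In symbols, and in one common convention (with ħ set to 1; signs and factors of ħ vary between sources), the correspondence f \mapsto Q(f) of (1)-(3) reads

\[
Q(\alpha f+\beta g)=\alpha\, Q(f)+\beta\, Q(g), \qquad Q(\{f,g\})=i\,[Q(f),Q(g)], \qquad Q(1)=I,
\]

with (4) the completeness requirement described above.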

“No-go theorems” say that there does not exist a correspondence (nontrivial in some sense) that satisfies properties (1)-(4). A “no-go theorem” for M = R^{2n} was given by Groenewold and Van Hove. Later, “no-go theorems” were also found for some compact symplectic manifolds. The 1996 review paper “Obstruction results in quantization theory”, by Gotay, Grundling, and Tuynman, should be relevant.

The first step in Geometric Quantization, which is called Prequantization, more or less achieves (1), (2), (3) but fails at (4). One compromise is to define the correspondence not for all smooth functions, but, rather, to restrict it to a sub-Poisson-algebra of the smooth functions. With this compromise, Geometric quantization can achieve (1)-(4). (But it depends on a choice of a so-called “polarization”.  Often different choices give isomorphic answers but this is only partially understood.)

Other methods of quantization are asymptotic in ħ. This means that the desired correspondence is demonstrated in the limit with respect to some parameter regarded as 1 over Planck’s constant. One of these is the Berezin-Toeplitz quantization that Leonid works with. Another is deformation quantization.

Berezin-Toeplitz quantization, smearing, coherent noise, and quasi-states, as further explained by Leonid Polterovich

Berezin-Toeplitz quantization

Berezin-Toeplitz quantization is a procedure that moves from a symplectic manifold M to a sequence of Hilbert spaces which satisfy the quantization requirements in the limit (as m tends to infinity). Leonid did not explain the construction in the lecture, but later explained to Dorit Aharonov, Guy Kindler, and me how it works in the special case where M is a Cartesian product of n copies of S^2. Very roughly, you associate to each S^2 an m-dimensional Hilbert space and let m tend to infinity. The proofs of the correspondence principle, namely the relation between commutators and Poisson brackets as m tends to infinity, involve deep and difficult mathematics, with certain shortcuts when the symplectic manifold has the extra structure of a “Kähler manifold.” While the main interest in the literature is in the case where m tends to infinity, for us the case m=2 is quite interesting, as it relates unitary evolution on an n-qubit quantum computer with Hamiltonian evolution on a Cartesian product of 2-spheres.
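Schematically, and up to normalization conventions (this is my rough summary of the construction; errors are mine): in the Kähler setting one takes H_m to be a space of holomorphic sections, \Pi_m the orthogonal projection onto it, and sends a function f to the Toeplitz operator T_m(f)=\Pi_m M_f \Pi_m, the compression of multiplication by f. The correspondence principle then takes the form

\[
\big\|\,[T_m(f),T_m(g)]-\tfrac{i}{m}\,T_m(\{f,g\})\,\big\| = O(1/m^2) \qquad (m\to\infty),
\]

so commutators reproduce Poisson brackets in the limit. For a single S^2 the space H_m is m-dimensional (a spin-(m-1)/2 representation), and for a product of n spheres one takes tensor products, so that m=2 indeed gives n qubits.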

Quantum registration, smearing, unsmearing and coherent noise

POVMs can be used to describe a quantum registration rule analogous to the classical probabilistic rule based on a partition of unity. Leonid described a statistical smearing operation on POVMs that increases noise. He asked to what extent we can denoise using unsmearing, and proved that, in certain circumstances, a certain amount of “inherent noise,” persistent under unsmearing, must remain.
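In the linear-algebraic language (my paraphrase of the standard notion), smearing a POVM \{F_i\} by a stochastic matrix (w_{ij}) produces a new POVM

\[
G_j=\sum_i w_{ij}F_i, \qquad w_{ij}\ge 0,\qquad \sum_j w_{ij}=1 ,
\]

which amounts to classical post-processing of the measurement outcomes; “unsmearing” asks when, and to what extent, such a coarse-graining can be undone.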

Quantum and symplectic quasi-states

The lecture briefly mentioned another interesting aspect of the classical-quantum correspondence through symplectic geometry. Gleason proved in 1957 that for Hilbert spaces of dimension 3 or more, every “quasi-state” is a state. (Quasi-states are defined by imposing the linearity requirement only when the observables commute.) Nonlinear symplectic “quasi-states” do exist, and this is related to deep developments in symplectic geometry. Leonid’s talk shows that the quasi-states provided by symplectic geometry vanish on functions with displaceable (“small”) supports. This readily rules out the existence of Poisson-commuting partitions of unity with displaceable supports.
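For reference, my paraphrase of the definition: a quasi-state assigns to every observable A a number \rho(A) such that

\[
\rho(I)=1, \qquad \rho(A)\ge 0 \ \text{ for } A\succeq 0, \qquad \rho(\alpha A+\beta B)=\alpha\rho(A)+\beta\rho(B) \ \text{ whenever } [A,B]=0,
\]

and Gleason’s theorem says that in dimension 3 or more such a \rho is automatically linear on all observables, hence an honest state. Symplectic quasi-states are defined analogously for functions on M, with linearity required only on Poisson-commuting functions.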

More

Happy New Year, everybody!

A planned “Kazhdan’s seminar” spring 2014.

In 2003/2004 David Kazhdan and I ran a seminar on polytopes, toric varieties, and related combinatorics and algebra. (A lot has happened since, and this spring Sam Payne and I will give a course together at Yale on similar topics. There will be a separate post about it.) David and I felt that it is time to run another such event in 2014, perhaps establishing a tradition of a decennial joint seminar. So next spring, the plan is that David, Leonid, and I, and hopefully also Michael and Dorit, will devote one of David’s Sunday seminars to computation, quantumness, symplectic geometry, and information.

A 1965 letter by Richard Palais

In a widely circulated but unpublished letter in 1965, Richard Palais explained the symplectic formulation of Hamiltonian mechanics. See this MO question.

Small and large in mathematics

Different notions of “small” and “large,” and the tension between them, are quite important in many areas of mathematics. Of course, sets of measure zero, sets of first Baire category, sets of lower dimension, and subsets of smaller (infinite) cardinality come to mind. In the symplectic context displaceable sets are small: note that the lower-dimensional equator is symplectically “large,” while caps with positive area are “small.”

Let me remind myself that I should attempt to blog sometime about deep recent results on “null sets” in Euclidean spaces, connected also to combinatorics. (Meanwhile, here are links to two papers by Alberti, Csörnyei, and Preiss: Structure of null sets in the plane and applications, and Differentiability of Lipschitz functions, structure of null sets, and other problems.)

Trivia question

In theoretical computer science, what do the terms “Yaelism” and “Adamology” refer to?

Camels and elephants through the eye of a needle

The term “symplectic camel” refers to a quote of Jesus: “It is easier for a camel to pass through the eye of a needle than for a rich man to enter heaven.” (I learned about it when it was used in Aram Harrow’s reply in one part of our quantum debate. See also this post on The Quantum Pontiff.) A nice discussion of this phrase can be found here. The Jewish analog (from the Talmud) is about passing an elephant through the eye of a needle, which is used to describe making convoluted arguments that have no basis in reality, attributed to the scholars of Pumbeditha. Rabbi Shesheth answered Rabbi Amram: “Maybe you are from the school at Pumbeditha, where they can make an elephant pass through the eye of a needle.”

Greg Kuperberg referred in this context to: Rowan Atkinson – Conservative Conference

“It is easier for a rich man to pass through the eye of a needle than…for a camel to!”

An additional clarification on symplectic camels by Yael

Gromov non-squeezing and the Symplectic Camel Theorem are two different theorems. (The Wikipedia article is unclear about this.) The Symplectic Camel theorem says this. Take R^{2n}, say, with coordinates x_i, y_i. Remove the hyperplane y_n=0 minus a ball of radius r in this hyperplane.

Thus, we get two “rooms” separated by a “wall,” but with a “hole” in the wall. If R<r then a ball of radius R that lies entirely in the “left room” y_n < 0 can be continuously translated through the “hole” into the “right room” y_n > 0. If R>r we cannot move the ball to the “right room” continuously by translations or rigid transformations, but we can move it through volume-preserving transformations (by first squishing it so it becomes long and narrow). The theorem says that we cannot do this symplectically: if R > r then there is no path of symplectic embeddings of the ball of radius R that starts with an embedding into y_n < 0 and ends with an embedding into y_n > 0. (An interesting open question is whether there exists a “compact camel”: a compact symplectic manifold in which the space of symplectic embeddings of a ball of some fixed radius R into the manifold is disconnected.)

The Quantum Debate is Over! (and other Updates)

Quid est noster computationis mundus? (What is our world of computation?)

Nine months after it started (much longer than expected), and after eight posts on GLL (many more than planned) and almost a thousand comments of overall good quality from quite a few participants, my scientific debate with Aram Harrow regarding quantum fault-tolerance is essentially over. Some good food for thought, I hope. As always, there is more to be said on the matter itself, on the debate, and on other “meta” matters, but it is also useful to put it in the background for a while, to continue to think about quantum fault-tolerance at the slow pace and in the solitude I am used to, and also to move ahead on other fronts, which were perhaps neglected a little.

Here are the links to the eight posts: My initial post “Perpetual Motion of The 21st Century?” was followed by three posts by Aram: the first, “Flying Machines of the 21st Century?”; the second, “Nature does not conspire”; and the third, “The Quantum super-PAC.” We then had two additional posts, “Quantum refutations and reproofs” and “Can you hear the shape of a quantum computer?”. Finally we had two conclusion posts: “Quantum repetition” and “Quantum supremacy or quantum control?”

EDP 22-27

We had six new posts on the Erdős Discrepancy Problem on Gowers’s blog (here is the link to the last one, EDP27). Tim contributed a large number of comments, and it was interesting to follow his line of thought. Other participants also contributed a few comments. One nice surprise for me was that the behavior of the hereditary discrepancy for homogeneous arithmetic progressions in {1,2,…,n} was found by Alexander Nikolov and Kunal Talwar. See this post, From discrepancy to privacy and back, and the paper. Noga Alon and I showed that it is \tilde{\Omega}(\sqrt{\log n}) and at most n^{O(1/\log\log n)}, and to my surprise Alexander and Kunal showed that the upper bound is the correct behavior. The argument relies on a connection with differential privacy.

Möbius randomness and computation

After the AC^0 prime number theorem was proved by Ben Green, and the Möbius randomness of all Walsh functions and monotone Boolean functions was proved by Jean Bourgain (see this MO question for details), the next logical step is low-degree polynomials over Z/2Z. (The Walsh functions are degree-1 polynomials.) The simplest case, offered to me by Bourgain, is the case of the Rudin-Shapiro sequence. (But for an ACC(2) result via the Razborov-Smolensky theorem we will need to go all the way to polynomials of polylogarithmic degree.) I asked the question over MathOverflow. After three months of no activity I offered a bounty of 300 of my own MO reputation points. Subsequently, Terry Tao and Ben Green discussed some avenues, and eventually Tao solved the problem (and earned the 300 reputation points). Here is a very recent post on Möbius randomness on Terry Tao’s blog.
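To make the statement concrete, here is a small numerical sketch (my own illustration, not part of the MO discussion). Möbius randomness for the Rudin-Shapiro sequence says that \frac{1}{N}\sum_{n\le N}\mu(n)r(n)\to 0, where r(n) is (-1) raised to the number of occurrences of the pattern “11” in the binary expansion of n:

```python
def mobius_sieve(N):
    """Compute mu(1..N) with a simple sieve."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for k in range(p, N + 1, p):
                if k > p:
                    is_prime[k] = False
                mu[k] *= -1
            for k in range(p * p, N + 1, p * p):
                mu[k] = 0
    return mu

def rudin_shapiro(n):
    """r(n) = (-1)^(number of '11' occurrences in the binary expansion of n)."""
    count, prev = 0, 0
    while n:
        bit = n & 1
        if bit and prev:
            count += 1
        prev = bit
        n >>= 1
    return -1 if count % 2 else 1

N = 10**6
mu = mobius_sieve(N)
corr = sum(mu[n] * rudin_shapiro(n) for n in range(1, N + 1)) / N
print(corr)  # small; Tao's theorem says the normalized sum tends to 0
```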

Influences on large sets

In my post Nati’s Influence I mentioned two old conjectures (Conjecture 1, due to Benny Chor, and Conjecture 2) about influences of large sets on Boolean functions. During Jeff Kahn’s visit to Israel we managed to disprove them both. The disproof is inspired by an old construction of Ajtai and Linial.

Tel Aviv, New Haven, Jerusalem

Last year we lived for a year in Tel Aviv, which was a wonderful experience: especially walking on the beach every day and feeling the different atmosphere of the city. It is so different from my Jerusalem, and still the people speak fluent Hebrew. I am now in New Haven. It is getting cold and the fall colors are getting beautiful. And it also feels like home after all these years. And next week I return to my difficult and beautiful (and beloved) Jerusalem.

The Quantum Fault-Tolerance Debate Updates

In a couple of days, we will resume the debate between Aram Harrow and me regarding the possibility of universal quantum computers and quantum fault-tolerance. The debate takes place on the GLL (Gödel’s Lost Letter and P=NP) blog.

The Debate

Where were we?

My initial post “Perpetual Motion of The 21st Century?” presented my conjectures regarding how noisy quantum computers and noisy quantum evolutions really behave.

Aram’s first post was entitled “Flying Machines of the 21st Century?” It mainly dealt with the question “How is it possible that quantum fault-tolerance is impossible (or really really hard) while classical fault-tolerance is possible (and quite easy)?” Aram claimed that my conjectures, if true, would also exclude classical computers.

Aram’s second post, entitled “Nature does not conspire,” dealt mainly with correlated errors. Aram claimed that it is unreasonable to assume strong correlations of errors, as my conjectures imply, and that the conjectured relation between the signal and the noise is in tension with the linearity of quantum mechanics.

Aram’s third post, “The Quantum super-PAC,” raised two interesting thought experiments and also discussed intermediate models.

Each post ended with a small rejoinder and included a short description of the earlier discussion. The discussion was quite extensive and very interesting.

What’s next

Aram and Steve Flammia wrote an interesting manuscript with appealing counterexamples to my Conjecture C. Our next planned post (it has now appeared) will discuss this matter.

Next, I will conclude with a post discussing Aram’s two main points from his first and second posts and some related issues which I find important.

These posts are mostly written but since Aram was busy with pressing deadlines we waited several weeks before posting them. I also enjoyed the break, as the extensive discussion period was quite tiring.

A very short introduction to my position/approach

1) The crux of matter is noise

Continue reading

A Discussion and a Debate

Heavier-than-air flight of the 21st century?

The very first post on this blog entitled “Combinatorics, Mathematics, Academics, Polemics, …” asked the question “Are mathematical debates possible?” We also had posts devoted to debates and to controversies.

A few days ago, the first post in a discussion between Aram Harrow, a brilliant computer scientist and quantum information researcher (and a decorated debater), and myself on quantum error-correction was launched on Dick Lipton and Ken Regan’s big-league blog, Gödel’s Lost Letter and P=NP.

The central question we would like to discuss is:

Are universal quantum computers based on quantum error-correction possible?

In preparation for the public posts, Ken, Aram, Dick, and I have been having a very fruitful and interesting email discussion on this and related matters, and the post itself has already led to very interesting comments. Ken is doing marvels in editing what we write.

Dick opened the post by talking about perpetual motion machines, which is ingenious because it challenges both sides of the discussion. Perpetual motion turned out to be impossible: will quantum computers enjoy the same fate? On the other hand (and closer to the issue at hand), an argument against quantum mechanics based on the impossibility of perpetual motion, made by none other than Einstein, turned out to be false; are skeptical ideas about quantum computers just as bogus? (The answer could be yes to both questions.) Some people claimed that heavier-than-air flight might be a better analogy. Sure, perhaps it is better.

But, of course, analogies with many human endeavors can be made, and for these endeavors, some went one way, and some went the other way, and for some we don’t know.

Although this event is declared a debate, I would like to think about it as a discussion. On the time scale of such a discussion, what we can hope for is to better understand each other’s positions and, not less important, to better understand our own positions. (Maybe I will comment here about some meta aspects of this developing discussion/debate.)

A real debate

A real emerging debate is whether we (scientists) should boycott Elsevier. I tend to be against such an action, and especially against including refereeing papers for journals published by Elsevier as part of the boycott. I liked, generally speaking, Gowers’s critical post on Elsevier, but the winds of war and the associated rhetoric are not to my liking. The universities are quite powerful, and they deal, negotiate, and struggle with scientific publishers, and other similar bodies, on a regular basis. I tend to think that the community of scientists should not be part of such struggles and that such involvement will harm the community and science. This is a real debate! But it looks almost over. Many scientists joined the boycott and not many opposing opinions were voiced. It looks like we will have a little war and see some action. Exciting, as ever.

Aaronson and Arkhipov’s Result on Hierarchy Collapse


Scott Aaronson gave a thought-provoking lecture in our Theory seminar three weeks ago. (Actually, this was eleven months ago.) The slides are here. The lecture discussed two results regarding the computational power of quantum computers. One result, from this paper, gives oracle evidence that there are problems in BQP outside the polynomial hierarchy. The method is based on “magnification” of results on bounded-depth circuits. (It is related to the Linial-Nisan conjecture.)

The second result, which we are going to discuss in this post (along with some of my provoked thoughts), is a recent result of Scott Aaronson and Alex Arkhipov which asserts that if the power of quantum computers can be simulated by digital computers then the polynomial hierarchy collapses. More precisely, their result asserts that if sampling the probability distributions created by quantum computers (abbreviated QSAMPLE) is in P, then the polynomial hierarchy collapses to its third level.

The beautiful and important paper of Aaronson and Arkhipov is now out. Its main point of view is related to what I describe in the sixth section about “photon machines.” Update: Let me mention related independent results by Michael J. Bremner, Richard Jozsa, and Dan J. Shepherd in the paper entitled “Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy.”
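As a rough one-line reminder of why permanents enter the picture (my paraphrase of a standard fact from the BosonSampling setup): for n single photons sent through an m-mode linear-optical network with unitary U, the probability of a collision-free output configuration S is

\[
\Pr[S] \;=\; |\mathrm{Per}(U_S)|^{2},
\]

where U_S is the n\times n submatrix of U picking out the input columns and the output rows of S. Since the permanent is #P-hard, an efficient exact classical sampler would have implausible complexity-theoretic consequences, and this is the engine behind the hierarchy-collapse argument.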

Here is the plan for this post

1) The polynomial hierarchy and results about hierarchy collapse

2) The Aaronson Arkhipov (AA) theorem and its proof

3) Two problems posed by AA

Those are: does P=BQP already lead to a collapse of the polynomial hierarchy? And does APPROXIMATE-QSAMPLE already lead to a collapse?

4) Does fault tolerance allow QSAMPLE (on the nose)? (Answer: yes)

5) Is there a quantum analog to Aaronson and Arkhipov’s result and what is the computational power of quantum computers?

6) Two competing scenarios

7) Aaronson and Arkhipov photon machines and the complexity class BOSONSAMPLING.

Continue reading