Here are links to a videotaped lecture in two parts, entitled “Why Quantum Computers Cannot Work,” recorded at the Simons Institute for the Theory of Computing in December 2013, and two additional videos: a short talk on topological quantum computers and a twelve-minute overview. Here are the links: Overview, Part I, Part II, Topological QC.

## Why Quantum Computers Cannot Work:

**Overview and Vision.**

## Why Quantum Computers Cannot Work I:

**From the “Trivial Flow” to Smoothed Lindblad Evolutions**

## Why Quantum Computers Cannot Work II:

**Debate, Reasons to Disbelieve, and Experimentation**

## Why Topological Quantum Computers Cannot Work

**The Geometry of Spacetime is Enabled by the Failure of Quantum Fault-Tolerance**

Left: Nick Read. Right: the front page of Nick’s famous 1990 paper with Greg Moore on nonabelions, and below it his email to me from March 2005 on topological quantum computation. (Click for full view.)

Left: the argument regarding topological QC demonstrated via Harris’ famous cartoon. While not strictly needed, I expect the argument to extend from qubits to gates as well. Right: a recent discussion with Nick over Shtetl Optimized (click for full view). **Update**: We are not actually in agreement, contrary to what the above discussion may suggest (see the discussion below).

**Update:** A subsequent post by Steve Flammia, “Quantum computers can work in principle,” over The Quantum Pontiff. (July:) See also this post: “Quantum future” just beyond our grasp.

Added later (April 18): Let me quote from what Steve wrote about the videos: *The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video).* Producing the videos was an interesting and demanding experience, and I was certainly happy to read Steve’s description of the production value. (Of course, the main purpose of Steve’s post was to express his disagreement with the *content* of the videos. See either the post or Co-’s comment below.)

There are also two earlier, one-hour versions of my lecture that were videotaped. The first was filmed by Jesus De Loera in Davis, with a very interesting shooting angle and interesting comments by Greg Kuperberg, Bruno Nachtergaele, and other participants. The second was filmed in Seattle at a UW-PIMS colloquium lecture, again with interesting questions from several participants, including James Lee and John Sidles.

(July:) The Simons Institute (almost identical) versions of the videos are now linked from the web page of my November 15 lecture at SI.

Beginners like me, who need help with some of the vocabulary used here, such as “anyon,” can look at:

http://www.quantiki.org/wiki/Anyons

or

http://en.wikipedia.org/wiki/Anyon

Thanks, Joe! Anyons are indeed fascinating objects related to deep mathematics. For the connection to quantum computing see, for example, this review paper by Chetan Nayak, Steven H. Simon, Ady Stern, Michael Freedman, and Sankar Das Sarma.

My fourth video describes an argument for why we cannot expect anyonic qubits to be more stable than ordinary qubits. I think that this argument extends to anyonic gates as well. It would be interesting to consider in detail proposed implementations like those in the review paper above.

This argument does not say anything about the fascinating question of whether nonabelions are realistic, but the second part of the fourth video may be related to this question. There is some hope that one can show that creating (low-entropy) nonabelions requires quantum fault-tolerance. Another thing that I am interested to know is to what extent the noise-sensitivity argument for bosonic (and fermionic) states from the second long video (Part VI) can be extended to anyons. The difficulty is that we do not have analogs of “decomposable” states as in the case of exterior and symmetric algebras.

Gil, I have posted the following appreciation of your (wonderful) lectures to the *Quantum Pontiff* weblog of Dave Bacon, Charlie Bennett, Steve Flammia, and Aram Harrow. There is plenty of room for *ALL* of you to be right (as it seems to me). Good!

————–

**It All Turns on Naturality (Part I of II)**

Steve, please let me make the case (to students especially) that *rational* grounds exist to disagree utterly with all six of the above statements. Moreover, please let me assert (more broadly) that it is neither necessary, nor feasible, nor even desirable, that everyone think alike in regard to *any* of these six statements. And finally — and most difficult of all — please let me make this case *without* personalization, sardonicism, rancor, or cherry-picking. Because we’re talking about *ideas*, not individuals, right?

**Warmup**

A recommended four-part counter-sardonic warmup is: first, listen to the Van Cliburn Foundation’s playlist “2013 Cliburn Competition Competitors” (queue up all 30 videos, and reflect upon the sobering statistical realities of excelling as an artist/mathematician/scientist/engineer on a planet of 10^10 people). Second, listen to Wendell Berry’s 2012 Jefferson Lecture “It All Turns on Affection” (reflecting soberly upon the “marketplace of ideas”). Third, (re)read Clay Shirky’s much-discussed “The End of Higher Education’s Golden Age” (reflecting soberly upon how “Golden Ages” are created). Fourth, and most important, (re)read Arthur Jaffe and Frank Quinn’s sardonic and sometimes-personal “‘Theoretical Mathematics’: Toward a Cultural Synthesis of Mathematics and Theoretical Physics” (arXiv:math/9307227), and Bill Thurston’s straightforwardly impersonal response “On Proof and Progress in Mathematics” (arXiv:math/9404236), reflecting soberly that all of the Jaffe/Quinn/Thurston issues are in play in the context of modern complexity theory and/or quantum computing.

These great works (Bill Thurston’s especially) set the stage for us to appreciate the concluding remarks of Gil’s video lecture (minute 13:16 of “Why Topological Quantum Computers Cannot Work”)

**In a Nutshell**

Gil Kalai brings the generous & vital spirit of Bill Thurston to discourse regarding quantum computing and complexity theory … and for this service we can all be grateful to Gil!

—–

**Part II** (to follow)

From an engineering perspective, the formal elements of Gil’s “Smoothed Lindbladian” models *already* are widely and successfully applied … and these applications will be the subject of Part II. For Part II of “It All Turns on Naturality” — which surveys concrete realizations in varietal dynamical systems of the Kalai Conjectures — see (briefly) the above *Quantum Pontiff* comments, or, at greater length, our Soldier Healing Seminar’s “Green Sheet” notes for 2013.

Please let me express (again) my profound appreciation and sincere thanks — without regard to differences of opinion, and especially on behalf of medical researchers — to *all* who have participated so creatively in the Kalai/Harrow discourse. This thanks is extended particularly to Aram Harrow, Gil Kalai, Steve Flammia, Dave Bacon, John Preskill, Scott Aaronson, Dick Lipton, Ken Regan, and anyone else whose name(s) *should* have occurred to me. Thank you all!

As a supplement to Part II, there is now a brief historical survey of Alexander Grothendieck’s enigmatic-yet-celebrated question (celebrated among mathematicians and historians, anyway): “What is a meter?”

Gil,

Thanks for this unexpected publicity!

I’d like to clarify/follow up on my earlier remarks about topological quantum computation (TQC) with nonabelian anyons. The concern was that, while it seems we both agree that time evolution of well-separated anyons should be inherently robust against on-going errors (because coupling to the environment is through local operators only), the initialization step for the anyons would involve them not being well-separated for a short time, and during that time they would be potentially vulnerable to errors. Then time evolution of such faulty initial states would of course give poor results. Then it may seem that TQC would not have intrinsic fault tolerance, and would be no better in principle than ordinary QC methods using qubits and gates. I’ll admit to worrying over this briefly.

But there are also methods for preparing the initial state of nonabelions that are themselves protected against errors in principle. Usually, these involve measurements. Suppose, for example, that we use the popular model, in which a pair of nonabelions has two states, which I will call 0 and 1. For computation we want all pairs to be 0 initially. After creating pairs of these anyons, we make the members of each pair well-separated, so now we agree that, whatever their state is (a superposition of 0 and 1), it is robust against errors. Then we make a measurement on each pair, with result 0 or 1 for each. This can be done by passing further nonabelions around a pair, far away, and using interferometry, and is again robust against errors (indeed, the measurement can be done repeatedly as a check). This projects each pair to either 0 or 1. Then, even if there is some chance that during preparation of a pair we get 1 instead of 0, we can use more pairs than we need for computation, and discard the ones in the 1 state so that we have enough 0s for our computation. This can be done with error as small as you wish and only polynomial time overhead. (Thanks to Steve Simon for help with this argument; Steve is one of the authors of the review cited by Gil above, where more information about these kinds of measurements can be obtained.)
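Nick’s discard-and-keep initialization can be caricatured in a few lines (a sketch of my own, not from the discussion: I assume pair creation yields state 1 with probability `q`, and I treat the topologically protected measurement as perfectly reliable):

```python
import random

# Sketch of measurement-based initialization of anyonic pairs:
# create more pairs than needed, measure each pair (assumed reliable,
# since the interferometric measurement is topologically protected),
# and discard any pair found in state 1.

def prepare_zero_pairs(n_needed: int, q: float, rng: random.Random) -> int:
    """Return how many pairs had to be created to obtain n_needed pairs
    measured in state 0, when creation yields state 1 with probability q."""
    created = kept = 0
    while kept < n_needed:
        created += 1
        if rng.random() >= q:   # pair measured as 0: keep it
            kept += 1
    return created

rng = random.Random(0)
print(prepare_zero_pairs(1000, 0.2, rng))  # roughly 1000 / (1 - 0.2) = 1250
```

The overhead is a constant factor of about 1/(1 − q), which is the “only polynomial time overhead” in Nick’s argument.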

So you see that there is much more here that you will need to explain away in order to refute the mere possibility of TQC!

Nick

Dear Nick,

Many thanks for your comment. It is great to host you on my blog, and in accordance with my policy that every bit of reputation gained on my blog (e.g., by the participation of famous people) must be wasted here too, your comment means that, as in similar cases, I will have to share with the readers an entertaining yet embarrassing story (like this one or this one or this one or this one) or something like that.

I will think more about your comment before referring to details (and I think I owe answers to a few points you raised in Scott’s blog). Let me just mention now that my main objective is not to refute the mere possibility of QC or TQC, but to offer the absence of quantum fault-tolerance as a powerful tool to explain our (quantum) reality.

“… my main objective is not to refute the mere possibility of QC or TQC, but to offer the absence of quantum fault-tolerance as a powerful tool to explain our (quantum) reality.” Is it likely that nature has an absence of quantum fault-tolerance if and only if Bell’s theorem is false? Who might be some of the main physicists who believe that Bell’s theorem is false? Have there been any collaborations between students of Gerard ‘t Hooft and students of Gil Kalai?

Nick,

“So you see that there is much more here that you will need to explain away in order to refute the mere possibility of TQC!”

This is what I wrote in the last post of our debate on this matter:

So, as you can see from this quote, I agree that there is much more to be explained. I may have some reservations regarding what “you” should mean. The way I see it, it should mainly apply to the TQC people themselves – they should be the chief skeptics of the endeavor.

My argument, in general, is this: you look, on the one side, at the experimental process you propose for creating your anyonic qubits and performing gates on them (this corresponds to “experimental miracles occur” in the pair of pictures above), and you also look at a (noisy) quantum computer that simulates your experimental process.

If you can achieve noiseless computation on the experimental side then you should be able to demonstrate it on the noisy computer side. If the evolution on the experimental side does not enact (the full complicated apparatus of) quantum fault-tolerance then you should be able to achieve noiseless quantum computation on the noisy computer side without the complicated process of quantum fault-tolerance as well.

This argument suggests that if you look at concrete proposed implementations from the Nayak et al. paper, you will discover that the qubits involve mixtures of logical errors and the gates are imperfect, so they involve a mixture of the desired outcome with a cloud of logical errors. But, as I said in the debate, this is something yet to be elaborated for detailed specific implementations. Regarding Simon’s measurement-based “purifying” of anyonic qubits: this is a sort of simple form of (partial) quantum error-correction, and I don’t see what is special about anyonic qubits in this proposal.

Gil

Gil,

Earlier you agreed that the evolution of well-separated nonabelions will not generate errors (i.e., the gates are not imperfect), so you should be addressing the initialization question. The measurement-based purification is not error correction at all, but initialization. Of course it is used in other implementations (say, with qubits). What is different here is that we use the phase picked up when a nonabelion passes around a pair at some large distance, which you agreed is topologically protected. The phase is measured by interferometry, and this can be made as reliable as one wishes, it seems.

As for your argument that it should be possible to simulate the whole thing on a noisy QC, and hence if the latter does not work, neither can the former: this seems still to be question begging (as both Scott and I said before). It is logically possible (that is, playing devil’s advocate) that the conventional noisy QC fails to work, and the TQC does work. If so it just shows that the TQC does a better job. (Then you should use the TQC in place of the other implementation when simulating a universal QC.) It’s also not clear why the success of the topological implementation (without any extra apparatus) means it must be possible to simulate it on a conventional QC without the need for error correction there either. The two are simply different. Of course, it is also possible that both work, and that you are wrong.

Obviously I’m not claiming that we (on the topological side) already know all about how to implement the concept in every detail; only that the original idea that initializing will necessarily be vulnerable to errors, and so will need additional fault tolerance/error correction, is not correct, at least not in the form in which it was envisioned.

Nick

Nick,

“It’s also not clear why the success of the topological implementation (without any extra apparatus) means it must be possible to simulate it on a conventional QC without the need for error correction there either.”

As I see it, indeed the essence of my argument (and all that is needed) is:

This looks correct to me and I will be happy to clarify it either on the abstract level or (but this will require more effort) by looking carefully at a specific implementation.

If (*) is correct then I expect that this implies that the computation with topological qubits and gates does not perform better than a computation with ordinary noisy qubits and gates *without* quantum fault-tolerance. (And, yes, this should apply to qubits and to gates.)

Gil

Dear Nick: Here is something that you wrote on Shtetl Optimized that I did not answer. In the heat of the discussion it was delayed, and your nonabelions took higher priority, so let me reply over here on this cozy blog. Of course, I talk about it amply in the videos and in my debate with Aram, but you are surely entitled to a personal answer and a fresh thought.

I did give several reasons to prefer my convoluted noise models over the possibility of quantum supremacy and quantum fault-tolerance; this is the beginning of “Part VII” of my talk, in video 2, 28:00–35:00. I also do not agree with the “not otherwise observed” part: what we observe in nature is that approximations/noise respect the internal structure and symmetries of the “signal,” and this actually goes along with my point of view, and against the view of noise based on the “computational basis.” I do agree with the “not otherwise expected” part. Right now Aram and I (talking with some colleagues) are trying to see what the standard Hamiltonian noise models tell us about the lifetime and decay behavior of a single superconducting qubit. (Indeed, Aram expects that Hamiltonian models like those of this paper by Preskill will give the correct picture, and I disagree.) I also explained in several places why and how these “mysterious” correlated errors arise, and why they are less mysterious than they seem; e.g., video 1, 56:00 until the end.

As a matter of fact, not giving up the extended Church-Turing thesis was not at the top of my list of reasons to disbelieve quantum computers (it is the last of seven items on my list). After so many years of hearing about quantum computing and post-classical computation and cryptography, I also felt a little blasé about it. So why not give up the extended Church-Turing thesis? Let me quote Seth Lloyd (as quoted in my overview):

What Seth says is *absolutely correct*, and it is simply a more vivid way to say what giving up the extended Church-Turing hypothesis means. Now, we do have non-conventional weapons, but we cannot really expect a non-conventional bomb with more power than a conventional bomb the size of the entire universe, right? And we cannot hope for a machine with more horsepower than the number of horses that can be packed into the entire universe. When was the last time we witnessed human-made changes on the scale of the accumulated impact of the entire universe? So, while not impossible, I do not think that giving up the extended Church-Turing thesis is something to be taken lightly. (In fact, I myself took it too lightly over the years.)

I can try answering this last objection of Gil’s. Let’s look at New London, CT and Orient Point, NY (on Long Island). Suppose you wanted to drive between them. Google Maps says that driving between them takes three hours and forty minutes, and the average speed is slightly faster than 50 miles an hour, probably somewhat unrealistic considering you have to drive by New York City. But there’s another way to get there! You could take the ferry. Since this only takes one hour and twenty minutes, it clearly goes 2.75 times as fast, so it must be going well over 130 miles per hour.

I’ve been on that ferry, and it sure didn’t seem we were going that fast.

I hope everybody reading this has seen the flaw in my reasoning.

But this is also the flaw in Gil Kalai’s reasoning—the difference between classical algorithms and quantum algorithms is that quantum algorithms take a different route to the answer—a route that is not accessible to classical computers. You don’t expect all classical algorithms to take the same number of steps to reach the solution—the number field sieve factors numbers much faster than trial division does. Why do you believe that quantum algorithms, which take a different route, cannot solve some problems faster than classical algorithms can?

Dear Peter,

Thanks, as always, for your comment.

Of course, we agree that a different route can lead to more power. In my comment, I gave the example of nuclear weapons, where we gain five or six orders of magnitude compared to conventional explosives, and you gave an example where we gain a factor of 2.75 by taking a ferry. My remark was that proposing a different route which makes a difference whose magnitude is correctly described by Seth in terms of the size of the entire universe is more dramatic, and hence less plausible, and, while not impossible, should not be taken lightly at all.

We also share the belief that quantum algorithms can solve problems much faster than classical algorithms, but I expect that realistic noise modeling does not support the abstract model of quantum algorithms. On the physics side, the way I see it is as follows: there is an important concept of locality which is crucial in describing our quantum reality and comes on top of the very basic framework of QM. The crucial question regarding QC is how to model errors and approximations. The ordinary models impose locality on the errors and thus allow fantastic non-local states to be achieved for the computation. My proposal is to have a noise model where locality emerges from the model itself, both for the nature of the errors and for the nature of the computational outcomes.

On a lighter note, I had a quantum-related experience similar to your Google-Maps story. I tried to invite Michel Devoret in person to QSTART and checked with Google Maps how to reach the address of his office from my office at Yale. Looking at the map itself I then realized that our offices are essentially in the same building which gave a different route with substantial speed-up compared to the Google Maps suggestion.

I greatly admire Michel and Rob Schoelkopf’s amazing progress on superconducting qubits (and other things), but I regard it as implausible that their efforts will lead to a new, more powerful route for sampling permanents of a 50 x 100 matrix or for factoring 2000-digit integers. It is much more likely, in my opinion, that their work will be part of a new route toward better understanding some aspects of quantum physics.

For the largest numbers factored on a conventional computer, the speed-up from trial division to number field sieve is already on the order of the computational power of the entire universe. So if you judge the quantum algorithm speed-up by algorithmic standards, it’s not that big. On the other hand, if you judge it by order-of-magnitude improvement of our control of physical phenomena, it’s enormous.
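Peter’s order-of-magnitude claim can be checked with back-of-the-envelope arithmetic (a sketch; the function names are mine, the NFS constant is the standard heuristic one, and the counts are only rough):

```python
import math

# Rough operation counts for factoring a 768-bit modulus (RSA-768, the
# largest conventional factoring record, done with the number field sieve
# in 2009): trial division needs ~sqrt(n) divisions, while the NFS
# heuristically costs exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).

def log10_trial_division(bits: int) -> float:
    """log10 of ~sqrt(n) trial divisions for a `bits`-bit modulus n."""
    ln_n = bits * math.log(2)
    return (ln_n / 2) / math.log(10)

def log10_nfs(bits: int) -> float:
    """log10 of the heuristic number-field-sieve cost for a `bits`-bit modulus."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(10)

bits = 768
print(round(log10_trial_division(bits)))  # ≈ 116 orders of magnitude
print(round(log10_nfs(bits)))             # ≈ 23 orders of magnitude
```

The gap between the two counts is roughly ninety orders of magnitude, which is indeed comparable to popular estimates of the total computational capacity of the observable universe (e.g., Seth Lloyd’s estimate of ~10^120 elementary operations), supporting Peter’s point.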

Dear Gil,

Your idea about limited horsepower sounds reasonable to me, but how is it related to fault tolerance? Let’s consider a thought experiment: someone has a working quantum computer, but after a factoring test the results are encoded using a standard method with the addition of random numbers. So nobody may use the results (without knowledge of the random numbers), but we may not say that the addition of the noise reduces the power of the quantum computer itself.

Why may the fault-tolerance problems not be considered as a similar example, where Nature is working with huge horsepower, but we simply may not use or control it? So, I could suppose that fault tolerance and the limited-horsepower idea (aka the ECT?) are two different problems.

Dear Alexander, your proposal — that nature works at much higher “horsepower” that we simply cannot “use” or “control” (or even give direct evidence of) — seems implausible to me (but not impossible).

Gil, sorry if I was unclear, but I wrote about a different thing: it is not clear to me how a failure of quantum computing *due to noise* may restore the ECT. It is not even a proposal; I supposed that my thought experiment raised such a suggestion to the status of some “lemma.” Let me explain the idea of my thought experiment better:

I have a system, quantum computer + noise, and so it is not less powerful than the quantum computer alone. So if the quantum computer disproves the ECT, some *additional* effect may not restore it, because for modeling such a system we must (1) model the quantum computer and (2) model the noise. I only need to add that this is rather an intensification of the ECT adapted to the “horsepower” idea, because some weak (“dishonest”) version of the ECT might, in such a case, accept the model with noise without modeling the underlying process, on the claim that it is not possible to distinguish the result without knowledge of the noise. But such a “dishonest” ECT may not work with the horsepower idea, because in that case, for estimation of the horsepower, you must model all processes.

Physically speaking, Hilbertism’s appreciation of TQC is substantially grounded in the (postulated) invariant reproducibility and unbounded accuracy of the quantum metrology triangle (QMT, a.k.a. Ohm’s law V = IR, with V, I, and R measured independently), and in particular on the topologically robust quantum coherence that is associated to both the measurement of V (via the inverse AC Josephson effect) and the measurement of R (via the integer quantum Hall effect (IQHE)).

There are at least three grounds for humility in regard to whether the QMT’s success (to date) assures the future success of TQC:

• **Reflection I** Our present understanding of noise mechanisms generally in QMTs, and particularly in the IQHE, is very far from comprehensive (e.g., per the Weiss and von Klitzing work cited below).

• **Reflection II** It is not presently known whether an entirely satisfactory simulation of QMTs in general (and IQHE measurement processes in particular) might perhaps be achievable with PTIME computational resources, within the restricted domain of varietal dynamics (regarded as a formal model of the Kalai Postulates and, compatibly, a *restriction* of Hilbertism to Dirac-Maxwell dynamics).

• **Reflection III** Strenuous experimental efforts to observe nonabelian anyon dynamics have not succeeded to date, even for the simplest systems (i.e., one or two anyons, although, to be sure, some tantalizing hints *are* seen).

——————

In light of these reflections, here is an engineer’s appreciation of the Kalai/Harrow debate (which henceforth is depersonalized by calling it the “Hilbertism/Arnol’dism” debate):

**Hilbertism asserts**: “Quantum computing has left the Extended Church-Turing Thesis hanging by a fingernail”; and so we can confidently foresee that TQT (or similar advances in quantum computing) will “force Gil Kalai to admit that he was wrong.”

**Arnol’dism asserts**: *La marée étale* (“the slack tide”) of 20th-century Hilbert-space dynamics has already given way to a 21st-century *marée montante* (“flood tide”) of varietal dynamics; first QMTs, and then TQTs — and eventually (we may hope) *all* of computational dynamics — are destined to be immersed in the rising tide of our ever-deepening mathematical understanding.

**Note**: The name “Arnol’dism” is proposed, first, to honor Vladimir Arnol’d’s creative lifetime achievements, which spanned multiple domains of mathematics, physics, engineering (and even “Pushkinistics”), and second, to honor Arnol’d’s maxim: “It is not shameful to be a mathematician, but it is shameful to be *only* a mathematician.” Arnol’d’s maxim directs our reflections toward …

**Grothendieck’s Wager**

Supposing that decades from now (or even centuries from now) Hilbertism is proven to be entirely correct (e.g., by the experimental demonstration of large-scale fully-programmable FTQCs), such that Arnol’dism is entirely discredited, then nonetheless *it is rational to ardently embrace Arnol’dism’s marée montante* — at least during the next few decades of the 21st century — on the pragmatic grounds that an ever-hotter planet, with ten billion hominids upon it, can scarcely afford to deprecate the ever-improving Arnol’dian simulation technologies upon which the 21st century *already* relies to supply its *marée montante* of absolute requirements for food, clothing, shelter, and medical advances. To say nothing of the already-urgent and ever-mounting need to create family-supporting jobs that supply these needs.

**Conclusion**

Even if Hilbertism is true, our generation of mathematicians, engineers, and scientists is well-advised to embrace Arnol’dism wholeheartedly, so that *la marée montante* of our mathematical understanding — and, in consequence, *la marée montante* of our PTIME simulation capabilities — rises as high as feasible, as fast as feasible, for as long as feasible.

Dear John, thanks for your comments. Overall I think that while it is useful to pose an alternative to the perception that QCs are possible, it is premature to try to convince believers to change their minds, and, in any case, there are experimental efforts that may shed light on the matter in the next years. Scott’s quote (2) refers to BosonSampling, which is interesting but something of an offshoot of mainstream QC anyway. For BosonSampling I do think that noise sensitivity gives a clear explanation for why realistic BosonSampling leads to a flat distribution (and is thus computationally trivial), and hence experimentally the observed distribution will have no correlation with the distribution expected from permanents when the number of bosons grows. Because of the strong noise-sensitivity effect, I expect that this will be shown for a handful of bosons. So, on this matter, I do expect Scott (and other believers in the possibility of using BosonSampling to support “quantum supremacy”) to change their minds in the months to come.

**Update** (March 27): What we expect now (see this comment) is that the noisy distribution will *not* be flat, but will still be bounded away from the expected distribution even when the noise is small.

I often explain it in terms of the difference between two multiplication algorithms.

One is the first one that kids learn (17 x 11 = 17 + 17 + … + 17), which takes a number of steps exponential in the number of digits, and the other is the O(n^2) schoolbook method we learn a little later.
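The contrast can be made concrete with a toy step count (a sketch; the two function names below are my own):

```python
# Two ways to multiply: repeated addition scales with the *value* of the
# multiplier (exponentially in its digit count), while the schoolbook
# method scales with the square of the digit count.

def repeated_addition_steps(b: int) -> int:
    """Additions needed to compute a * b as a + a + ... + a (b copies of a)."""
    return b - 1

def schoolbook_steps(a: int, b: int) -> int:
    """Rough count of single-digit multiplications in the schoolbook method."""
    return len(str(a)) * len(str(b))

for digits in (2, 4, 8):
    b = 10 ** digits - 1  # largest number with this many digits
    print(digits, repeated_addition_steps(b), schoolbook_steps(b, b))
    # e.g. at 8 digits: 99,999,998 additions versus only 64 digit products
```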

I agree that the failure of the extended Church-Turing thesis is a little surprising. But I think it is _less_ surprising than Bell inequalities, the Aharonov-Bohm effect, the uncertainty principle, and tunneling. These basic features of quantum mechanics are pretty uncontroversial. But once you accept that they are the building blocks of our world, fast factoring algorithms seem less outrageous.

See Leonid Levin’s comments here: https://groups.google.com/forum/m/#!msg/sci.physics.research/GE5cz3xefCc/e0eh34MZGdwJ

He nails it.

Prof. Kalai: ‘t Hooft claims that the one testable prediction of his theory of superstring determinism is that quantum computers won’t work well enough to be useful. Do you agree that ‘t Hooft’s theory really does make this prediction? Also I have attempted to convince string theorists of the following claim: The main problem with string theory is that string theorists fail to realize that Milgrom is the Kepler of contemporary cosmology. If Milgrom’s acceleration law is wrong, then what is the explanation for the flyby anomaly?

Does the Fernández-Rañada-Milgrom Effect Explain the Flyby Anomaly?

Is it likely that a quantum theory of gravity would imply that quantum computers won’t work?

Gil:

As I understand your “why topological quantum computers cannot work”, your argument is that topological quantum computers could be simulated in the gate model, and we know the gate model cannot work, so topological quantum computers cannot work.

It seems that this misses the point: anyons are a physical phenomenon, and you should address how the failure of topological quantum computation would be reflected in the physical phenomenon of anyons. What does this mean for materials containing nonabelian anyons? It seems there are several possibilities. Some of these (I’m not giving an exhaustive list) are:

• anyons cannot actually exist.

• anyons do exist, but cannot have the properties (predicted by Chern-Simons theory) that they require for universal computation.

• anyons do exist, but interact with each other even when they are very far apart, and thus do not sustain quantum computation.

• anyons do exist, but any material that permits anyons must permit spontaneous pair creation of anyons at a rate incompatible with its use for quantum computation.

• anyons do exist, and they behave exactly as predicted by the physics theories except when you try to use them for quantum computation, and then mysterious errors arise which make the quantum computation fail.

Progress on creating anyonic materials is proceeding at such a rate that within a decade or so I expect people may be able to control these materials reasonably well. If you want physicists to believe you have a real theory, you should predict exactly which of these possibilities is the way in which topological quantum computation will fail.

Hi Peter,

Not quite, but let me rephrase my argument using your wording:

I would word it as follows:

“… topological quantum computers could be simulated in the gate model, and we know that the gate model cannot work without the quantum fault-tolerance apparatus, so topological quantum computers cannot work.”

The argument is more specific than that: we have a picture of what will happen for a quantum computer with the qubit/gate architecture when quantum fault-tolerance is not applied and errors accumulate. For example, when you create a quantum error-correcting code there will be a substantial amount of logical errors. Now, suppose that (1) you have a quantum computer which does not apply quantum fault-tolerance, (2) you have a quantum algorithm that creates a surface code, and (3) you succeed in lowering the qubit/gate noise so that the target state is close enough to the intended state. Then the state you will achieve is a mixed cloud of codewords around the intended codeword (in addition to some errors in the computational basis).
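The claim that encoding without a fault-tolerance apparatus leaves a substantial mixture of logical errors can be illustrated with a toy classical sketch (the model, rates, and names here are mine, not from the discussion): a flip of the seed bit before the fan-out propagates to the whole codeword, so the decoded state is a mixture containing the wrong codeword at roughly the physical error rate.

```python
import random

random.seed(7)
eps, trials = 0.05, 100_000

def noisy_encode():
    """Encode logical 0 into the 3-bit repetition code using noisy
    operations and no fault tolerance: prepare a seed bit (it flips with
    probability eps), then fan it out with two noisy copy gates (each
    target flips independently with probability eps)."""
    seed = 1 if random.random() < eps else 0
    copies = [seed ^ (1 if random.random() < eps else 0) for _ in range(2)]
    return [seed] + copies

# Majority decoding maps each noisy word to the nearest codeword;
# a decoded 1 is a logical error (a "wrong codeword" in the cloud).
logical_errors = sum(sum(noisy_encode()) >= 2 for _ in range(trials))
print(f"physical error rate per operation: {eps}")
print(f"logical error rate after encoding: {logical_errors / trials:.3f}")
```

In this sketch the logical error rate comes out comparable to the physical rate rather than suppressed below it; suppressing it is exactly what the fault-tolerance apparatus is for.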

So what I say is that the experimental process to create an anyonic qubit will behave just like a quantum algorithm that you run on such a quantum computer.

Again, the crucial point is *not* that this qubit/gate computer cannot work, but that (without quantum fault-tolerance) when such a qubit/gate computer *does* work we can tell what the nature of the quantum states it gives will be. Let’s see if we can agree on this point and then I will be happy to think about the further points that you have raised.

Hi Gil,

The quantum state of any material which allows anyons is already a quantum error-correcting code, so there’s no reason we shouldn’t be able to use the gate model without fault-tolerance to simulate this. I suppose initializing the quantum state of the anyons might be a problem, but Nick has already explained how to do that.

Hi Peter, Just before we move to “the material that allows anyons”, is my argument above clearer?

“The quantum state of any material which allows anyons is already a quantum error-correcting code,”

I am not sure what precisely you mean by the “material which allows anyons.”

In any case, for me, when you talk about “a quantum error-correcting code” the main question is how a non-zero-entropy element of this “quantum error-correcting code” behaves.

Do we have:

A) a mixture of codewords or

B) a delta function in the Hilbert space of codewords

(both perhaps + some “local noise”)

“I suppose initializing the quantum state of the anyons might be a problem, but Nick has already explained how to do that.”

The argument that I outlined above suggests that whichever way you go about it in your laboratory, you will get noisy quantum error-correcting codes of type A (and not of type B). This does not apply just to “initialization” but also to any intermediate state. So it will also apply to an attempt to experimentally implement Nick’s suggestion.

Hi Gil,

The two options you are giving me are completely inadequate to choose between.

Of course, we will always get a mixture of codewords. The question is, can we reduce this mixture so it occupies an exponentially small volume with polynomial overhead? That’s the point of quantum error correction.

Peter, I should have made myself clearer (this is something I repeat again and again in my lectures). When I say “mixture” I mean that there will be a substantial mixture of logical errors. My entire point is that you cannot substantially improve the quality of the “logical qubits” compared to the “physical qubits”: this is what you cannot do with a noisy quantum computer without the quantum fault-tolerance apparatus, and this is what I expect you also cannot do in “ordinary” experimental processes that do not involve quantum fault-tolerance.

[Again, if the logical errors are negligible then, for me, it is option B.]

Hi Gil,

For topological quantum computation, the idea is that you *can* improve the ratio of logical qubits to physical qubits by (1) doing a better job of initialization and (2) keeping the anyons further apart. The reason that this doesn’t introduce any paradoxes when you simulate it in the quantum gate model is that Hamiltonians that give rise to anyons have quantum fault-tolerant codes hidden inside of them.

This request for clarification — asked of a first-rank physicist by a first-rank mathematician — calls to mind Vladimir Arnol’d’s lament “Every mathematician knows that it is impossible to understand any elementary course in thermodynamics.”

Similarly, from a mathematician’s point-of-view — and equivalently, from a computational simulationist’s point-of-view — what is “impossible to understand” about anyonic dynamics is *NOT* the topologically protected dynamical phases that are associated to braided trajectories of anyons, but rather, the mechanisms by which coarse-grained anyonic dynamics (by postulate) arises from fine-grained Maxwell-Dirac dynamics, and is (postulated to be) robust with respect to the renormalization dynamics associated to edges, sensors, defects, and vacua.

Survey articles like Elliott Lieb’s “Quantum Mechanics, the Stability of Matter and Quantum Electrodynamics” (2004, arXiv:math-ph/0401004) can assist mathematicians toward an appreciation of these issues.

These considerations remind us that the restriction of quantum dynamics to QED flows can be problematical for mathematicians and experimentalists alike.

In particular, QED dynamics compels us to accept renormalization effects and long-range large-gauge interactions — as manifest in flux tubes & image charges, for example — that can be severely problematic for quantum computation (as the experience of recent years has taught us). On the other hand, these same QED effects act generically to ease the task of PTIME simulation. Evidently the QED-restricted gauge-invariant long-range quantum dynamics that Nature imposes upon us has very special properties, which can either hurt us or help us, depending upon how we exploit them.

The wonderful articles of the Royal Society’s recent theme issue “The New SI Based on Fundamental Constants” (Phil. Trans. R. Soc. A, October 28, 2011; 369 (1953)) can be read, with a mathematical eye, as the physics embodiment of André Weil’s paean (written to his sister, the philosopher Simone Weil). Similarly, the Royal Society’s various “New SI” articles show us what holds in regard to *abelian* anyonic dynamics.

A great open question is: Can God similarly carry the day in regard to *nonabelian* anyon dynamics, against a devil who restricts human technologies (and human physiologies) to QED dynamical flows? For example, are topologically protected QED-restricted nonabelian anyonic quantum memories feasible, both in principle and in practice? No one knows … and our present understanding is too feeble (as it seems to me) to foresee the answer with any great confidence.

One reasonable step toward resolving these questions — a step that has not as yet been taken — is to mathematically naturalize the experimentally robust *mises en pratique* of present-day QED-restricted *abelian* metrology triangles, with a practical view toward similarly naturalizing the (hopefully robust?) *mises en pratique* of future QED-restricted *nonabelian* metrology triangles.

**Conclusion.** There’s plenty of scope for Peter Shor and Gil Kalai to work *together* in advancing our present far-from-complete and far-from-natural understanding of anyonic dynamics, both abelian and nonabelian. Needless to say, as a medical researcher I have a vested interest in an ever-more-natural mathematical understanding of quantum dynamical systems, ever-more-efficient simulation of them, and ever-more-robust *mises en pratique* for observing-and-controlling such systems. This is because we can confidently anticipate that the 21st century’s transformational advances in quantum information theory and quantum computing will be accompanied by transformational advances in medical technology and healing. So let’s keep advancing!

Gil,

How exactly do you get around my argument for initializing the state of the nonabelions?

I thought you had agreed that the braiding of well-separated nonabelions gives unitary operations that are topologically protected against errors (up to overall phases, anyway). You seem to have changed your mind about that, and are relying on your hypothesis (*) in a reply above, which as far as I am concerned is a non-argument.

Nick

Dear Nick,

The crux of the matter is the argument or non-argument (*). It is commonly accepted that quantum experimental processes that you run in the laboratory can be simulated on a qubit/gate quantum computer. Namely, you can describe your process by local quantum operations on some microscopic components describing your experiment. The extra ingredient for (*) is that if your experimental process itself does not involve an apparatus of quantum fault-tolerance (on the microscopic level that you use for its simulation) then you can imitate it (by tuning also the level of noise to fit the experiment) on the noisy quantum computer as well.

This is it. I think it is a good argument. I realize that you regard it as a non-argument. If you can say anything more that enlightens me, I will be quite happy and will gladly change my mind if required.

(Regarding our discussion – actually, in this case I did not change my mind. I thought that you agreed that TQC qubits cannot be initialized, based on (*), and I parenthetically remarked that you cannot run a [unitary] quantum computation on (logically) noisy qubits *EVEN* if your gates are noiseless. (*), as good as it may or may not be, applies to creating highly stable qubits, initially or at any later time.)

Hi Gil:

If your gates are noiseless but you start with moderately noisy qubits, but not ones in a completely mixed state, you can indeed do quantum computation. You can run a unitary quantum computation that refrigerates a portion of your qubits at the cost of making the rest very noisy. You can then run your computation on the cold section of your qubits.

So if you can do everything noise free except initialization, this shouldn’t be an obstacle to quantum computers.

Gil:

Re our discussion—I never agreed to anything based on your (*) argument or similar. I thought I had explained that you can indeed initialize the nonabelion “qubits” in a fault-tolerant manner. (I agreed, and agree, that if you have faulty initial qubits, then even noiseless gates won’t help you.)

As far as I am concerned, simulating your experiment on any kind of other quantum device is beside the point. The question is what you can do in the actual device at hand. There is very little point in simulating the computation done by my perfect topological gates in a faulty non-topological quantum circuit. If it doesn’t work in the simulation, that tells us nothing about the original topological device. That is why your argument has always appeared to be begging the question.

Of course, if by “simulate” you mean constructing the topological phase in another system (perhaps in terms of qubits), possibly with a different (but still short-range) Hamiltonian but with a ground state in the same phase, then such a set-up should be just as fault tolerant as the original. (Indeed, people discuss exactly this a lot nowadays, with reference to cold atoms in optical lattices.) But in this point of view, either realization is as good as the other, so again I don’t see what you have learned from the “simulation” that you could not in principle have seen in the original.

In a way the point here is that when we say the TQC is inherently fault tolerant, we are not kidding. We mean that notwithstanding the coupling of e.g. charged electrons to the electromagnetic environment as in the fractional QH effect—so our systems do already contain “noise”—the topological degrees of freedom are stable against decoherence, and fault-tolerant gate operations and initialization are possible, up to exponentially-small corrections.

Dear Nick, I suppose we simply disagree. Perhaps the only point of agreement is when you say “either realization is as good as the other” as indeed the same weakness with TQC applies to other attempts to implement stable qubits based on topological phase that shortcut the bottom-up tedious quantum fault-tolerance approaches. Such shortcuts involve some extra miracle which represents not-sufficiently-detailed modeling of the noisy processes. Your systems do contain “noise” but not the “whole noise” (which will leave you with “nothing but noise”).

Gil,

Please just tell us how errors can occur when we braid well-separated nonabelions adiabatically.

Nick

Nick, as you see from the top of this comment this question is on my mind and I will certainly try to come back to it (but probably not in real-time). In the meantime, here is one to you:

How low-quality should the anyonic qubits be to start with before your belief in noiseless braiding gates will shatter (and why)? Or is it the case that the slightest sign of anyonic behavior that deviates from a maximal-entropy state will suffice to manifest the wonderful noiseless braided-gate phenomenon (and thus to apply your purification argument)?

Hi Gil,

The refrigeration algorithm works as long as (1) the gates are noiseless and (2) the initial state is not completely mixed. You just need more qubits to start with to produce the same number of working qubits after cooling.

There’s a nice classical explanation for this. Suppose we start with *n* bits. Process them three bits at a time. Do a reversible computation that puts the majority of the three bits in the first bit, and encodes which bit is different in the remaining 2. If the bits started out i.i.d. with a bias toward 0, the entropy of the first bit has decreased, and the entropy of the other 2 has increased. Repeat, segregating the lower-entropy bits from the higher-entropy bits. Eventually, you will get nearly all your entropy in the last *k* bits, and have the first *n − k* bits nearly all 0.

Gil, I’ll take the last question first. We’re talking about initializing a set of “topological qubits” so that we have all 0’s. (I think Peter is talking about error-correction during the computation, a different question.) I’m not sure what entropy you mean. My previous discussion was for a state that was pure but in a superposition. If you want to start with a mixed state (density matrix), we can do that also. In fact, you can take the density matrix of maximum von Neumann entropy, which is the equal-weight statistical mixture of all the basis states (bit strings). Then it is easy to see that, measuring each topo qubit, we will find about 50% in the 0 state, with √N fluctuations (for N topo qubits). So if you want some number M of qubits in the 0 state initially, you would need to start with a little more than 2M qubits in this case.
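Peter’s three-bit majority scheme is easy to simulate classically. Here is a minimal sketch (parameters and names are mine) that tracks only the “cold” majority bits; the bias toward 1, and hence the per-bit entropy, drops sharply each round:

```python
import random

random.seed(1)

def cool(bits):
    """One reversible refrigeration round: process bits three at a time,
    keep the majority bit as a 'cold' bit, and (implicitly) push the
    residual information into two 'hot' bits. The underlying 3-bit map
    is a permutation, so it loses no information and could be run as
    noiseless reversible gates; here we only track the cold register."""
    cold = []
    for i in range(0, len(bits) - 2, 3):
        a, b, c = bits[i:i + 3]
        cold.append(1 if a + b + c >= 2 else 0)
    return cold

# Start with i.i.d. bits biased toward 0 (bias p = 0.2 is illustrative).
bits = [1 if random.random() < 0.2 else 0 for _ in range(3 ** 9)]
biases = []
while len(bits) >= 3:
    biases.append(sum(bits) / len(bits))
    bits = cool(bits)
print([round(b, 4) for b in biases])
```

Each round the fraction of 1’s drops from p to roughly 3p² − 2p³ (0.2 → ~0.104 → ~0.031 → ~0.003 …), at the cost of keeping only a third of the bits, matching Peter’s “more qubits to start with” remark.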

To “shatter my belief” in noiseless gates you would need to demonstrate that the braiding of two nonabelions does not produce the ideal unitary asymptotically, up to corrections (or error rates) that are exponentially small in a) separation, b) 1/speed of braid, and c) 1/temperature, in the limit as all these quantities go to infinity. If you have braiding obeying the latter properties, you can do both initialization and computation.

Nick

Hi Nick,

No, I’m talking about the same problem as you are: initialization. I’m just trying to explain a different way of doing it (computational refrigeration).

Peter, thanks, yes, I see now. I had basically overlooked one of your posts dated March 31.

Peter,

Perhaps for improving that ratio you should also have added (3) exchanging the anyons more slowly.

Dear Nick and Peter, thanks for your many enlightening comments. I will relate to some of your points a bit later. Thanks also Craig, David and Aram.

Dear Nick and Peter,

1) Peter: “The progress on creating anyonic materials is proceeding at a rate that within a decade or so I expect people may be able to control these materials reasonably well.”

Certainly this is a project worth pursuing. When we talk about gates/qubits with full-fledged quantum fault-tolerance, the difficulties in controlling the system are modeled in the basic noise/error models. (And the debate is about the modeling.) For proposals that shortcut the traditional qubit/gate approach or parts of it, the “ability to control” is not part of the modeling. This is why I consider such “shortcut” proposals as depending more on some “hidden miracle.” Difficulties in the “ability to control” may reflect engineering issues as well as basic physical obstacles.

2) My simulation argument asserts that if you have such a shortcut you should be able to demonstrate the detailed/microscopic description of your experimental process on a noisy quantum computer without quantum fault-tolerance. This suggests that a detailed modeling on the proposed experimental process (even with ordinary modeling of noise) may be in conflict with the experimental hopes. But, of course, the argument does not replace the need for such a detailed modeling.

Here are Nick’s and Peter’s responses.

Nick: “As far as I am concerned, simulating your experiment on any kind of other quantum device is beside the point. The question is what you can do in the actual device at hand. There is very little point in simulating the computation done by my perfect topological gates in a faulty non-topological quantum circuit. If it doesn’t work in the simulation, that tells us nothing about the original topological device. That is why your argument has always appeared to be begging the question.”

Peter: “The reason that this doesn’t introduce any paradoxes when you simulate it in the quantum gate model is that Hamiltonians that give rise to anyons have quantum fault-tolerant codes hidden inside of them.”

I don’t agree with Nick that this is beside the point. What Peter says is not impossible but seems implausible. The simulation argument suggests that creating a Hamiltonian with hidden quantum fault-tolerance already requires quantum fault-tolerance at some finer level. I cannot exclude the possibility of some natural hidden quantum fault-tolerance resource, but it looks unlikely.

3) Nick: “To “shatter my belief” in noiseless gates you would need to demonstrate that the braiding of two nonabelions does not produce the ideal unitary asymptotically, up to corrections (or error rates) that are exponentially small in a) separation, b) 1/speed of braid, and c) 1/temperature, in the limit as all these quantities go to infinity. If you have braiding obeying the latter properties, you can do both initialization and computation.”

Nick, what you describe is more or less a “threshold-like theorem,” which I have no reason to disagree with. The basic alternative to the threshold-theorem view of the quantum world is simply that the rate of noise scales up for complicated quantum evolutions. Here too, the alternative is simply that you cannot achieve the requirements that you listed for noiseless braid-based gates. (Reminder: for me, noiseless = exponentially small noise.)

4) Initialization: A demonstration of an (almost) noiseless qubit at a prescribed superposition would mean crossing the “fault-tolerance barrier,” and thus the failure of my conjectures. The nice refrigeration argument of Nick and Steve indeed asserts that with noiseless gates and noiseless measurements you can start with an anyonic qubit with arbitrarily small deviation from the maximal-entropy state and refrigerate it to a qubit in an (almost) pure prescribed state. If you are seriously not kidding about this, Nick, I think it is up to you to look here skeptically and examine whether, for realistic implementations, we have a case of unrealistic assumptions begging an unrealistic conclusion.

5) It is certainly an exciting possibility that nonabelions cannot be constructed at all, but it goes beyond the assertion that anyonic qubits cannot be better than other types of qubits. (This question is related to the second part of my talk in the TQC video.) Impossibility of “deep” quantum computation (or even the absence of deep quantum processes in, say, condensed-matter physics) would imply strong conservation laws for topological phases and may support such a possibility.

I cannot help but feel that this line “the rate of noise scales up for complicated quantum evolutions” is attempting to present an alternative as simple when it is really anything but. How does the system know its evolution is complicated? What does complicated mean? Calling this alternative “basic” seems to me crazy when still no one can write down a local Hamiltonian that generates it.

Dear Aram,

*I cannot help but feel that this line “the rate of noise scales up for complicated quantum evolutions” is attempting to present an alternative as simple when it is really anything but.*

Aram, the bottom line is simple; fortunately, there is more to be said.

*How does the system know its evolution is complicated?*

This question reflects what I refer to as the “trivial flaw.” The question takes for granted a reality with quantum computers and a causality structure of universal quantum devices. In our present reality a system is built to carry out an evolution (or a class of evolutions), the errors and risks depend on the quantum device, and the device depends on the evolution.

*What does complicated mean?*

This is, of course, a good question, and the answer is more involved than the very simple bottom line that you quoted. For systems attempting to implement qubit/gate quantum computers, “complicated” can be thought of as “very entangled” in certain technical terms that can be found in my papers.

*Calling this alternative “basic” seems to me crazy when still no one can write down a local Hamiltonian that generates it.*

This represents another “blind spot” in your perception, Aram. You expect an explicit description in terms of local Hamiltonians, and I expect (and largely propose) an implicit description in terms of conditions connecting the evolution and the noise. (Or, in other words, implicit conditions for the noisy perturbations of unitary evolutions.)

Ultimately, for noisy quantum systems that obey my implicit conditions, the noise can be described in terms of local Hamiltonians, and there will be severe locality conditions also on the evolution itself. But note that my implicit conditions extend even to cases where “locality” has no meaning.

Implicit mathematical descriptions in terms of conditions or equations are very basic in mathematical modeling. (This last issue is similar to those I had with Scott Aaronson on the matter of “Navier-Stokes computers”, see this remark and this one.)

If you are interested and have some time, watch the videotaped lectures!

I guess we’ve been over some of this ground before, but controllable systems are usually capable of both complicated & simple evolutions. Do such systems experience unusually correlated noise for both types, or only in the complicated case? Either answer seems to lead to paradox.

It is true that I have a blind spot in that I am unable to understand your noise models apart from their conclusions (QCs don’t work). But are there any physicists who understand your noise models? Even ones who may agree with your conclusions, like Alicki, presumably get there in different ways.

Maybe what I am missing is that the conclusion “QC don’t work” is in fact the only premise? And the rest of your work is trying to explore the consequences of that?

“Maybe what I am missing is that the conclusion ‘QC don’t work’ is in fact the only premise? And the rest of your work is trying to explore the consequences of that?”

Dear Aram, to a large extent this is correct – see Video I 0:05-1:05 and Video II 1:01:50-1:03:44. My purpose is to offer an explanation for why quantum computers cannot work, and drawing consequences from the failure (or just absence) of quantum fault-tolerance is a large part of what I am doing. (When it comes to methods that shortcut quantum fault-tolerance, fully or partially, there are stronger arguments for why they will not work.)

So if “quantum computers cannot work” is the premise of your theory, does your theory lead to any concrete predictions besides “quantum computers cannot work”? If it doesn’t, then I can’t help thinking that there really isn’t much substance to it.

The Kalai postulates, as formally modeled by *varietal dynamics*, have led to concrete experiments, a concrete PhD thesis (coming this summer), concrete patent applications, and (most recently, and most gratifyingly) a concrete tenure-track faculty appointment for Joe Garbini and my graduate student Rio Picone … with no post-doc required.

These are four concrete reasons why we “varietal engineers” have evolved considerable (and increasing) respect for the Kalai Postulates.

John, I agree that varietal dynamics are a powerful and useful simulation tool, but I have to point out that they are not what Gil is proposing. One thing that Gil is clear about is that he wants his conjectures to be compatible with the (conventional) Schrodinger equation.

Aram, for us engineers there is no operational difference between:

• Varietal dynamics that “blows up” to QED field theory, versus

• QED field theory that “pulls back” to varietal dynamics.

The former accepts a dynamical restriction, namely, “Quantum computing Hamiltonians are required to respect Maxwell-Dirac relativistic gauge invariance.” The latter accepts a geometric restriction, namely “The singularities of varietal state-spaces must be computationally and thermodynamically occult.”

It is entirely reasonable (as it seems to me) to suppose that the dynamical restriction is dual to the geometric restriction, i.e. that they are the *same* restriction.

From this dual perspective, it is natural to regard the “M” in Ed Witten’s celebrated M-theory as standing for what the Green Sheet seminar notes call *mises en pratique* in engineering roles, as dual to *la marée montante* in math-and-physics roles. Here we can scarcely go wrong (as it seems to me) by following Grothendieck’s advice to Ronnie Brown:

Following Grothendieck’s advice, it is natural to pull back Gil Kalai’s postulates onto concrete conjectures like:

One nice aspect of this conjecture is that it respects the spirit of the Kalai Postulates (and even formally models them) without tackling divisive articles of faith regarding “The Church of the Larger Hilbert Space.” Instead, this conjecture directs our attention toward innumerable wonderful articles, ranging from Weis and von Klitzing’s ultra-concrete “Metrology and microscopic picture of the integer quantum Hall effect” (2011) to Herwig Hauser’s ultra-abstract “The Hironaka theorem on resolution of singularities (or: a proof we always wanted to understand)” (2002).

The main advantage that pretty much any modern-day researcher possesses — relative to ultra-talented 20th-century polymaths like (e.g.) Paul Dirac and John von Neumann — is that we can draw from a pool of math-and-physics literature that is wider and deeper than Dirac and von Neumann ever dreamed of. And so (as it seems to me) the more deeply we all draw from this pool, the more effectively the Kalai/Harrow debate can fulfill its immense and even strategically and morally crucial potential.

Doh! The second paragraph of the above comment transposed “former” and “latter”.

The *main* point is evident (I hope): the Kalai Postulates can be concretely reduced to Standard Dynamical Conjecture(s), which offer a path toward — that dangerous article — a reasoned, respectful *consensus* shared by the Kalaists and the Harrowites, accompanied by a vigorous flow of creative research and productive enterprises. One hopes so, anyway!

Hi Peter (and Aram and John),

Thanks, all, for the comments.

Peter, the premise of my theory is that *quantum fault-tolerance* is not possible. (And, in any case, that “the fault-tolerance barrier” between quantum systems with quantum fault-tolerance and quantum systems without quantum fault-tolerance is an important “phase transition” that deserves to be studied.) So, for me, the crux of the matter is quantum fault-tolerance/quantum error-correction and not quantum computing. Certainly I am not shy at all about making concrete, non-concrete, and vague predictions based on this premise and offering various connections, applications, and speculations based on it. I am a bit off-line these days due to our Passover holiday, but I will try to gather some of these predictions and post them later. I certainly think that “no quantum fault-tolerance” is by itself an important principle related to many important issues. (Of course, it is a fairly common skeptical view that attempts to build quantum computers will fail but that we will not learn anything new from it. I disagree, and refer to this point for a few minutes in Video II from 12:17.)

Three remarks for now:

1) Some of the substance of my point of view comes from very simple claims based on very simple logic.

Often people think that this claim represents some unlikely conspiracy, because they make the mistake of taking for granted the causality structure imposed by quantum computers. (This leads to the question that Aram asked here in the thread: “how does the system ‘know’ that the evolution is complicated?”)

Often, people require that my conjectures should be supported by an explicit “‘microscopic’ model of noise,” while I find implicit mathematical conditions perfectly adequate.

These two claims represent very simple logic. Still, while they do not involve cuspidal representations or derived functors, to the best of my judgement there would be substance to these claims even if they were incorrect. (But they are both correct.) In any case, I find it interesting to discuss them.

2) If quantum fault-tolerance is impossible (or *when* quantum fault-tolerance is absent for whatever reason) then I would expect that efficient classical approximations are possible. I did not specifically study the connections between the varietal dynamics that John (and Aram) discussed and my conjectures/point of view, but this can certainly be very interesting.

3) A slogan (and an analogy) which describes my point of view is that “*the importance of quantum fault-tolerance to physics is similar to the importance of nondeterministic computation in the theory of computation – their importance is that they cannot be achieved.*” As thought experiments: 1) What would be *your* predictions based on a principle of “no quantum fault-tolerance”? 2) Consider a hypothetical scenario where for a few decades people were trying to find efficient algorithms for NP-complete problems, believing that this is possible. What then could be the additional predictions coming from a point of view that this is simply impossible?

Hi Gil,

Let me try to explain what I mean when I say your theory doesn’t make any predictions.

It seems to me that anyons lead to fault-tolerant quantum computation easily enough that the impossibility of fault-tolerant quantum computation must mean that anyonic materials behave very differently from what quantum mechanics predicts.

However, your thinking seems to be that anyonic materials might work exactly the same as quantum mechanics predicts they will, except for the fact that, if anybody actually manages to control the anyons well enough to do quantum computation, the computation will fail for mysterious reasons.

So the only thing that you are predicting from the assumption of no fault-tolerant quantum computation is that quantum computation won’t work.

Hi Peter,

Thanks for the good comment. Your comment consists of a paragraph on your view and continues with a few sentences on my thinking. Let me respond in two stages. First let me rephrase what you wrote for the bosonic case (which I studied in more detail), and then I will move back to anyons.

**The bosonic analog**

(A’) It seems to me that bosons lead easily enough to BosonSampling, which demonstrates “quantum supremacy,” so the impossibility of (computationally superior) BosonSampling must mean that bosonic materials behave very differently from what quantum mechanics predicts.

Yes, realistic (noisy) bosonic states behave very differently from what quantum mechanics predicts for noiseless bosonic states.

(B’) However, your thinking seems to be that bosonic materials might work exactly the same as quantum mechanics predicts they will, except for the fact that, if anybody actually manages to control the bosons well enough to demonstrate (superior) quantum computation, the computation will fail for mysterious reasons.

I will rephrase the “my thinking” part in a stronger and simpler way as follows:

(C’) Nobody (nature included) can control bosonic states “well enough” to demonstrate superior quantum computation.

For the bosonic case (more precisely, for states based on random Gaussian matrices) I am working out the situation with Guy Kindler, and our work suggests the following more concrete predictions.

(P'1) [Noise-sensitivity] Even for a small amount of noise in the creation process of the bosonic state there will be a very large difference (in norm) between the behavior of the ideal bosonic state and the noisy bosonic state.

(P'2) [Small support] Realistic (noisy) bosonic states are supported on low-degree Hermite polynomials.

(P'2) means that for a fixed amount of noise the noisy versions admit a polynomial-time approximation.
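The low-degree picture behind (P'2) rests on a standard fact about the Gaussian noise operator: applying noise of rate ρ multiplies the degree-k Hermite component of a function by ρ^k, so high-degree components are attenuated first. Here is a minimal numerical sketch of that identity (my illustration, not code from the Kalai–Kindler work), using Gauss–Hermite quadrature so the check is exact up to floating point:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Quadrature nodes/weights for the weight exp(-x^2/2); normalize the weights
# so sums against them integrate with respect to the standard Gaussian.
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2 * np.pi)

def He(k, x):
    """Probabilist's Hermite polynomial He_k evaluated at x."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(x, c)

rho = 0.8  # noise rate: (T_rho f)(x) = E_y[ f(rho*x + sqrt(1-rho^2)*y) ]
for k in range(7):
    acc = 0.0
    for xi, wx in zip(nodes, weights):
        # (T_rho He_k)(xi), computed by quadrature over the fresh Gaussian y
        t_rho = np.sum(weights * He(k, rho * xi + sqrt(1 - rho**2) * nodes))
        acc += wx * t_rho * He(k, xi)
    coeff = acc / factorial(k)  # <T_rho He_k, He_k> / ||He_k||^2
    print(k, round(coeff, 6), round(rho**k, 6))  # the two columns agree
```

The output confirms T_ρ He_k = ρ^k He_k: a noise rate that leaves low degrees essentially intact still damps the degree-k contribution by ρ^k, which is the mechanism behind "supported on low-degree Hermite polynomials."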

Back to anyons: (A) "It seems to me that anyons lead to fault-tolerant quantum computation easily enough that the impossibility of fault-tolerant quantum computation must mean that anyonic materials behave very differently from what quantum mechanics predicts."

Yes, as in the case of bosonic states, a small amount of noise in the process leading to "anyonic materials" will cause them to behave very differently from the ideal model. Both these behaviors are consistent with "what quantum mechanics predicts."

(B) However, your thinking seems to be that anyonic materials might work exactly the same as quantum mechanics predicts they will, except for the fact that, if anybody actually manages to control the anyons well enough to do quantum computation, the computation will fail for mysterious reasons.

My thinking is that

(C) Nobody (nature included) can "control" the anyons well enough to demonstrate quantum fault-tolerance and do quantum computing.

((C) is both stronger and clearer than (B).)

Of course, you may ask why I don't have as concrete predictions for the anyonic case as I have for the bosonic case. I can make a few excuses: the mathematics and physics are deeper and more complicated. The implementation is more mysterious; for example, I am not sure what precisely you refer to as "anyonic materials." Is it a hypothetical theoretical gadget, Peter, or do you really refer to some materials? Do you regard the implementation of the surface code based on superconducting qubits as an "anyonic material"? And finally, I simply did not study the anyonic case in detail.

We can expect to have predictions which are similar to (P’1) and (P’2) in the bosonic case, namely:

(P1) [Noise-sensitivity] A small amount of noise in the creation process of anyonic states leads to a very large difference between the noisy states and the ideal states.

(P2) [Low-degree support] Realistic noisy anyonic states are supported on "low-degree" representations and have polynomial-time approximations.

But unlike the bosonic case I do not have results supporting these predictions, and frankly I don't understand the representation theory well enough to state (P2) in precise mathematical terms, so this is left to be explored.

As I said earlier, the simulation heuristic (which I regard as strong while not ironclad) applies to both anyonic and bosonic computation. (The application for anyons is stronger and more direct.) The simulation heuristic gives a clear prediction that anyonic-based qubits are noisy. It is instructive and fun (and perhaps even practically useful) to consider restrictions of the above two statements, as follows:

The point is that it may be neither computationally necessary, nor computationally feasible, nor even computationally desirable to embrace the goals that some crusaders advocate:

Rather, it may be preferable — in regard to advancing mathematics, science, engineering (and even medicine) — to instead progressively raise the tide of RKC science to submerge the most lustrous crown jewels of quantum physics, that is: (1) abelian quantum metrology triangles of invariant reproducibility and unbounded precision, followed by (2)

nonabelian metrology triangles, also of invariant reproducibility and unbounded precision.

Relative to the 2003–4 QIST Roadmap — which regrettably has neither been peer-assessed nor updated — the RKC/RKP dessein de la marée montante is (as it seems to me) more mathematically natural, more experimentally verifiable, of greater engineering consequence, more stimulating of enterprise, more feasible in scope, more incremental in execution, more readily initiated, more objectively assessable, more respectful of urgent strategic and humanitarian requirements, more compatible with international agreements of long standing, and (most important of all) more inspirational to young researchers.

Needless to say, in this regard it is neither necessary, nor feasible, nor desirable that everyone think alike. Indeed the RKC/RKP dessein de la marée montante already is well underway, and is immersing quantum physics in a rising global appreciation that the Kalai Postulates, for at least the next few decades, may pragmatically be regarded as being effectively correct.

Grammatical Note: The mavens of the French Language StackExchange aver that if the meaning to be conveyed is the English-language noun-phrase "a rising tide of mathematical understanding, regarded as a formalized research programme", and the starting French-language phrase is Serre's "méthode 'marée montante'", which concluded Serre's celebrated (among mathematicians) correspondence with Grothendieck, then an appropriate French idiom is "dessein de la marée montante".

In the language of Serre and Grothendieck, the QIST Roadmap can be appreciated as

le dessein du marteau et du burin ("the programme of the hammer and chisel"), whereas RKC/RKP varietal dynamics can be appreciated as le dessein de la marée montante ("the programme of the rising varietal tide").

Needless to say, in quantum information theory as in algebraic geometry, it's reasonable to pursue both approaches, as history shows us plainly, and common sense affirms, that each strengthens the other.

Hi Peter and John,

I have two additional remarks. First, Peter, it is not the case that my predictions (conjectures) come into play only when somebody tries to achieve quantum computing. In fact, I make an effort to offer predictions that can be tested well before we reach the stage of quantum computers, e.g., predictions for two (physical) qubits and for a single (encoded) qubit, as well as predictions that apply to much more general quantum systems without even the tensor-product structure. (There is an issue that I do not specify the constants and that some of the predictions are formulated in an asymptotic manner. I expect that the constants will be such that the predictions will come into play for quite small systems.) For anyons my prediction is simply that the mass gap allowing quantum computation cannot be approached. In the related attempts to create very stable qubits based on the surface code, I have mentioned various times that I expect my conjectures to come into play in a substantial way already for the proposal for distance-3 codes involving 20 qubits or so. (This may be examined even via detailed simulations.) My prediction is that we cannot create encoded qubits up to small errors (where "qubit" refers to an encoded state with an arbitrary prescribed superposition).

As a matter of fact, some of my conjectures and, in particular, the issue of correlated errors for entangled qubits (which I am sure people are interested in anyhow) can be tested already for the breakthrough experiments with just five qubits by Martinis et al. reported in this paper. (Again, both at the real experimental level and at the detailed-simulation level.) What is needed is to understand the probabilities of simultaneous failures for sets of k qubits, k = 2, 3, 4, 5. Of course, this is much more easily said than done, and we need to formalize what we want to test more precisely, but maybe it can be done already with the existing experimental apparatus. (I would expect a strong positive correlation which will lead to a high probability for "synchronized errors.")

Second, John, in connection with the varietal dynamics that you mentioned several times, and approximations based on low-dimensional algebraic varieties in a high-dimensional Hilbert space, let me note that BosonSampling by itself is a process to reach bosonic states which belong to a small algebraic variety of a high-dimensional Hilbert space. Namely, we consider the variety of decomposable degree-n symmetric tensors in m variables (of dimension nm or so) inside the Hilbert space of all degree-n symmetric tensors in m variables, of dimension $\binom{n+m-1}{n}$. E.g., for n=10 and m=20 we consider the 200-dimensional algebraic variety (of decomposable symmetric tensors parametrized by 10-by-20 complex matrices) inside a 20,000,000-dimensional Hilbert space (symmetric tensors). (But for 3 bosons the dimension of the variety is only roughly half that of the Hilbert space.) The ability of our experimental process to "hit" closely this low-dimensional variety is already a reason for concern. The mere fact that we have a small-dimensional variety does not imply that polynomial-time approximations are possible. But it is possible that, in every such situation, low-degree polynomials in the tangent space to the variety already allow a good approximation for realistic noisy quantum systems which are approximately supported on such a variety (or approach it).

Gil, that is an illuminating observation!
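The dimension count above is easy to verify: the decomposable tensors are parametrized by an n-by-m complex matrix (nm parameters), while the ambient space of degree-n symmetric tensors in m variables has dimension $\binom{n+m-1}{n}$, the number of degree-n monomials. A quick sanity-check sketch:

```python
from math import comb

def sym_tensor_dim(n, m):
    # dimension of degree-n symmetric tensors in m variables
    # (= number of degree-n monomials in m variables)
    return comb(n + m - 1, n)

def decomposable_variety_dim(n, m):
    # (complex) parameters of the decomposable tensors: an n-by-m matrix
    return n * m

n, m = 10, 20
print(decomposable_variety_dim(n, m))  # 200
print(sym_tensor_dim(n, m))            # 20030010, i.e. ~20,000,000
```

So for n=10, m=20 the variety has dimension 200 inside a Hilbert space of dimension 20,030,010, matching the figures quoted in the comment.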

Indeed, the generic experience of simulationists is that the low-dimensional varietal submanifolds immerse in Hilbert space as a space-filling "foam" that supports the flow of noisy/Lindblad trajectories — and thus their PTIME simulation as Carmichael unravelings — while rigorously respecting the various conservation laws, thermodynamic laws, and gauge invariances that are the "crown jewels" of 21st-century quantum dynamics.

Quantum research threads that concretely are "flowing" in this dynamical/geometric direction include (for example) Hamiltonian lattice gauge theories; see for example Buyens, Haegeman, van Acoleyen, Verschelde, and Verstraete, "Matrix Product States for Gauge Field Theories" (arXiv:1312.6654) and Bañuls, Cichy, Cirac, Jansen, Saito, "Matrix Product States for Lattice Field Theories" (arXiv:1310.4118).

Can these algebraic/geometric/varietal simulation methods help to accelerate the experimental observation of nonabelian anyonic field dynamics? Can these methods similarly be adapted to help optimize BosonSampling mises en pratique? Can these methods contribute to the rising tide of everyday technologies that press against quantum limits? Can the research of any young person, who is equipped with a laptop computer, an internet connection, and an inquiring mind, be lifted by this marée montante?

The evident answers to these questions are (as it seems to me and many, and by the evidence of the recent literature) yes, yes, yes, and yes.

Gil, in light of http://www.kurzweilai.net/a-record-quantum-entanglement-103-dimensions (see the paper by Zeilinger's group that the article references), do you think that the errors would scale up with the entanglement of multiple modes of individual photons the same way you predict with respect to individual qubits?

Dear Michael, thanks for the link. Indeed this is an impressive achievement. Also thanks for the excellent question. We can separate the question into two. The first: how will errors scale up for a single boson depending on the number of modes? The second: fixing the amount of error for a single boson, how will errors scale up with the number of bosons? I don't have much to say on the first question except to note that the answer may depend on the bosonic state (at least in our pre-quantum-computer era). Gaussian states are of special interest, of course. I have a few things to say (and I have already said even a few conflicting things) on the second question, and with Guy Kindler we are struggling to relate it to BKS noise-sensitivity. For the second question I expect that errors scale up linearly with the number of bosons (or, more precisely, that the threshold for doing something useful in terms of one-boson error scales like 1 over the number of bosons).
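The linear scaling can already be seen in the simplest toy model, where each boson is independently corrupted with probability ε: the chance that all n bosons survive is (1-ε)^n ≈ e^{-εn}, so holding that chance above a constant forces ε to shrink like 1/n. A back-of-the-envelope sketch (independent errors are assumed purely for illustration; correlated errors are exactly what is at issue in this thread):

```python
from math import exp, log

eps = 0.05  # illustrative per-boson error rate
for n in (5, 10, 20, 40):
    p_all_clean = (1 - eps) ** n          # chance no boson is corrupted
    print(n, round(p_all_clean, 3), round(exp(-eps * n), 3))

# Largest eps keeping the all-clean probability >= 1/2, versus ln(2)/n:
for n in (5, 10, 20, 40):
    eps_max = 1 - 0.5 ** (1 / n)
    print(n, round(eps_max, 4), round(log(2) / n, 4))
```

The second loop shows the tolerable per-boson error rate tracking ln(2)/n, i.e., a threshold that scales like 1 over the number of bosons.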

The simulation heuristic that I discussed with Nick and Peter can tell us something in this case as well. For example, we can ask how errors scale up for a noisy quantum computer without fault-tolerance, with noise tuned so that we can create a single Gaussian boson state with m modes with a fixed amount of noise, when we move to n-boson states. This poses a challenge (less direct compared to anyonic qubits) for proponents of BosonSampling, but, in this case, there are direct and explicit ways to go about it. (As I said, the simulation heuristic does not replace the need for detailed modelling.)

Thank you for that link, Michael.

From the viewpoint of varietal dynamics, the Zeilinger group's work raises the geometrically/algebraically natural question: "What is the lowest-rank varietal manifold, immersed in Hilbert space, upon which pulled-back dynamical trajectories can Lindbladian/Carmichael unravel to yield the observed density matrix?"

As nearly as I can tell, this is not a question that the Zeilinger et al. paper attempts to answer, for the very good reason that (at present) no one knows how to answer such questions (rigorously *or* empirically).

And yet, a large (and rapidly increasing) body of practical quantum simulation experience — also compressed sampling experience, tensor network experience, quantum measurement triangle (QMT) experience, etc. — suggests that the required varietal rank ubiquitously is very much less than linear-projection arguments would suggest.

Why is this? No one knows.

Summary: Computational simulations increasingly suggest that exponentially more dimensions are required of Hilbert dynamical spaces than of varietal dynamical spaces.

Corollary: Varietal dynamical systems comprise (reasonably realistic, reasonably concrete) formal models for the Kalai postulates.

To make this algebraic/geometric point another way, if we ask ourselves "What might a Sure/Shor separator (in the sense of arXiv:quant-ph/0507242) look like concretely?", one answer might be "A varietal dynamical simulation framework, requiring PTIME computational resources, that accurately reproduced the entangled-photon dataset of Krenn et al. (arXiv:1306.0096), and moreover accurately simulated too the (far more difficult to achieve) nine-decimal-place accuracy of next-generation quantum metrology triangles (QMTs, arXiv:1204.6500)."

Philosophers might debate whether a varietal dynamic simulation (with its restricted state-space) was a Sure/Shor separator sensu stricto, and yet such a dynamical framework would be — in the literal sense of a centuries-old engineering joke — "close enough for all practical purposes." :)

These pragmatic considerations are why we engineers are very accepting of the Kalai Postulates.

Hey Gil, this looks interesting: http://arxiv.org/abs/1404.3214

It is written: “Let me quote what Steve wrote about the videos: The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video). ”

However what Steve Flammia actually wrote about the videos was:

“The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video).

I think the high gloss on these videos has the potential to sway low-information bystanders into thinking that there really is a debate about whether quantum computing is possible in principle. So let me be clear.

There is no debate! The expert consensus on the evidence is that large-scale quantum computation is possible in principle.”

Readers of Gil Kalai's Combinatorics and More may perhaps enjoy Jaeyoon Cho's introductory remarks in his preprint "Entanglement area law in thermodynamically gapped spin systems" (arXiv:1404.7616, this week).

From the perspective that Cho's preprint offers, Gil Kalai's postulates can be viewed as (the beginnings of) an "area law" for correlated noise dynamics; similarly, varietal dynamics concerns itself with the geometric and algebraic structures that are supported on Cho's "measure-zero set" of Hilbert states.

Conclusion: Gil's lecture-title "Why Quantum Computers Cannot Work: The Movie!" is enjoyably provocative, and yet (as it seems to me) it is Jaeyoon Cho's perspective that focuses our attention upon those aspects of Gil's postulates that are the most mathematically and scientifically interesting (in the short run) and the most economically and strategically important (in the long run).

Alexander Vlasov raised an excellent point about noise, fault-tolerance, and the extended Church-Turing thesis (ECT). (In this comment, this one, and the follow-up just below it.) Let me rephrase what Alexander is saying:

We have a system consisting of a quantum computer plus noise. This combined system is not less powerful than the quantum computer alone. So if the quantum computer disproves the ECT, some *additional* effect cannot restore the ECT!

Alexander mentioned the analogy with horsepower: if your car's engine has a certain horsepower but because of some imperfection you use only 100 horsepower, the engine still has its full horsepower!

He offered the following analogy/thought-experiment: "Someone has a working quantum computer, but after a factoring test the results are encoded using a standard method with the addition of random numbers. So nobody may use the results (without knowledge of the random numbers), but we may not say that the addition of the noise reduces the power of the quantum computer itself."

This is good food for thought!

I am not an expert in the topic, so apologies if my question is naive. After watching the videos there is a point that bothers me regarding your response. You use the fact that the system is fixed to argue against various points raised (which feels like superdeterminism, but that is a side issue); what bothers me is that you assume that the system being fixed allows an explanation for fantastic things to happen. Fixing the system and not being a general-purpose computer may help explain why those fantastic events do not interfere with our understanding of physics; however, it does not explain how they can happen computationally. Let me give an example: a classical system is deterministic, so it is not against the laws of physics if an adversary flips a particular bit of memory during the computation to change the result. The system being fixed, the theoretical complexity of knowing which bit to flip is trivial. However, in practice, if an adversary wants to find the bit to flip it can be very nontrivial. Of course you can say that this is an intuition coming from the trivial flow, and that looking at the uniformity of these fantastic events with respect to the systems is misleading me to think this way. However, to me this seems to border on begging the question, i.e., you assume that everything we know from the uniform setting and general-purpose computation is misleading, so there cannot be a general-purpose computer. All of our intuitions about computation in practice come from the uniform setting, and you seem to say none of them is valid.

Hi Reader, this is a very good question but let me try to rephrase it just to see that I understand you.

Your points are:

1) The claim that a special purpose device (or what you refer to as “the system being fixed”) allows certain “fantastic” forms of noise to be consistent with the laws of physics does not mean that such fantastic events actually happen!

2) I assume that everything we know for general-purpose computers is misleading. But the fact of the matter is that all of our intuitions about computation in practice come from the uniform (= "general-purpose" or "universal") setting, and I seem to say that none of them are valid.

Is it a fair description?

(I was away on a vacation and I still “owe” (perhaps mainly to myself) a few answers. So I will get to them a bit later.)

Yes, you said it much better than I did.

Let me quickly relate to the points raised by “Reader.” To the second point: it is indeed misleading to assume that everything we know about classical general-purpose computation extends to quantum general-purpose computation.

There are several things to be said regarding the first point. The modeling of noise I propose is not “fantastic.” It seems fantastic if either you take general-purpose quantum computers for granted, or if you take for granted hypothetical quantum evolutions that cannot be realized at all. It is certainly correct that some of my proposals are in conflict with people’s intuitions. For example, I propose that the rate of noise (or the chance of failure) scales up with the complexity of the quantum evolution. This is a fairly mundane and unsurprising proposal unless you take universal quantum computers for granted.

There is one thing I have never understood in your arguments. A classical computer has a bunch of internal switches (transistors, diodes, etc) and a bunch of external controls (switches, LEDs, etc). A quantum computer has a bunch of internal switches (e.g. always-on two-qubit interactions) and a bunch of external switches (e.g. classically-controllable laser pulses, photodetectors). Why do you describe quantum computers as special-purpose and classical computers as general-purpose? What would a special-purpose classical computer look like? A player piano? A computer with mostly ROM and very little RAM, like a CD player? The term “special purpose” seems important to your intuition, but I still do not understand it well enough to be able to re-explain it in my own terms.

Re what Aram wrote:

Aren't all real classical computers special-purpose (e.g., because of finiteness and physical limitations on memory, etc.)? There is no universal computer in reality, it is just a theoretical model. Or is there? E.g., do you consider a MacBook a universal computer or a special-purpose computer? Are we looking for a universal quantum computer, or for something like a quantum device whose behaviour a universal quantum computer is a good model of, the same way Turing machines are good models of MacBook computers?

Re 1: My trouble is that "fantastic" things do look really fantastic, even if we talk about a fixed non-universal machine. OK, the machine is fixed, so _theoretically_ those things can be computed easily. In the same way, the halting problem for any fixed Turing machine is trivially computable. But this doesn't mean it actually takes place. I can give you a fixed machine and both of us can agree that the halting problem for this machine is decidable; however, we may not be able to know whether it halts or not. That is the same issue I have with the fantastic things. They are in principle easy computationally, but I think in reality they have to be computed somehow. Let's say that the noise for your model is exponentially hard to compute classically if we are given the description of the special-purpose quantum machine as input. If we fix the machine then the noise can be much easier. However, this easiness looks similar to the ease of deciding the halting of a fixed Turing machine. In practice there are Turing machines for which we can never decide whether they halt. Same with the noise model. Yes, the description of the machine fixes the noise and makes it easy theoretically, just as the description of a Turing machine fixes the answer to the question of whether it halts. Why can the noise for every special-purpose machine be computed easily in practice?

Re 2: My trouble is that I don't see strong support for the claim that our intuition from classical computers does not apply to quantum computers. In other words, why do you think so? It is not just our intuition about theoretical general-purpose classical machines; it is also our intuition based on practical experience with special-purpose classical machines.

Thanks again and apologies if my questions have trivial answers or don’t make sense because of some basic misunderstanding or confusion.

Aram, perhaps the following terminology will be less confusing: a “quantum device” is a device capable of creating a certain quantum evolution (or a family of quantum evolutions); a quantum computer is a quantum device capable of achieving universal quantum computing.

(We can also relax a bit and refer to a quantum computer as a quantum device capable of achieving computationally-superior quantum computing even if not universal.)

I referred to a quantum device as “a special-purpose quantum device” and to quantum computers in the strict sense as “universal” or “general-purpose” quantum devices, but somehow this terminology confuses you.

I guess the terminology confuses me because the distinction does not seem fundamental, and does not seem to be located in the machine. If a laser has a knob which lets me control its power, then the machine is a “general-purpose” device, but if someone superglues the knob into a fixed position, then it is a “special-purpose” device? Trying to think about border cases rapidly leads me into more and more confusion over these categories.

Aram, I still don’t understand what confuses you. Maybe try to share some more “border cases”.

Aram, so let me try to break up the argument you refer to into small pieces and let me know where you do not agree or do not understand.

1) Quantum computers are hypothetical objects.

2) Quantum devices are not hypothetical: there are plenty of both natural and human-made quantum devices.

3) The behavior of noise for quantum devices may tell us, in principle, if quantum computers are at all possible.

4) For a quantum device it is possible that the noise will depend on the evolution carried by the device.

5) It is possible, in principle, that such a putative dependence over all quantum devices will be systematic.

6) For example, it is possible, in principle, that for all quantum devices the rate of noise will scale up for complicated evolutions. (For some definition of “complicated.”)

7) It is possible, in principle, that such a dependence will imply that quantum computers are impossible.

1. agree, especially if you add adjectives like “large-scale” and “stronger than classical”. I would say that small quantum computers exist.

2-3. agree

4. These terms, like “noise” and “evolution carried”, are imprecise, and without defining them more precisely it’s hard for me to answer. In particular, “noise” refers to unwanted evolution, but this is of course something not objectively defined. I think it will be hard to state this in an observer-independent way.

5-6. Again ill-defined, and here I am deeply skeptical anything will work. Note that these could be true for vacuous reasons, like defining “complicated” to mean “noisy.”

7. sure. :)

Dear Aram,

1) (Also relevant to your earlier comment, and Reader's last comment.) We can define a quantum computer as a quantum device capable of performing a long computation on m qubits (m fixed) with a universal collection of gates, and with negligible errors. With this definition (even for the minimal m = 2) quantum computers are hypothetical, and it is widely believed that implementing quantum fault-tolerance (which is also hypothetical) is necessary for building them.

2) Your claims that noise is "subjective," or "ill-defined," or "not fundamental," are interesting, and, overall, I disagree with you. But this is mainly irrelevant here. The threshold theorem for quantum fault-tolerance is based on a certain modeling of noise, and I propose an alternative way of modeling noise.

3) Discussing quantum devices rather than quantum computers allows more flexibility in the way the noise may depend on the “intended” (or “approximated”) evolution. Both in the ordinary modeling of quantum noise and in my modeling we consider a noisy approximation (or perturbation) to a unitary (“intended” or “ideal”) evolution.

4) Of course, when we consider a specific suggestion on how the errors (or noise) depend on the "ideal evolution," we need to make sure that the dependence will be invariant to small changes of the "ideal" evolution itself. If you look at the specific conjectures prescribing my "detrimental noise" you can see that this is indeed the case.

This applies, for example, for the very specific conjecture asserting that when two qubits have large entanglement then the leak of information for them has substantial positive correlation, also for the conjecture that the rate of noise in a time-interval is lower-bounded by a measure of non-commutativity of the “ideal” evolution in that time-interval, and also for the smoothed-Lindblad-evolution description.

I think Aram's question is: how does Nature know the "intended" evolution for a quantum device? Your framework seems to treat the "intended evolution" differently than "noise". In standard quantum mechanics, you don't need to know the purpose that the device's builders meant it to be used for in order to figure out how it evolves. Will the evolution your framework gives for a device with intended evolution U and a small amount of noise be different from that of a device with intended evolution U' and a small amount of noise? If U is nearly U', the difference between them could be viewed as noise. How can Nature know which of these is the "intended evolution"?

Dear Peter,

You asked: “how does Nature know the “intended” evolution for a quantum device?”

There are two relevant levels for this question. The first refers to the conceptual framework and the second refers to the mathematical modeling. Let us restrict ourselves for simplicity to devices that implement a quantum circuit and start with the conceptual framework.

We can consider

(*) a quantum computer that has the flexibility to perform approximately any sequence of gates on a set of (say five) qubits.

(**) A quantum device that can perform approximately a certain specific sequence of gates on a set of (say five) qubits.

Probably implementing (**) can be easier than implementing (*). (Of course, this is not always the case; implementing (**) by first creating a quantum computer (*) and then using superglue to cripple it, as Aram suggested above, will not make it easier.)

For (**) the nature of the approximation your device achieves may depend on the device. And the device may be very specific to the evolution (or the limited set of evolutions) it is capable of performing. So for devices of type (**) the way the approximation depends on the evolution can be more general than for devices of type (*). And the specific relations for type (**) devices between the approximation and the evolution potentially imply that type (*) devices are impossible.

Talking about “the purpose that the device’s builders meant it to be used” is just confusing. You have to talk instead about “what the device is doing”.

The second level is the level of mathematical modeling. Talking about a noisy evolution as a certain perturbation of a unitary evolution is not new. The type of modeling I propose requires some limitations on what we refer to as “noise,” as well as some consistency tests that we need to pass. Most importantly, as I explained in a previous comment, the conditions on the noise should be invariant to small changes in the “intended” unitary evolution.

"Will the evolution your framework gives for a device with intended evolution U and a small amount of noise be different from that of a device with intended evolution U' and a small amount of noise? If U is nearly U', the difference between them could be viewed as noise. How can Nature know which of these is the "intended evolution"?"

In my setting we consider unitary evolutions where the added noise is described by a POVM-measurement. (Moving from one unitary to another unitary is not viewed as "noise," but the noise can represent moving from a unitary evolution to a mixture of nearby unitary evolutions.) Nature does not need to know what the "intended evolution" is, since the conditions on the noise are invariant to small changes of the intended evolutions and can sometimes be described without even referring to the intended evolution.

(The mathematical price we need to pay is that the noise is described by implicit mathematical conditions.)

P.S. I notice that one can still make the conceptual objection even more abstract by claiming that in nature we just have one huge unitary evolution, with no devices and no objective meaning to "noise". As I mentioned to Aram, I don't think this point of view is very useful/relevant in the specific context of the threshold theorem/quantum fault-tolerance. But we can nevertheless think about this very abstract point of view. Then we are simply interested in restrictions of the huge evolution to a few degrees of freedom (i.e., to small-dimensional Hilbert spaces). For this abstraction, my conjectures propose a description of cases where we have "close to unitary" evolutions on a small-dimensional Hilbert space, and they assert that if we describe this "close to unitary" evolution as a POVM-perturbation of a unitary evolution, then this perturbation satisfies various conditions. (And it does not matter which unitary evolution you choose for the description.)

Dear Gil,

You say

Do you actually have a proof of this statement (“it does not matter which unitary evolution you choose for this description”), or do you just hope that your conjectures satisfy this invariance criterion?

Peter, yes. The way the conjectures are formulated makes sure (or tries hard to make sure) that a) they apply to a noisy evolution which well-approximates a unitary evolution, and b) they are stable under changing the “target” (or “intended”) unitary operation.
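To make the invariance claim concrete, here is a minimal numerical sketch (my own toy parameters, standard numpy only, not from the conjectures themselves): a depolarizing-noise step applied after a unitary U stays about equally close to the ideal channels of U and of a nearby unitary U′, so the noise description by itself does not single out one “intended” evolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # Haar-like random unitary from the QR decomposition of a Ginibre matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def trace_distance(a, b):
    # a, b Hermitian: half the sum of absolute eigenvalues of the difference
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

d, p, eps = 4, 0.05, 0.01          # dimension, noise rate, perturbation size
U = random_unitary(d)

# A nearby "intended" unitary U' = U * exp(i * eps * H) for a random Hermitian H
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (g + g.conj().T) / 2
w, V = np.linalg.eigh(H)
Uprime = U @ (V @ np.diag(np.exp(1j * eps * w)) @ V.conj().T)

def noisy_step(rho):
    # unitary U followed by depolarizing noise of strength p
    return (1 - p) * (U @ rho @ U.conj().T) + p * np.eye(d) / d

dists_U, dists_Up = [], []
for _ in range(200):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    out = noisy_step(rho)
    dists_U.append(trace_distance(out, U @ rho @ U.conj().T))
    dists_Up.append(trace_distance(out, Uprime @ rho @ Uprime.conj().T))

# Both distances are of order p: the noisy evolution approximates U and U'
# about equally well, so "which unitary was intended" is not determined.
print(max(dists_U), max(dists_Up))
```

The design choice here is deliberate: because the noise term p·(I/d − UρU†) is small and U′ differs from U only at order eps, no condition stated at the level of the noise alone can distinguish the two targets.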

Here is the first version of my paper with Guy Kindler on BosonSampling and noise sensitivity (also posted on my homepage; we will post it on the arXiv soon).

1) For a fixed amount of noise, BosonSampling is in P. Moreover, noisy BosonSampling can be approximated by bounded-depth polynomial-time computation.

2) To have robust experimental outcomes in the presence of noise, one needs to specify the noise with an input of exponential size. So all we have is an exponential-time algorithm with an exponential-size input. This demystifies the claim of “quantum supremacy.”

3) When the noise level is at a certain threshold or higher, noise sensitivity makes it unlikely to expect robust outcomes at all.

What I am happy about in the paper is that the Hermite-Fourier approach allows very concrete computations. I expect that our results kick in for a small number of qubits (7-8), but this is something that is mainly left to experiments.
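The noise-sensitivity phenomenon behind the paper can be illustrated numerically. The permanent of an n×n Gaussian matrix is a multilinear polynomial of full degree n, so under entrywise ρ-correlated Gaussian noise its correlation decays like ρ^n. Here is a minimal Monte Carlo sketch (the value of ρ and the trial counts are my own choices, not from the paper), using Ryser's formula for the permanent:

```python
import itertools
import math
import numpy as np

def permanent(a):
    # Ryser's inclusion-exclusion formula: O(2^n) subsets of columns
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# sanity checks: per(I_3) = 1, per(all-ones 4x4) = 4! = 24
assert abs(permanent(np.eye(3)) - 1) < 1e-9
assert abs(permanent(np.ones((4, 4))) - 24) < 1e-9

rng = np.random.default_rng(7)
rho, trials = 0.7, 10000        # entrywise correlation, Monte Carlo sample size
ratios = {}
for n in (2, 3, 4):
    acc = 0.0
    for _ in range(trials):
        x = rng.normal(size=(n, n))
        # y is a rho-correlated noisy copy of x, entry by entry
        y = rho * x + math.sqrt(1 - rho ** 2) * rng.normal(size=(n, n))
        acc += permanent(x) * permanent(y)
    # E[per(X) per(Y)] = rho^n * n! : correlation decays exponentially in n
    ratios[n] = acc / trials / math.factorial(n)
    print(n, round(ratios[n], 3), round(rho ** n, 3))
```

The identity E[per(X)per(Y)] = ρⁿ·n! follows because cross terms between different permutations contain a lone mean-zero Gaussian entry and vanish, leaving n! diagonal terms each contributing ρⁿ.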

Dear Gil, I see the paper has appeared on the arXiv. My congratulations!

My reasoning below certainly does not concern BosonSampling and similar ideas, where statistics is part of the algorithm.

I also would like to point to the work of Troyansky and Tishby [1996] about the problem with sensitivity (see reference 61 and the short discussion in Scott’s work, which you cite as [2]). I myself heard the presentation in 1996 and have a vague impression that it is very close to the ideas of your work.

Dear Alexander, many thanks for your comment! I will certainly look carefully at the paper by Troyansky and Tishby. (Naftali Tishby is a colleague here at HUJI and a close friend since the mid-70s. We chatted briefly about BosonSampling and other physics “fadgets” potentially relevant to quantum computing some months ago.)

Dear Gil, I am glad that you know one of the authors; I myself know about this interesting issue only due to the rare proceedings of the conference. Frankly speaking, when I read the first article about BosonSampling my initial impression was: “but how do they avoid the problem mentioned by Troyansky and Tishby?” and I still do not completely understand that even now.

Alexander Vlasov raised an excellent point about noise, fault-tolerance, and the extended Church-Turing thesis (ECT). (In this comment, this one, and the follow-up just below it.) Let me rephrase what Alexander is saying:

We have a system consisting of a quantum computer + noise. This combined system is not less powerful than the quantum computer alone. So if the quantum computer disproves ECT, some *additional* effect cannot restore ECT!

Alexander mentioned the analogy with horsepower: if your car’s engine has 10^100 horsepower but due to some imperfection you only use 100 horsepower, it still has 10^100 horsepower!

One idea which seems implicit in Alexander’s thinking is that nature has superior quantum computational powers (“quantum supremacy,” for short) even if noise does not allow us to make use of these powers. I think that this point of view is not correct, for the following reasons. (The previous BosonSampling comment and paper exemplify these points quite well.)

1) Quantum computers are hypothetical, and the general quantum evolutions that quantum computers can perform are also hypothetical.

The 10^100-horsepower car exists only in our imagination.

2) Noise sensitivity means that realistic noisy quantum systems depend on an input of exponential size.

When you or nature implement a state which seemingly requires exponential-time computation (in some parameter), robust outcomes depend on exponentially many parameters describing the noise.

3) It is not useful, and incorrect, to think about mixed-state (say bosonic or anyonic) evolutions as “really” representing some pure-state evolution “known” to nature but unknown to us. (This is also related to Nick Read’s parenthetical comment displayed above.) A main reason is that the decomposition of a mixed state into pure states is not unique.

(Analogy: it is not useful, and probably incorrect, to think that at any given moment there are two elephants in your office which, having opposite signs, cancel each other out.)
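The non-uniqueness point is elementary and easy to check numerically: an equal mixture of |0⟩ and |1⟩ and an equal mixture of |+⟩ and |−⟩ give the same density matrix, so no measurement can reveal which ensemble of pure states “really” occurred. A minimal sketch:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(psi):
    # projector |psi><psi| onto a pure state
    return np.outer(psi, psi.conj())

rho_a = 0.5 * proj(ket0) + 0.5 * proj(ket1)    # mixture of |0>, |1>
rho_b = 0.5 * proj(plus) + 0.5 * proj(minus)   # mixture of |+>, |->

# Same density matrix (the maximally mixed state I/2): no experiment can
# tell which pure-state ensemble "nature really used".
print(np.allclose(rho_a, rho_b))  # True
```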

In the example of the car with 10^100 horsepower, this means that if your imperfect car has 100 horsepower, it is physically impossible to distinguish between a description of an imperfect 10^100-horsepower car and an imperfect 100-horsepower car, and we should prefer the second explanation.

Of course the crucial hypothesis is

4) Realistic noisy quantum systems can be simulated in BPP.

Correct asymptotic modeling of realistic noisy quantum evolutions, man-made or natural, gives us an evolution that can be simulated efficiently on a classical computer.
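Hypothesis 4) is a conjecture, not a theorem, but a toy case shows what “classically simulable noise” means: a single-qubit depolarizing step can be reproduced exactly by classically sampling Pauli “error events.” (For many qubits this naive sampler becomes exponentially costly, so it only illustrates the hypothesis; all parameters below are my own choices.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p = 0.2  # depolarizing strength
# Depolarizing noise = apply I, X, Y, Z with these probabilities
kraus_ops = [I2, X, Y, Z]
probs = [1 - 3 * p / 4, p / 4, p / 4, p / 4]

psi = np.array([1, 0], dtype=complex)                       # input state |0>
rho_exact = (1 - p) * np.outer(psi, psi.conj()) + p * I2 / 2

# Classical Monte Carlo over error events reproduces the channel
samples = 50000
rho_mc = np.zeros((2, 2), dtype=complex)
for _ in range(samples):
    k = kraus_ops[rng.choice(4, p=probs)]
    phi = k @ psi
    rho_mc += np.outer(phi, phi.conj())
rho_mc /= samples

print(np.max(np.abs(rho_mc - rho_exact)))  # small sampling error
```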

No, I did not make any claim about the real computational power of Nature. I simply meant that the idea that ECT could be restored due to noise (found in your exchange elsewhere) seems wrong, due to the thought experiment I mentioned.

Dear Gil, I doubt this could be accepted without additional explanations. I suppose that if we saw some car with a 10^100-horsepower engine, we could very simply distinguish it from a car with 100 horsepower after simple research; in the same way we may distinguish a ship with a power plant from a sailing ship even if they have the same speed and size, because we can see that the power plant is working and that the ship is using only a tiny fraction of its power.

In the same way, if we assume the usual model of a quantum computer, we know that the model (the “power plant”) may factor integers in principle, and the model with noise is the analogue of a slowly moving ship with a power plant.

In such an example your argumentation looks like the idea that the power plant is only an illusion and that, in fact, both ships are moving due to the wind.

But in such a case it looks like you would have to suggest, instead of the usual model of quantum computation, some model that may be simulated classically even without noise; then you could indeed say that, due to noise, we cannot distinguish such models or claim quantum supremacy.