Thanks, all, for the comments.

Peter, the premise of my theory is that *quantum fault-tolerance* is not possible. (And, in any case, that “the fault-tolerance barrier” between quantum systems with quantum fault-tolerance and quantum systems without it is an important “phase transition” that deserves to be studied.) So, for me, the crux of the matter is quantum fault-tolerance/quantum error-correction and not quantum computing. Certainly I am not shy at all about making concrete, less concrete, and vague predictions based on this premise, or about offering various connections, applications, and speculations based on it. I am a bit offline these days due to our Passover holiday, but I will try to gather some of these predictions and post them later. I certainly think that “no quantum fault-tolerance” is by itself an important principle related to many important issues. (Of course, it is a fairly common skeptical view that attempts to build quantum computers will fail but that we will not learn anything new from this failure. I disagree, and I refer to this point for a few minutes in Video II from 12:17.)

Three remarks for now:

1) Some of the substance of my point of view comes from very simple claims based on very simple logic.

a) A viable alternative to the common point of view regarding quantum noise is that, simply, the noise scales up for complicated quantum evolutions.

Often people think that this claim represents some unlikely conspiracy, because they make the mistake of taking for granted the causality structure imposed by quantum computers. (This leads to the question that Aram asked here in the thread: “how does the system ‘know’ that the evolution is complicated?”)

b) Understanding quantum noise need not be in terms of a specific noise model but rather in terms of implicit conditions for the noise model.

Often, people require that my conjectures be supported by an explicit “‘microscopic’ model of noise,” while I find implicit mathematical conditions perfectly adequate.

These two claims rest on very simple logic. Still, while they do not involve cuspidal representations or derived functors, to the best of my judgement there would be substance to these claims even if they were incorrect. (But they are both correct.) In any case, I find it interesting to discuss them.

2) If quantum fault-tolerance is impossible (or *when* quantum fault-tolerance is absent, for whatever reason), then I would expect that efficient classical approximations are possible. I have not specifically studied the connections between the varietal dynamics that John (and Aram) discussed and my conjectures/point of view, but this could certainly be very interesting.
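As a toy illustration of why unsuppressed noise tends to make deep quantum evolutions classically approximable (a single-qubit sketch only, not a model of the varietal dynamics or of any specific conjecture above): under a fixed depolarizing rate per gate, the state contracts exponentially fast toward the maximally mixed state, so for deep circuits the trivial classical simulation (outputting uniformly random bits) is exponentially accurate.

```python
import math, random

def rotate_y(bloch, theta):
    """Unitary rotation of the Bloch vector about the y-axis (norm-preserving)."""
    x, y, z = bloch
    return (math.cos(theta) * x + math.sin(theta) * z,
            y,
            -math.sin(theta) * x + math.cos(theta) * z)

def depolarize(bloch, p):
    """Depolarizing noise of strength p shrinks the Bloch vector by (1 - p)."""
    x, y, z = bloch
    return ((1 - p) * x, (1 - p) * y, (1 - p) * z)

def noisy_circuit(depth, p, seed=0):
    """Random single-qubit circuit: `depth` rotations, each followed by noise."""
    rng = random.Random(seed)
    bloch = (0.0, 0.0, 1.0)  # start in the state |0>
    for _ in range(depth):
        bloch = depolarize(rotate_y(bloch, rng.uniform(0, 2 * math.pi)), p)
    return bloch

# Pr[measure 0] = (1 + z)/2, so the bias |z|/2 away from a fair coin
# is bounded by (1 - p)**depth / 2 and decays exponentially with depth.
for depth in (1, 10, 100):
    _, _, z = noisy_circuit(depth, p=0.05)
    print(depth, abs(z) / 2)
```

Of course, fault-tolerance is precisely the machinery designed to defeat this contraction; the sketch only shows what happens in its absence.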

3) A slogan (and an analogy) which describes my point of view is that “*the importance of quantum fault-tolerance to physics is similar to the importance of nondeterministic computation in the theory of computation – their importance is that they cannot be achieved.*” As thought experiments: 1) What would be *your* predictions based on a principle of “no quantum fault-tolerance”? 2) Consider a hypothetical scenario where, for a few decades, people were trying to find efficient algorithms for NP-complete problems, believing this to be possible. What additional predictions would then come from the point of view that this is simply impossible?

**Nick Read** asserts “When we [we ≡ "Hilbertists"?] say the TQC is inherently fault tolerant, we are not kidding. We mean that notwithstanding the coupling of e.g. charged electrons to the electromagnetic environment as in the fractional QH effect — so our systems do already contain ‘noise’ — the topological degrees of freedom are stable against decoherence, and fault-tolerant gate operations and initialization are possible, up to exponentially-small corrections.”

Physically speaking, Hilbertism’s appreciation of TQC is substantially grounded in the (postulated) invariant reproducibility and unbounded accuracy of the quantum metrology triangle (QMT, a.k.a. the quantum realization of Ohm’s law $V = IR$, with $V$, $I$, and $R$ measured independently), and in particular, on the topologically robust quantum coherence that is associated both with the measurement of $V$ (via the inverse AC Josephson Effect, which fixes the Josephson constant $K_J = 2e/h$) and with the measurement of $R$ (via the integer Quantum Hall Effect (IQHE), which fixes the von Klitzing constant $R_K = h/e^2$).
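As an arithmetic aside, the closure of the metrology triangle is exact: combining the Josephson constant $K_J = 2e/h$, the von Klitzing constant $R_K = h/e^2$, and the elementary charge $e$ (as supplied, for example, by a single-electron pump) gives $K_J R_K e = 2$ identically, whatever the numerical values of $e$ and $h$. A minimal numeric sketch (variable names are illustrative; the constants are the 2019 SI exact values):

```python
# Numerical sanity check of the quantum metrology triangle (QMT) closure.
e = 1.602176634e-19  # elementary charge, C (exact by the 2019 SI definition)
h = 6.62607015e-34   # Planck constant, J*s (exact by the 2019 SI definition)

K_J = 2 * e / h      # Josephson constant, Hz/V (inverse AC Josephson effect)
R_K = h / e ** 2     # von Klitzing constant, ohm (integer quantum Hall effect)

# (2e/h) * (h/e^2) * e = 2 exactly; e and h cancel term by term.
closure = K_J * R_K * e
print(closure)
```

The triangle experiments test whether the independently *measured* quantities actually satisfy this identity; the metrological question is whether deviations stay exponentially small, which is exactly where Reflection I below urges humility.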

There are at least three grounds for humility in regard to whether the QMT’s success (to date) assures the future success of TQC:

• **Reflection I** Our present understanding of noise mechanisms generally in QMTs, and particularly in the IQHE, is very far from comprehensive (*e.g.*, per the Weis and von Klitzing work **cited below**).

• **Reflection II** It is not presently known whether an entirely satisfactory simulation of QMTs in general (and of IQHE measurement processes in particular) might be achievable with PTIME computational resources, within the restricted domain of varietal dynamics (regarded as a formal model of the Kalai Postulates and, compatibly, as a *restriction* of Hilbertism to Dirac-Maxwell dynamics).

• **Reflection III** Strenuous experimental efforts to observe nonabelian anyon dynamics have not succeeded to date, even for the simplest systems (*i.e.*, one or two anyons), although, to be sure, some tantalizing hints *are* seen.

——————

In light of these reflections, here is an engineer’s appreciation of the Kalai/Harrow debate (which henceforth is depersonalized by calling it the “Hilbertism/Arnol’dism” debate):

**Hilbertism asserts** “Quantum computing has left the Extended Church-Turing Thesis hanging by a fingernail”; and so we can confidently foresee that TQT (or similar advances in quantum computing) will “force Gil Kalai to admit that he was wrong.”

**Arnol’dism asserts** *La marée étale* (“the slack tide”) of 20th century Hilbert-space dynamics has already given way to a 21st century *marée montante* (“flood tide”) of varietal dynamics; first QMTs, and then TQTs — and eventually (we may hope) *all* of computational dynamics — are destined to be immersed in the rising tide of our ever-deepening mathematical understanding.

**Note** The name “Arnol’dism” is proposed, first, to honor Vladimir Arnol’d’s creative lifetime achievements, which spanned multiple domains of mathematics, physics, and engineering (and even “Pushkinistics”), and second, to honor Arnol’d’s maxim “It is not shameful to be a mathematician, but it is shameful to be *only* a mathematician.”

Arnol’d’s maxim directs our reflections toward …

**Grothendieck’s Wager** Suppose that decades from now (or even centuries from now) Hilbertism is proven to be entirely correct (e.g., by the experimental demonstration of large-scale, fully programmable FTQCs), such that Arnol’dism is entirely discredited. Even then, *it is rational to ardently embrace Arnol’dism’s marée montante*, at least during the next few decades of the 21st century, on the pragmatic grounds that an ever-hotter planet, with ten billion hominids upon it, can scarcely afford to deprecate the ever-improving Arnol’dian simulation technologies upon which the 21st century *already* relies to supply its *marée montante* of absolute requirements for food, clothing, shelter, and medical advances.

To say nothing of the already-urgent and ever-mounting need to create family-supporting jobs that supply these needs.

**Conclusion** Even if Hilbertism is true, our generation of mathematicians, engineers, and scientists is well-advised to embrace Arnol’dism wholeheartedly, so that *la marée montante* of our mathematical understanding — and in consequence *la marée montante* of our PTIME simulation capabilities — rises as high as feasible, and as fast as feasible, for as long as feasible.

The *main* point is evident (I hope): the Kalai Postulates can be concretely reduced to Standard Dynamical Conjecture(s) that offer a path toward (that dangerous article) a reasoned, respectful *consensus* shared by the Kalaists and the Harrowites, accompanied by a vigorous flow of creative research and productive enterprises. One hopes so, anyway! The reduction can be framed as a choice between two dual points of view:

• Varietal dynamics that “blows up” to QED field theory, versus

• QED field theory that “pulls back” to varietal dynamics.

The former accepts a dynamical restriction, namely, “Quantum computing Hamiltonians are required to respect Maxwell-Dirac relativistic gauge invariance.” The latter accepts a geometric restriction, namely “The singularities of varietal state-spaces must be computationally and thermodynamically occult.”

It is entirely reasonable (as it seems to me) to suppose that the dynamical restriction is dual to the geometric restriction, *i.e.* they are the *same* restriction.

From this dual perspective, it is natural to regard the “M” in Ed Witten’s celebrated M-theory as standing for what **the Green Sheet seminar notes** call *mises en pratique* in engineering roles, as dual to *la marée montante* in math-and-physics roles.

Here we can scarcely go wrong (as it seems to me) by following Grothendieck’s advice to Ronnie Brown:

The question you raise “how can such a formulation lead to computations” doesn’t bother me in the least! Throughout my whole life as a mathematician, the possibility of making explicit, elegant computations has always come out by itself, as a byproduct of a thorough conceptual understanding of what was going on. Thus I never bothered about whether what would come out would be suitable for this or that, but just tried to understand — and it always turned out that understanding was all that mattered.

Following Grothendieck’s advice, it is natural to pull back Gil Kalai’s postulates onto concrete conjectures like:

**The Standard Conjecture of Varietal Dynamics** Varietal dynamical simulations can rigorously instantiate, with PTIME computational resources, the invariant reproducibility and unbounded accuracy of quantum metrology triangles.

One nice aspect of this conjecture is that it respects the spirit of the Kalai Postulates (and even formally models them) without tackling divisive articles of faith regarding “The Church of the Larger Hilbert Space.” Instead, this conjecture directs our attention toward innumerable wonderful articles, ranging from Weis and von Klitzing’s ultra-concrete “Metrology and microscopic picture of the integer quantum Hall effect” (2011) to Herwig Hauser’s ultra-abstract “The Hironaka theorem on resolution of singularities (or: a proof we always wanted to understand)” (2002).

The main advantage that pretty much any modern-day researcher possesses, relative to ultra-talented 20th century polymaths like (*e.g.*) Paul Dirac and John von Neumann, is that we can draw from a pool of math-and-physics literature that is wider and deeper than Dirac and von Neumann ever dreamed of. And so (as it seems to me) the more deeply we all draw from this pool, the more effectively the Kalai/Harrow debate can fulfill its immense, and even strategically and morally crucial, potential.

**Peter Shor** wonders “Does your theory lead to any concrete predictions besides ‘quantum computers cannot work’?”

The Kalai postulates, as formally modeled by **varietal dynamics**, have led to concrete experiments, a concrete PhD thesis (coming this summer), concrete patent applications, and (most recently, and most gratifyingly) a concrete tenure-track faculty appointment for Joe Garbini’s and my graduate student Rio Picone … with no post-doc required.

These are four concrete reasons why we “varietal engineers” have evolved considerable (and increasing) respect for the Kalai Postulates.

It is true that I have a blind spot in that I am unable to understand your noise models apart from their conclusions (QCs don’t work). But are there any physicists who understand your noise models? Even ones who may agree with your conclusions, like Alicki, presumably get there in different ways.

Maybe what I am missing is that the conclusion “QCs don’t work” is in fact the only premise? And the rest of your work is trying to explore the consequences of that?
