Navier-Stokes Fluid Computers

Smart fluid

Terry Tao published a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper, Finite time blowup for an averaged three-dimensional Navier-Stokes equation.

The paper proves a remarkable negative answer to the regularity conjecture for certain variants of the NS equations; more precisely, the main theorem demonstrates finite-time blowup for an averaged Navier-Stokes equation. (This already suffices to show that certain approaches toward a positive answer to the real problem are not viable.) The introduction ends with the following words.

“This suggests an ambitious (but not obviously impossible) program (in both senses of the word) to achieve the same effect for the true Navier-Stokes equations, thus obtaining a negative answer to Conjecture 1.1 (the regularity conjecture for the 3D NS equation)… Somewhat analogously to how a quantum computer can be constructed from the laws of quantum mechanics [Here Tao links to Benioff’s 1982 paper: “Quantum mechanical Hamiltonian models of Turing machines”], or a Turing machine can be constructed from cellular automata such as “Conway’s Game of Life”, one could hope to design logic gates entirely out of ideal fluid (perhaps by using suitably shaped vortex sheets to simulate the various types of physical materials one would use in a mechanical computer). If these gates were sufficiently “Turing complete”, and also “noise-tolerant”, one could then hope to combine enough of these gates together to “program” a von Neumann machine consisting of ideal fluid that, when it runs, behaves qualitatively like the blowup solution used to establish Theorem 1.4. [The paper’s main theorem] Note that such replicators, as well as the related concept of a universal constructor, have been built within cellular automata such as the “Game of Life.”

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice. The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties. In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful. (Emphasis mine.)

Interesting idea!

And what Tao does goes well beyond an idea: he essentially implements this program for a close relative of the NS equations! I am not sure whether universal computing has been established for these systems, but the proof of the finite-time blowup theorem certainly uses some computational-looking gadgets and, as Terry explains, also some form of fault-tolerance.

Somewhat related ideas (unsupported by any results, of course) appeared in the seventh post “Quantum repetition” of my debate with Aram Harrow on quantum computing. (See, e.g., this remark, and this one, and this one.) The thread also contains interesting links, e.g. to Andy Yao’s paper “Classical physics and the Church-Turing Thesis.” In addition to the interesting question:

Does the NS equation in three dimensions support universal (classical) computation?

one can also ask: Can NS equations in two dimensions be approximated at every scale by bounded-depth circuits?

One general question suggested there was the following: “It can be of interest (and perhaps harder compared to the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.”

Another interesting comment by Arie Israel is: “I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’ and those ideas date back at least to the sixties.”

Update: Here is the first paragraph from a nice article by Erica Klarreich entitled A Fluid New Path in Grand Math Challenge on this development in Quanta Magazine:

In Dr. Seuss’s book “The Cat in the Hat Comes Back,” the Cat makes a stain he can’t clean up, so he calls upon the help of Little Cat A, a smaller, perfect replica of the Cat who has been hiding under the Cat’s hat. Little Cat A then calls forth Little Cat B, an even smaller replica hidden under Little Cat A’s hat. Each cat in turn lifts his hat to reveal a smaller cat who possesses all the energy and good cheer of the original Cat, just crammed into a tinier package. Finally, Little Cat Z, who is too small to see, unleashes a VOOM like a giant explosion of energy, and the stain disappears.

And here is a follow-up post on Tao’s blog (and a few more: II, III), and a post on Shtetl-Optimized.

The flip side

Update (June 14): It is worth noting that while the purpose of Tao’s program is to show finite-time blowup of the 3D Navier-Stokes equations, these lines of ideas can (as is often the case) potentially be useful also toward a positive solution of the regularity conjecture. Specifically, one can try to show that the 3D Navier-Stokes equations do not support universal classical computation, and, even more specifically, do not support classical fault-tolerance and error correction. Here too an analogy with quantum computation can be useful: it is expected that for adiabatic processes computation requires a “spectral gap”, and that gapped evolutions with local Hamiltonians support only bounded-depth computation. Something analogous may apply to NS equations in bounded dimensions.

There are many caveats, of course: the quantum results are not proved for D>1; NS equations are non-linear, which weakens the analogy; and showing that the evolution does not support computation does not imply, as far as we know, regularity.

Three more remarks: 1) On the technical level, an important tool for the results on gapped systems with local Hamiltonians is the Lieb-Robinson inequality. (See, e.g., this review paper.) 2) For classical evolutions a repetition mechanism (or the “majority function”) seems crucial for robust computation, and it will be interesting specifically to test whether the 3D Navier-Stokes equations support it. 3) If computation is not possible beyond bounded depth, this fact may lead to additional conserved quantities for NS, beyond the classical ones. (One more, June 28): It looks to me that the crucial question is whether the NS equations support only bounded-depth computation or not, so this distinction captures places where circuit complexity gives clear mathematical distinctions.

When Noise Accumulates

I wrote a short paper entitled “When Noise Accumulates” that contains the main conceptual points (described rather formally) of my work regarding noisy quantum computers. Here is the paper. (Update: Here is a new version, Dec 2010.) The new and exciting Innovations in Computer Science conference in Beijing seemed tailor-made for this kind of work, but the paper did not make it that far. Let me quote the first few paragraphs. As always, remarks are welcome!

From the introduction: Quantum computers were proposed by Feynman and others and formally described by Deutsch, who also suggested that they can outperform classical computers. The idea was that since computations in quantum physics require an exponential number of steps on digital computers, computers based on quantum physics may outperform classical computers. Spectacular support for this idea came with Shor’s theorem, which asserts that factoring is in BQP (the complexity class described by quantum computers).

The feasibility of computationally superior quantum computers is one of the most fascinating and clear-cut scientific problems of our time. The main concern regarding quantum-computer feasibility is that quantum systems are inherently noisy. (This concern was put forward in the mid-90s by Landauer, Unruh, and others.)

The theory of quantum error correction and fault-tolerant quantum computation (FTQC) and, in particular, the threshold theorem which asserts that under certain conditions FTQC is possible, provides strong support for the possibility of building quantum computers.

However, as far as we know, quantum error correction and quantum fault tolerance (and the highly entangled quantum states that enable them) are not experienced in natural quantum processes. It is therefore not clear if computationally superior quantum computation is necessary to describe natural quantum processes.

We will try to address two closely related questions. The first is: what are the properties of quantum processes that do not exhibit quantum fault tolerance, and how can such processes be modeled formally? The second is: what kinds of noise models cause quantum error correction and FTQC to fail?

A main point we would like to make is that it is possible that there is a systematic relation between the noise and the intended state of a quantum computer. Such a systematic relation does not violate the linearity of quantum mechanics, and it is expected to occur in processes that do not exhibit fault tolerance.

Let me give an example: suppose that we want to simulate on a noisy quantum computer a certain bosonic state. The standard view of noisy quantum computers asserts that under certain conditions this can be done up to some error that is described in terms of the computational basis. In contrast, the type of noise we expect amounts to having a mixed state between the intended bosonic state and other bosonic states (that represent the noise).

Criticism: A criticism expressed by several readers of an early version of this paper is that no attempt is made to motivate the conjectures from a physical point of view and that the suggestions seem “unphysical.” What can justify the assumption that a given error lasts for a constant fraction of the entire length of the process? If a noisy quantum computer at a highly entangled state has correlated noise between faraway qubits, as we suggest, wouldn’t it allow signaling faster than the speed of light?

I had a mixed reaction to this criticism. On the one hand, I think that it is important, and may be fruitful, to examine various models of noise while putting the physics aside. Nevertheless, I added a brief discussion of some physical aspects. Here it is:

Detrimental Noise


Disclaimer: It is a reasonable belief (look here, and here), and an extremely reasonable working assumption (look here), that computationally superior quantum computers can be built.

(This post and the draft will be freely updated.) I am struggling to meet the deadline of a week ago for a chapter regarding adversarial noise models for quantum error correction. (Update, Nov 20: here is the draft; comments are welcome. Update, April 23: Here is the arxived paper; comments are welcome!) My hypothetical model is called “detrimental.” (This is one reason substantial math postings have been a bit slow recently; I hope a few will come soon.) This project has been quite central to my research over the last three years, and it often feels like running, over my head, after my own tail that might not be there. So this effort may well be a CDM (“career damaging move”) but I like it nevertheless. It is related to various exciting conceptual and foundational issues.

I do occasionally have a sense of progress (often followed by a backtrack), and for this chapter, rather than describing detrimental noise by various (counterintuitive) properties as I always did, I think I have an honest definition of detrimental noise. Let me tell you about it. (Here is a recent useful guide: a zoo of quantum algorithms, compiled by Stephen Jordan.)

Detrimental noise

Consider a quantum memory with $n$ qubits at a state $\rho_0$. Suppose that $\rho_0$ is a tensor product state. The noise affecting the memory in a short time interval can be described by a quantum operation $E_0$. Let's suppose that $E_0$ acts independently on different qubits and that, for qubit $i$, with some small probability $p_i$, $E_0$ changes its state to the maximum-entropy state $\tau_i$.

This is a very simple form of noise that can be regarded as basic to understanding the standard models of noise as well as detrimental noise.
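As a toy illustration (my own sketch; the function names and the two-qubit example are mine, not from the paper), this basic noise operation on a tensor-product state takes only a few lines of NumPy:

```python
import numpy as np

def single_qubit_noise(rho, p):
    """E_0 on one qubit: with probability p, replace the state
    by the maximum-entropy state tau = I/2."""
    tau = np.eye(2) / 2
    return (1 - p) * rho + p * tau

def product_noise(qubit_states, probs):
    """Apply E_0 independently to each qubit of a tensor-product
    state and return the density matrix of the whole memory."""
    noisy = [single_qubit_noise(r, p) for r, p in zip(qubit_states, probs)]
    full = noisy[0]
    for r in noisy[1:]:
        full = np.kron(full, r)
    return full

# |0><0| on each of two qubits, with noise rates p_1 = p_2 = 0.1
ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho = product_noise([ket0, ket0], [0.1, 0.1])
print(np.round(np.diag(rho), 4))
```

For a product state the noisy state remains a product state; the interesting phenomena below appear only once a unitary evolution entangles the qubits.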

In the standard model of noise, $E_0$ describes the noise of the quantum memory regardless of the state $\rho$ stored in the memory. This is a quite natural and indeed expected form of noise.

A detrimental noise will correspond to a scenario in which, when the quantum memory is at a state $\rho$ and $\rho= U \rho_0$, the noise $E$ will be $U E_0 U^{-1}$. Such noise is the effect of first applying $E_0$ to $\rho_0$ and then applying $U$ to the outcome noiselessly.

Of course, in reality we cannot perform $U$ instantly and noiselessly, and the most we can hope for is that $\rho$ will be the result of a process. The conjecture is that a noisy process leading to $\rho$ will be subject to noise of the form we have just described. A weaker conjecture is that detrimental noise is present in every natural noisy quantum process. I also conjecture that the damaging effects of detrimental noise cannot be canceled or healed by other components of the overall noise. When we model a noisy quantum system, either by the qubits/gates description or in other ways, we make a distinction between “fresh” errors, which are introduced in a single computer cycle (or infinitesimally, when the evolution is described by a continuous model), and the cumulative errors along the process. The basic insight of fault-tolerant quantum computing is that if the incremental errors are standard and sufficiently small then we can make sure that the accumulated errors are as well. The conjecture applies to fresh errors.
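To make the conjugation concrete, here is a NumPy sketch (my own illustration, not from the chapter; all names are mine) of detrimental noise at an entangled state: the fresh noise $E_0$, written in Kraus form, is conjugated by the unitary $U$ that prepared the state.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus_e0(p):
    """Kraus operators of the channel rho -> (1-p) rho + p I/2."""
    return [np.sqrt(1 - 3 * p / 4) * I,
            np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

def apply_e0(rho, p):
    """E_0 acting independently on each of two qubits."""
    out = np.zeros_like(rho)
    for k1 in kraus_e0(p):
        for k2 in kraus_e0(p):
            K = np.kron(k1, k2)
            out += K @ rho @ K.conj().T
    return out

def detrimental(rho, U, p):
    """Detrimental noise at rho = U rho_0 U^{-1}: E = U E_0 U^{-1}."""
    Ud = U.conj().T
    return U @ apply_e0(Ud @ rho @ U, p) @ Ud

# U = CNOT (H x I): maps |00> to the Bell state (|00> + |11>)/sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
U = CNOT @ np.kron(H, I)

rho0 = np.zeros((4, 4), dtype=complex)
rho0[0, 0] = 1                      # |00><00|
bell = U @ rho0 @ U.conj().T

noisy = detrimental(bell, U, 0.1)
print(np.round(noisy.real, 3))
```

By construction, `detrimental(bell, U, p)` equals applying $E_0$ to $\rho_0$ first and then applying $U$ noiselessly; relative to the computational basis, the independent single-qubit faults have become correlated two-qubit errors.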

(Updated: Nov 19. Sorry guys, the blue part was over-simplified and incorrect; but an emergency quantifier replacement seems to have helped, and it looks OK now.) The definition of detrimental noise for general quantum systems that we propose is as follows:

A detrimental noise of a quantum system at a state $\rho$ commutes with some non-identity quantum operation which stabilizes $\rho$.

Note that, just like for the standard model of noise, this description does not specify a single noise operation but rather gives an envelope for a family of noise operations.

In the standard model of noise the envelope ${\cal D}_\rho$ of noise operations when the computer is at state $\rho$ does not depend on $\rho$. For detrimental noise there is a systematic relation between the envelope of noise operations ${\cal D}_\rho$ and the state $\rho$ of the computer. Namely,

${\cal D}_{U\rho} = U {\cal D}_\rho U^{-1}$.

Why is it detrimental?

Detrimental noise leads to highly correlated errors when the state of the quantum memory is highly entangled. This is quite bad for quantum error-correction, but an even more devastating property of detrimental noise is that the notion of “expected number of qubit errors” becomes sharply different from the rate of noise as measured by fidelity or trace distance. Since conjugation by a unitary operator preserves the fidelity metric, the expected number of qubit errors increases linearly with the number of qubits for highly entangled states.
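A worked instance (my sketch, using the standard cat-state circuit) of how conjugation inflates a one-qubit fault: let $U$ be a Hadamard on qubit 1 followed by a chain of CNOTs with control 1, so that

```latex
\[ U\,|0^n\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|0^n\rangle + |1^n\rangle\bigr). \]
% Using HZH = X and the fact that a CNOT propagates an X error from its
% control to its target, a single-qubit fresh error Z_1 on \rho_0 conjugates to
\[ U\,Z_1\,U^{-1} \;=\; X_1 X_2 \cdots X_n . \]
```

Conjugation preserves the trace distance to the noiseless state, yet the number of qubits hit by this single fault has grown from one to $n$.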

Here is another little thing from my paper that I’d like to try on you:

A riddle: Can noise remember the future?

Suppose we plan a process and carry it out up to a small amount of error. Can there be a systematic relation between the errors at some time and the planned process at a later time?