Here is an excellent question asked by Liza on “MathOverflow”.

**What is the explanation of the apparent randomness of high-level phenomena in nature? For example the distribution of females vs. males in a population (I am referring to randomness in terms of the unpredictability and not in the sense of it necessarily having to be evenly distributed).**

**1. Is it accepted that these phenomena are not really random, meaning that given enough information one could predict it? If so isn’t that the case for all random phenomena?**

**2. If there is true randomness and the outcome cannot be predicted – what is the origin of that randomness? (is it a result of the randomness in the micro world – quantum phenomena etc…)**

**Where can I find resources about the subject?**

Some answers and links can be found below the question in MO. (The question was closed after a few hours.) More answers and further discussion are welcome here.

Related posts: noise, Thomas Bayes and probability, four derandomization problems, some philosophy of science

And here is a related post on probability by Peter Cameron relating to the question “what is probability”.

I don’t remember where I first heard this idea, but one way to think about what it means to say that a physical process is “random” is to say that there is no algorithm which predicts its behavior precisely which runs significantly faster than the process itself. Morally I think this should be true of many “high-level” phenomena.

@Qiaochu Yuan:

Couldn’t you have processes, though, where you could divide the outcomes into a few categories and quickly and accurately predict which category the outcome will fall into, even though predicting the behavior to a high level of precision would take a lot of time? For instance, it would be possible (I think) to predict with good accuracy whether a given set of initial conditions will lead to a shock wave in an airflow, even though the process by which the shock wave forms might take up much more time.

E. T. Jaynes took the position that “probabilities do not describe reality — only our information about reality.” Many supposedly random phenomena are perhaps more accurately called ergodic: deterministic, but with good mixing properties, like pseudorandom number generators. We can effectively model phenomena as random without settling the question of whether they are “really” random. I suppose that’s a positivist perspective.
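To illustrate the point about pseudorandom number generators, here is a minimal sketch of a linear congruential generator in Python (the constants are the widely used Numerical Recipes parameters; the function name is my own). It is completely deterministic, yet its output looks random to anyone who does not know the seed and the rule:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: fully deterministic, but with
    good enough mixing that the output 'looks' random without the seed."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale to [0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]

# Determinism: restarting with the same seed reproduces the sequence exactly.
gen2 = lcg(seed=42)
assert sample == [next(gen2) for _ in range(5)]
```

Whether we call such a process “random” is then exactly the epistemic question Jaynes raises: the sequence is predictable in principle, but only with information (the seed and constants) the observer may not have.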

For a metaphysical perspective, see this quote from C. S. Lewis.

The OP might want to look at the book “Complexity: A Guided Tour” by Melanie Mitchell. It’s sadly lacking in actual mathematical content, but it deals with just these sorts of questions and could provide references for further study.

Reality and our models of reality must be distinguished. Regarding reality, it is still unclear whether there is a truly random phenomenon in nature (radioactive decay?). Regarding our models: if you are lucky, phenomena in nature can be captured by a random rule; with more luck, by a deterministic rule (which can be seen as a special case of randomness)…but if you are unlucky, they might not be captured at all.

Some time ago I was thinking about a similar question for some formal objects such as relations (= digraphs): some digraphs can be constructed according to deterministic rules (de Bruijn and Kautz graphs based on strings; Cayley graphs based on permutations; Kneser graphs based on subsets; circulants and Paley graphs based on partitions… see for example http://en.wikipedia.org/wiki/Category:Parametric_families_of_graphs), which we can call structured digraphs, while others are constructed by random rules (the Erdős–Rényi model and many others).

Now, are there digraphs for which we can prove that they cannot be constructed by either random or deterministic rules?

proaonuiq, interesting question! I suppose by “constructed by random rules”, you mean that the graph is output with some non-negligible probability by a randomized algorithm. Then, if BPP is strictly smaller than NP, you can let your desired graph encode the solution of some instance in NP – BPP. But I guess this is not very satisfying in some sense.

Lifeofpi, by random graphs I refer to graphs or digraphs obtained by a random process such as the Erdős–Rényi model or others. For a quick general introduction you can look at http://en.wikipedia.org/wiki/Random_graph or at http://mathworld.wolfram.com/RandomGraph.html.

And for a more complete survey

http://www.win.tue.nl/~rhofstad/NotesRGCN2009.pdf.
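For concreteness, the Erdős–Rényi G(n, p) model mentioned above can be sketched in a few lines of plain Python (function name and parameters are my own, for illustration): each of the n(n−1)/2 possible edges is included independently with probability p.

```python
import random

def erdos_renyi(n, p, seed=None):
    """Sample a G(n, p) random graph: each of the n*(n-1)/2 possible
    edges on vertices 0..n-1 is present independently with probability p."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                edges.add((u, v))
    return edges

# With p = 1 every edge appears; with p = 0 none do; a fixed seed
# makes the "random" construction reproducible, echoing the discussion
# above about deterministic versus random rules.
g = erdos_renyi(10, 0.5, seed=0)
```

The point of the thread is visible here: the same procedure is "random" to someone who does not know the seed and perfectly deterministic to someone who does.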

For an example such as lightning, the standard explanation in physics is that macroscopic randomness ultimately comes from thermal entropy via processes such as Brownian motion and “the butterfly effect”. By Brownian motion, I mean here that thermal motion causes vibration of particles, not just that the Brownian stochastic differential equation models this or that behavior. The butterfly effect is the statement that many physical systems such as air flow exhibit chaotic dynamics. Air flow is modeled by the Navier-Stokes equation to good approximation, and that plus a percolation model gives you lightning.

Thermal entropy in turn comes from two sources. It comes from quantum randomness, and it also comes from chaotic dynamics. One explanation of quantum randomness is the phenomenon that if A and B share an entangled state with no entropy, then the marginal state of A alone has entropy.

Even though there are these different flavors of randomness, it is impossible to make clean distinctions among them, or between them and the gambler’s view that randomness is incomplete information. Certainly incomplete information is sometimes a better description of randomness in higher biology. The behavior of a dog may look random to you, but maybe the dog knows its plans and didn’t tell you. (Or instead of a dog, another human!) It is perhaps a philosophical point, but my own view is that all randomness is equivalent and that the distinctions are secondary.

Here is a link, an introduction to what randomness means to me: http://video.ias.edu/Search-for-Randomness. Enjoy!

“Now, are there digraphs for which we can prove that they can not be constructed by random nor deterministic rules ?”

I answer myself: all digraphs can be constructed by a random (for example, uniform) process, but I guess the probability of picking (or obtaining at the end of the process) a structured digraph (as defined above) should tend to 0 as the size of the digraph tends to infinity. So again, in this context structure must be considered as a special case of randomness. So now there are two questions:

— regarding their properties (coloring, hamiltonicity…), do they really differ (i.e., if a property is shown to hold a.s. for random digraphs, can we conclude that it also holds a.s. for the structured ones)?

— is there a deterministic construction process so “powerful” that all digraphs of any size can be constructed according to it, i.e. such that every digraph can be considered as structured?

Gil has asked me to elaborate on a comment I made on one of the answers to the MathOverflow question mentioned above. This has to do with the perceived “randomness” in quantum mechanics. I should preface by saying that I don’t claim to possess any special insight into “randomness” largely due to not having thought about it seriously. The point of my original comment was simply to remark that time evolution in quantum mechanics is deterministic.

In classical mechanics time evolution is given by the flow of a hamiltonian vector field X_H on the phase space of the system under study. This is a first-order ordinary differential equation, and hence, provided that the hamiltonian is sufficiently differentiable, standard theorems guarantee the existence and uniqueness of solutions to the initial value problem. In other words, through each point in the phase space there passes a unique curve x(t) whose velocity x'(t) is given by the value X_H(x(t)) of the hamiltonian vector field at x(t). This is the statement that classical dynamics is deterministic. This does not mean that classical phenomena cannot exhibit “randomness” (just watch this video of a double pendulum: http://www.youtube.com/watch?v=z3W5aw-VKKA ) or chaotic behaviour (will it rain in Edinburgh next Thursday?). This is not due to lack of determinism, but due to incomplete knowledge of the state of the system.
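The double-pendulum point, determinism coexisting with practical unpredictability, can be seen in one line of arithmetic. The logistic map x → 4x(1−x) is completely deterministic, yet a disturbance in the tenth decimal place of the initial condition grows until the two orbits are unrelated (a toy sketch, not a model of any physical system from the discussion):

```python
def logistic_orbit(x0, steps):
    """Iterate the deterministic chaotic map x -> 4x(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-10, 50)  # perturb the 10th decimal place

# The orbits start indistinguishable...
gap_first = abs(a[1] - b[1])
# ...but the tiny perturbation is amplified at each step, and within a
# few dozen iterations prediction of one orbit from the other breaks down.
gap_max = max(abs(x - y) for x, y in zip(a, b))
```

This is exactly the sense in which a deterministic rule plus incomplete knowledge of the initial state produces behaviour we are forced to describe probabilistically.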

Similarly, quantum mechanics is deterministic. A (pure) state of a system is given by a unit-norm vector \psi in a Hilbert space and its time evolution is even simpler: it is now a linear ordinary differential equation (albeit in an infinite-dimensional setting) which says that, in suitable units, i\psi'(t) = H \psi(t), where H is the hamiltonian. Assuming that H is self-adjoint, Stone’s theorem says that there is a (strongly continuous) unitary U(t) such that \psi(t) = U(t)\psi(0). Hence if you know the state of the system at time zero, you know the state at any other time. A similar story applies to mixed states, which are convex linear combinations of pure states.
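This determinism can be checked numerically in the smallest possible case. For a two-level system with the illustrative Hamiltonian H = sigma_x (my choice for the sketch, not anything specific from the discussion), the solution of i psi'(t) = H psi(t) is psi(t) = U(t) psi(0) with U(t) = cos(t) I - i sin(t) sigma_x; the state at any time is fixed by the state at time zero, and the norm is preserved:

```python
import math

def evolve(psi, t):
    """Apply U(t) = cos(t) I - i sin(t) sigma_x to a 2-component state.
    (sigma_x swaps the two components, so U(t) mixes them unitarily.)"""
    c, s = math.cos(t), math.sin(t)
    a, b = psi
    return (c * a - 1j * s * b, c * b - 1j * s * a)

psi0 = (1.0 + 0j, 0.0 + 0j)      # start in the "up" state, unit norm
psi_t = evolve(psi0, 1.234)      # state at a later time, fully determined

# Unitarity: the norm is preserved for every t.
norm = abs(psi_t[0]) ** 2 + abs(psi_t[1]) ** 2
```

The evolution is even reversible: evolving forward and then backward by the same t returns the initial state exactly, which is the numerical face of the statement that the randomness in quantum mechanics enters through measurement, not through time evolution.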

Although it is hard to be certain, when most people refer to the “randomness” in quantum mechanics, they are usually referring to the probabilistic nature of quantum physics. There is a LOT one could say about this. The main problem, I think, stems from trying to get a quantum system to answer classical questions. The point is basically that classical concepts are in some sense approximations to what’s really going on. Of course, our intuition is classical, so it should not be surprising that when we try to apply it to quantum phenomena we find some strangeness. To take but one example, the trajectory of a classical particle makes perfect sense. Hence in the double-slit experiment one might think that it’s valid to ask the question “which slit did the photon go through?” This only makes sense because we are assuming that the photon follows some classical trajectory. In quantum mechanics this concept does not exist, so it is not surprising that one cannot give a precise answer to that question. The best compromise is a probabilistic answer. The same thing applies to the question of “when will a certain particle decay?”, which has killed (or not!) so many imaginary cats in thought experiments.

This story can be made mathematically very precise. In his book on the mathematical foundations of quantum mechanics, George Mackey has what to my mind is a beautiful treatment of the structure of classical and quantum mechanics and of their statistical counterparts from the perspective of the measurements one can make. (The book dates from the 1960s and I read it as a graduate student in the 80s. I have not worked on this topic, so I am not aware of what must certainly be a sizeable modern literature on the topic.)

Dear all,

Many thanks for all the answers!

José, a clarifying answer!

This comment is just to remind again that both classical and quantum mechanics (which is a good, but possibly not the definitive, model) are models of reality, not reality itself.

For me, not having thought about it in depth, so intuitively: what we, observers, call a random phenomenon in reality is just a phenomenon that can be described with a probabilistic model: for instance, a phenomenon in an experiment (such as the double-slit experiment you quote) for which, in response to “exactly” the same conditions, different outputs are obtained, but in such a way that they can be captured by a probabilistic model (in a wide sense).

So as you point out, when an observer says “this phenomenon is random because I can describe it with a probabilistic model” he is making an epistemological assertion, not an ontological one. It is not impossible that, as knowledge advances, this epistemologically random phenomenon can be explained later in a more deterministic manner (that is, same conditions, always same output).

Thank you Gil for promoting this discussion, I have learned a lot.

It is a great pleasure to see people from academia taking part in such public discussions.

José: I can’t really address the issue in just one blog post; I would want at least an hour lecture if not an extended seminar or a long e-mail discussion. But as a student of quantum probability, which is the same as non-commutative probability, I think that the deterministic interpretation of vector states in quantum mechanics is untenable.

A hopelessly short version: A mixed state is the natural quantum generalization of a probability distribution, and a pure state is a mixed state that happens to be extremal. If the algebra of random variables is all bounded operators on a Hilbert space, then the vector states that you emphasize correspond to pure states; but if it is only some of the operators, then actually a vector state can represent any mixed state.

Pingback: Episode 13: What Are the Metaphysical Implications of Quantum Physics? | The Partially Examined Life | A Philosophy Podcast

Pingback: Randomness in Nature II « Combinatorics and more

Pingback: Itamar Pitowsky: Probability in Physics, Where does it Come From? « Combinatorics and more

This is actually a very important topic, but for a reason you might not expect. The existence of randomness in reality is LOGICALLY untenable. The leap of faith required to believe in randomness actually requires one to SUSPEND logic altogether, with the answer being “that’s just the way it is”. This has a more serious effect than you might imagine: it totally removes the need to explore anything beyond our current understanding, because it can now be written off as “random”, and thus causes total and absolute scientific stagnation. If the existence of randomness had been universally accepted 1000 years ago, most of science wouldn’t exist now. Yet here we are in 2010, with some people still believing that “randomness” can exist in reality, knowing full well that this voids the question “why” altogether for things not already understood. And when we stop asking “why”, progress is lost altogether. So this is more serious for human progress than you might have imagined, and people need to wake up. (P.S. If you’re still asking “what about quantum?”: quantum formulae let us predict nothing more than the range of what might happen, based on the known plus the unknown; they do nothing to show that real-life mechanics contain an actual thing called “randomness”. So wake up.)

Navigeteur, I understand only fragments of what you are saying, and disagree.

Gil, discussion is a great thing. It’d be great if you tell me what parts you disagree with, and why you think I’m wrong. I’d love to know. It all seems clear and obvious to me but I may not have communicated it well either.

To explain, take, for example, us as human beings. You know what you are going to do next. That action would be based on your desire and your understanding at that time. The people who know you best may be able to predict it, but to a limited degree (there may be some uncertainty). Others may be able to predict this to a less certain degree (maybe more uncertainty). “Non-human” observers would have absolutely no idea (if they existed) (total uncertainty). The “non-human” observers may be divided between those who say “this is totally random” and those who say “there are reasons but I don’t know what they are” (which is of course true). So, to ask why or not to ask why is the ONLY question which relates to randomness. Only if I can assert that an event has no reason can I assert that it is “random”. So randomness is the GIVING UP or ABDICATION of reasoning, and nothing more. That’s why it’s the birth of scientific stagnation.