In a previous post I mentioned the question of why mathematics is possible. Among the interesting comments to the post, here is a comment by Tim Gowers:

“Maybe the following would be a way of rephrasing your question. We know that undecidability results don’t show that mathematics is impossible, since we are interested in a tiny fraction of mathematical statements, and in practice only in a tiny fraction of possible proofs (roughly speaking, the comprehensible ones). But why is it that these two classes match up so well? Why is it that nice mathematical statements so often have proofs that are of the kind that we are able to discover?

I think part of the answer is that the evolution of mathematics follows what we can prove: it is quite easy to come up with simple statements that look hopeless (is zeta(3) a normal number? etc. etc.), so the match-up just mentioned is far from perfect. I think we develop an instinct for what kinds of statements are likely to be amenable to our techniques, and from time to time we are surprised by a statement like the Poincare conjecture that turns out to be very very hard.

I wonder whether the true answer might be something like this: there is a ‘random variable’ associated with ‘natural’ mathematical statements, which takes as its value the length of the shortest ‘humanly discoverable’ proof. We know that this variable can take very large values, or even be infinite, but on average it is fairly small (if only because a lot of statements you can write down have simple counterexamples). We get interested in the statements for which the random variable takes a large but, it seems to us, not too large, value. And empirically such statements exist.

Of course, all this is wild speculation. I don’t know how to formulate a precise question, but it would be fascinating if one could somehow do rigorous justice to the intuition that most natural statements have short proofs but a few are more difficult and a few are *much* more difficult. It would be something like a ‘quantitative statistical version of Godel’s theorem’.”
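Gowers' "random variable" picture can be turned into a toy simulation. The mixture weights and length ranges below are purely illustrative assumptions of mine, not anything from the comment; the point is only the qualitative shape he describes: on average fairly small, a heavy tail of hard statements, and a small atom at infinity for statements with no humanly discoverable proof.

```python
import random

random.seed(0)

def sample_proof_length():
    """Toy model (my assumption, not Gowers'): the length of the shortest
    'humanly discoverable' proof of a 'natural' statement is drawn from a
    heavy-tailed mixture -- most statements settle quickly, a few are very
    hard, and a small fraction have no accessible proof at all."""
    r = random.random()
    if r < 0.70:                       # easy: quick proof or simple counterexample
        return random.randint(1, 10)
    elif r < 0.95:                     # hard but humanly feasible
        return random.randint(10, 1000)
    elif r < 0.99:                     # "very very hard" (Poincare-like)
        return random.randint(1000, 10**6)
    else:                              # no humanly discoverable proof
        return float('inf')

lengths = [sample_proof_length() for _ in range(10_000)]
finite = [x for x in lengths if x != float('inf')]
print("share easy (<= 10):  ", sum(x <= 10 for x in lengths) / len(lengths))
print("share infinite:      ", lengths.count(float('inf')) / len(lengths))
print("median finite length:", sorted(finite)[len(finite) // 2])
```

Under any such mixture, the statements we "get interested in" would be the ones whose sampled value is large but not too large, and empirically the middle band is nonempty.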


The first thing you would have to do with your program is define a “natural” statement. I don’t believe this is possible, as what a “natural” statement is depends heavily on the context of known mathematics. For instance, consider the question: “classify all finite simple groups”. It would be completely unmotivated without knowing the previous mathematical facts that (a) finite groups are interesting structures, (b) finite groups can be constructed from finite simple groups, (c) there are a few general classes to which most finite groups belong, and a few sporadic ones, (d) there is some chance that there are only a finite number of sporadic groups, so this problem has a hope of being solvable.

In (c), I meant finite simple groups, of course.

“I don’t believe this is possible, as what a “natural” statement is depends heavily on the context of known mathematics.”

I may be misunderstanding you, but that seems like a non-sequitur. Or rather, it seems like a (correct) argument that any satisfactory definition of “natural” would have to be relative to a given corpus of knowledge. Is there any reason to suppose that it is impossible to come up with such a definition?

I should add that if what you think is impossible is coming up with a definition that is precise enough to prove rigorously some facts about how the lengths of nice proofs of natural statements are distributed, then I agree that it is very unlikely to be possible to do that except perhaps in some toy models of mathematics. That is, while I think it may be possible to define “natural” in an appropriate relative way, there is still a big difficulty about which corpus of knowledge one chooses when formulating a theorem. (In principle it could be a statement about arbitrary corpora satisfying certain conditions, but that seems even harder.) Maybe one could define a “natural building process” that builds up a corpus of mathematics from a few basic facts, and then argue that at each stage one expects a certain distribution of difficulty levels for the current open problems.

You’re right. I hadn’t thought of it, but “naturalness” should be defined relative to a corpus of knowledge. And ease of proof should also be defined relative to a corpus of knowledge. This lets us discuss a situation that I find really interesting, where a statement is natural with respect to one corpus of knowledge, while its ease of proof depends on a completely different corpus of knowledge.

To the extent that mathematics has to do with reasoning about possible existence, or inference from pure hypothesis, a line of thinking going back to Aristotle and developed greatly by C.S. Peirce may have some bearing on the question of How and Why Mathematics is Possible. In that line of thought, hypothesis formation is treated as a case of “abductive” inference, whose job in science generally is to supply suitable raw materials for deduction and induction to develop and test. In this light, a large part of our original question becomes, as Peirce once expressed it —

Is it reasonable to believe that “we can trust to the human mind’s having such a power of guessing right that before very many hypotheses shall have been tried, intelligent guessing may be expected to lead us to the one which will support all tests, leaving the vast majority of possible hypotheses unexamined”? (Peirce, Collected Papers, CP 6.530).

The question may fit the situation in mathematics slightly better if we modify the word hypothesis to say proof. I copied out a more substantial excerpt from that paper here:

C.S. Peirce • The Proper Treatment of Hypotheses

The question of naturalness arises in many areas, from AI and cognitive science to logic and the philosophy of science, most often under the heading of “Natural Kinds”. Given a universe of discourse X, the lattice of All Kinds would be its power set, and we seek the portion of that ordering which constitutes the Natural Kinds, the extensions of concepts or hypotheses that are worth considering in practice.

To the same purpose, Peirce uses the criterion of “admissible hypotheses that seem the simplest to the human mind”.

The following project report outlines the three types of inference — Abductive, Deductive, and Inductive — as treated by Aristotle and Peirce, at least insofar as these patterns of reasoning can be analyzed in syllogistic forms. I did this work by way of exploring how a propositional logic engine might be used to assist in scientific inquiry.

• Functional Logic : Inquiry and Analogy

It looks a bit cobbled together to my eyes today and probably could use a rewrite, but I did put a lot of work into the diagrams and remain rather pleased with those.

It looks like that site is down at present. There’s another copy of the paper here:

• Functional Logic : Inquiry and Analogy

Are you sure that equation is correct in the top-left corner of the picture?

Of course it is, it is the standard formula for the solutions of the quadratic equation 4a^2x + 4abx + b^2-b+4ac = 0. 🙂

Damn it, I meant 4a^2x^2 + 4abx + b^2-b+4ac = 0
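The joke can be checked by completing the square: since 4a²x² + 4abx + b² = (2ax + b)², the corrected equation is equivalent to (2ax + b)² = b − 4ac, whose solutions are x = (−b ± √(b − 4ac))/(2a). Assuming (my reading of the joke, not stated explicitly) that the picture showed this formula with b rather than b² under the root, a quick numeric sanity check:

```python
import math

def roots_from_picture(a, b, c):
    """The formula presumably shown in the picture: note sqrt(b - 4ac),
    with b rather than b**2 under the root (this is the joke)."""
    d = math.sqrt(b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def joke_poly(a, b, c, x):
    """The corrected equation from the comment:
    4a^2 x^2 + 4ab x + (b^2 - b + 4ac)."""
    return 4 * a**2 * x**2 + 4 * a * b * x + b**2 - b + 4 * a * c

# Sample coefficients chosen so that b - 4ac >= 0 and the roots are real.
a, b, c = 1, 5, 1
for x in roots_from_picture(a, b, c):
    assert abs(joke_poly(a, b, c, x)) < 1e-9
print("both roots check out")
```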

yes think these are key questions on the boundaries/frontiers of math. the terra incognita. there is some strong connection here to kolmogorov theory & chaitins constant. suggest some answer to the question lies in a new category called quasi algorithms. automated theorem proving seems to impinge on these questions also.

Pingback: The Kadison-Singer Conjecture has been Proved by Adam Marcus, Dan Spielman, and Nikhil Srivastava | Combinatorics and more

There is a remark by Dawkins about “improving a design by evolution” versus “improving a design by an engineer”: in the first case the design can be changed only through small steps, with the restriction that all of the “intermediate” designs must themselves function well. In spirit this is similar to the “natural building process” metaphor used before by Tim, but I think it is still worth adding, as it emphasizes the constrained nature of each new step. For example, in his logic textbook Manin remarks that most definitions used in mathematics have a very small (less than 4) maximal number of nested alternating quantifiers. If true, this could be due to some cognitive constraint that may always be present when people generate new mathematics. Of course it is not clear whether this sort of thinking can answer the original question, but it does suggest looking for the nature of the limitations on the small steps by which human mathematics advances.
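The quantifier-alternation measure behind Manin's remark can be sketched in a few lines. The representation below (a formula reduced to its quantifier prefix, with 'A' for ∀ and 'E' for ∃) is my simplification; it ignores quantifiers nested inside the matrix of the formula, but it captures the count being referred to: adjacent quantifiers of the same kind merge into one block, and only alternations increase the depth.

```python
def alternation_depth(prefix):
    """Count maximal blocks of alternating quantifiers in a prefix.
    E.g. ['A', 'E', 'A'] (forall-exists-forall) has depth 3, while
    ['A', 'A', 'E'] has depth 2, since adjacent identical quantifiers
    merge into a single block."""
    depth = 0
    prev = None
    for q in prefix:
        if q != prev:
            depth += 1
            prev = q
    return depth

# The epsilon-delta definition of a limit:
# forall eps, exists delta, forall x: 0 < |x - a| < delta -> |f(x) - L| < eps
print(alternation_depth(['A', 'E', 'A']))       # 3: within Manin's bound
print(alternation_depth(['A', 'A', 'E', 'E']))  # 2: adjacent blocks merge
```

The ε-δ definition of a limit, at depth 3, sits right at the edge of the bound Manin observes, which matches the folklore that it is one of the hardest definitions for students to absorb.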

Pingback: What Is A Theorem That A Human May Prove It? | Inquiry Into Inquiry

Maybe it’s because even pure mathematics doesn’t just study arbitrary concepts, but concepts that are grounded in experience (e.g. groups <- symmetry, manifolds <- phase spaces, natural numbers <- counting, coordinates <- the space in which we live). This seems to be a good starting point for "naturalness". That it coincides with provability maybe suggests that either the universe is somehow logically or rationally constructed, and/or that mathematics simplifies these concepts to the point of tractability.

I think asking why math is possible but not easy is like asking why reading novels is possible but not easy. Novels will be written to be as complex as people can understand, but not more so. And we will solve the hardest math problems we can but no more.

If we were much less good at math, then we would find arithmetic to be interesting, and logarithms to be at the frontier, like Fermat’s last theorem.

If we were much better at math, then Fermat’s last theorem would be boring like arithmetic, and we would be working on vastly harder problems, like NP vs coNP or something.

Somehow related, but another question: are some mathematical “facts” true only by accident, with no relation to other parts of mathematics? I do not mean “something we may choose as we wish”, as when, for example, one chooses to work in a framework without the axiom of choice, but something which is visible in mathematics (as a theorem, an interesting fact, etc.) and has no explanation or proof at all. A naive example: why can a map be colored with “4” colors? Why 4? Why an even number here? etc.

There are two examples that come to mind:

1. Independence of some theorems (sometimes axioms) from a given set of sentences. AFAIK all such examples have in common that they are independent from an artificially chosen set of “axioms”, or from a “theory” constructed by humans. But are they “truly independent”? Is there any form of “true independence”, as opposed to “relative independence” from whatever we choose as a base system? I presume not. So if we start from another set of axioms, or with another theory, we obtain other “independence relations”.

2. Some strange facts (cryptomorphism in matroid theory, in the sense of G.-C. Rota, comes to mind here) which at the end of the day give us the insight that they are parts of a more general view in which they have their own explanations.

So is the only possibility here, in mathematics, relative independence and curious coincidences which have their own explanations, OR do there exist truly random mathematical facts which are “independent” in a “global, objective” sense, just as there exist non-computable functions? It looks like there is no such thing as a truly random mathematical relation or theorem…

Pingback: Przypadkowa matematykla – wpis wtóry | FIKSACJE

Consider a system of axioms A. We can choose different axioms A’ and obtain a theory equivalent to that derived from A. A bigger change would be to replace one of the axioms of A with a statement that is undecidable from A; it is possible that the replaced axiom now becomes undecidable from the new axioms. We can change the axioms somewhat as we change a basis in a vector space. In the case of a vector space, for a particular problem it may be more convenient to work with a particular basis. Similarly, we choose the axioms which are of most interest for the questions we want to investigate. The questions tend to be interconnected, and cluster together in the space of statements of a particular field. By choosing the axioms within the cluster, we are more likely to ask questions that are “closer”, in terms of length of a proof, to our axioms, than questions that require lengthy proofs or that are undecidable, although the latter may be far more numerous than the former. Of course, there can be exceptions: other clusters of interesting questions may exist in the space of statements of the field. Then we may have a derived field or a subfield of interest, and if a theorem connects the second cluster to the first one, we would call it “the fundamental theorem of …” whatever the second cluster is called. If we don’t know such a theorem, maybe we can conjecture a connection.

My main conjecture is that the statements of interest cluster together, and we tend to pick the axioms within the cluster.
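The cluster picture can be made concrete in a small sketch. Everything below is a hypothetical toy: nodes are statements, an edge means one statement is derivable from another in a single short step, and breadth-first distance from the axiom set stands in for proof length; statements unreachable from the axioms play the role of undecidable ones.

```python
from collections import deque

def proof_distance(edges, axioms):
    """BFS over a toy 'statement graph'.  Returns, for each statement
    reachable from the axiom set, the minimal number of derivation
    steps needed; unreachable statements are simply absent."""
    dist = {a: 0 for a in axioms}
    queue = deque(axioms)
    while queue:
        s = queue.popleft()
        for t in edges.get(s, ()):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

# Hypothetical graph: one dense cluster near the axioms, and a second
# cluster reachable only through a single 'fundamental' theorem.
edges = {
    'ax1': ['p1', 'p2'], 'ax2': ['p2', 'p3'],
    'p1': ['p4'], 'p2': ['p4', 'fundamental'], 'p3': [],
    'fundamental': ['q1'], 'q1': ['q2'], 'q2': [],
    # 'independent' has no incoming edges at all
}
d = proof_distance(edges, ['ax1', 'ax2'])
print(d)                     # first-cluster statements sit at small distances
print('independent' in d)    # the stand-in for an undecidable statement
```

Picking the axioms inside the cluster is exactly what keeps the typical distance small; moving an axiom elsewhere would reshuffle which statements look easy, hard, or out of reach.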

Pingback: Avi Wigderson’s: “Integrating computational modeling, algorithms, and complexity into theories of nature, marks a new scientific revolution!” (An invitation for a discussion.) | Combinatorics and more