One of the exciting directions in applications of computers to mathematics is using them to form new conjectures experimentally. Google’s DeepMind launched an endeavor to use machine learning (and deep learning in particular) for finding conjectures based on data. Two recent outcomes are progress toward the Dyer-Lusztig conjecture (by Charles Blundell, Lars Buesing, Alex Davies, Petar Veličković, and Geordie Williamson) and certain new invariants in knot theory (by Alex Davies, András Juhász, Marc Lackenby, and Nenad Tomasev). There is also a Nature article, Advancing mathematics by guiding human intuition with AI, on these developments. Here are also links to a new MO question and an old one on applications of computers to mathematics.

In a post on the “G-programme” I mentioned the Dyer-Lusztig conjecture (Problem 11) and a much, much more general fantasy (Problem 12), so let me quote this part of the post here.

### C) Regular CW-spheres and Kazhdan-Lusztig polynomials

What happens when you also give up the lattice property? For Bruhat intervals of affine Coxeter groups the Kazhdan-Lusztig polynomial can be seen as a subtle extension of the toric g-vectors, adding additional layers of complexity. Of course, historically Kazhdan-Lusztig polynomials came before the toric g-vectors. (This time I will not repeat the definition and refer the readers to the original paper by Kazhdan and Lusztig, this paper by Dyer, and this paper by Brenti, Caselli, and Marietti.) It is known that for Bruhat intervals with the lattice property the KL-polynomial coincides with the toric *g*-vector. Can one define h-vectors for more general regular CW spheres?

**Problem (fantasy) 12:** Extend the Kazhdan-Lusztig polynomials (and show positivity of their coefficients) to all, or to a large class of, regular CW spheres.

This is a good fantasy with a caveat: it is not even known that KL-polynomials depend only on the regular CW sphere described by the Bruhat interval. This is a well-known conjecture on its own. **(This is the Dyer-Lusztig conjecture.)**

**Problem 11:** Prove that Kazhdan-Lusztig polynomials are invariants of the regular CW-sphere described by the Bruhat interval.

A more famous conjecture was that the coefficients of KL-polynomials are nonnegative for all Bruhat intervals, and not only in the cases where one can apply intersection homology of Schubert varieties associated with Weyl groups. (This is analogous to moving from rational polytopes to general polytopes.) In a major 2012 breakthrough, this was proved by Ben Elias and Geordie Williamson, following a program initiated by Wolfgang Soergel.
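Concretely, for symmetric groups the standard R-polynomial recursion makes these polynomials easy to compute by brute force in small cases. Here is a minimal, self-contained Python sketch (just an illustration, not taken from any of the papers cited above), with permutations as tuples in one-line notation and polynomials in q stored as exponent-to-coefficient dictionaries:

```python
# Illustrative sketch: Kazhdan-Lusztig polynomials P_{x,w} in small symmetric
# groups, computed via the standard R-polynomial recursion.
from functools import lru_cache
from itertools import permutations

def padd(a, b):
    # sum of two polynomials in q
    c = dict(a)
    for e, k in b.items():
        c[e] = c.get(e, 0) + k
        if c[e] == 0:
            del c[e]
    return c

def pmul(a, b):
    # product of two polynomials in q
    c = {}
    for e1, k1 in a.items():
        for e2, k2 in b.items():
            c[e1 + e2] = c.get(e1 + e2, 0) + k1 * k2
    return {e: k for e, k in c.items() if k != 0}

def length(w):
    # Coxeter length = number of inversions
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def bruhat_leq(x, w):
    # tableau criterion for the Bruhat order on the symmetric group
    for i in range(1, len(x)):
        if any(a > b for a, b in zip(sorted(x[:i]), sorted(w[:i]))):
            return False
    return True

def times_s(w, i):
    # right multiplication by the simple reflection s_i (swap positions i, i+1)
    v = list(w)
    v[i], v[i + 1] = v[i + 1], v[i]
    return tuple(v)

@lru_cache(maxsize=None)
def R(x, w):
    # R-polynomial via the recursion on a right descent s of w:
    #   R_{x,w} = R_{xs,ws}                      if xs < x,
    #   R_{x,w} = (q-1) R_{x,ws} + q R_{xs,ws}   if xs > x.
    if x == w:
        return {0: 1}
    if not bruhat_leq(x, w):
        return {}
    i = next(i for i in range(len(w) - 1) if w[i] > w[i + 1])  # right descent
    ws, xs = times_s(w, i), times_s(x, i)
    if length(xs) < length(x):
        return R(xs, ws)
    return padd(pmul({1: 1, 0: -1}, R(x, ws)), pmul({1: 1}, R(xs, ws)))

@lru_cache(maxsize=None)
def P(x, w):
    # Kazhdan-Lusztig polynomial: with N = l(w) - l(x), the defining identity
    #   q^N P_{x,w}(1/q) - P_{x,w}(q) = sum_{x < y <= w} R_{x,y} P_{y,w}
    # plus the degree bound deg P_{x,w} <= (N-1)/2 determine P_{x,w} as minus
    # the low-degree part of the right-hand side.
    if x == w:
        return {0: 1}
    if not bruhat_leq(x, w):
        return {}
    F = {}
    for y in permutations(sorted(x)):
        if y != x and bruhat_leq(x, y) and bruhat_leq(y, w):
            F = padd(F, pmul(R(x, y), P(y, w)))
    bound = (length(w) - length(x) - 1) // 2
    return {e: -k for e, k in F.items() if e <= bound}
```

For S3 all these polynomials equal 1, while already in S4 the sketch recovers the classical smallest nontrivial example, P(x, w) = 1 + q for x = 1234 and w = 3412.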

DeepMind has produced a new theorem using machine learning and synthetic intelligence. The UK and Australian groups reported their new results in a Nature article. DeepMind used machine learning to connect two different areas and found a theorem that is one of the first results connecting the algebraic and geometric invariants of knots. It seems to me, however, that the conclusions of the paper are not warranted by the findings.

In their paper, the DeepMind group writes: “machine learning can aid mathematicians in discovering new conjectures and theorems. We propose a process of using machine learning to discover potential patterns and relations between mathematical objects, understanding them with attribution techniques and using these observations to guide intuition and propose conjectures.” They further write: “The framework helps guide the intuition of mathematicians in two ways” and that they “use machine learning that has led to mathematical insight”.

In science, philosophers distinguish between the context of discovery and the context of justification. Intuition belongs to the context of discovery. The context of discovery cannot be subject to a logical explanation, and no synthetic intelligence or machine learning can imitate it.

Take, for example, Henri Poincaré’s creativity. He found a relationship between automorphic functions and non-Euclidean geometry during an excursion. The events of a trip he took to Coutances made him forget his mathematical work on automorphic functions. Having arrived at Coutances, he entered an omnibus; the moment he put his foot on the step, the idea came to him, without anything in his former thoughts seeming to have prepared him for it: the transformations he had used to define the automorphic functions were identical with those of non-Euclidean geometry. Another example: Einstein arrived at his breakthroughs through gedankenexperiments, such as chasing after a light ray at the speed of light, or a man falling freely from a roof under the influence of gravity.

Synthetic intelligence cannot imitate this process of creativity; it cannot function as the step of the omnibus, because Poincaré’s creativity and his way of solving problems cannot be translated into algorithms. That is, one cannot train a model to go on a trip, put its foot on the step of a bus, and then, Eureka!, find relationships the way Poincaré did. Indeed, one day, while walking on a cliff, another idea came to Poincaré with just the same characteristics of suddenness as at Coutances: he realized that the arithmetic transformations of indefinite ternary quadratic forms were identical with those of non-Euclidean geometry. So now train a model on pictures of cliffs and let us virtually walk on them. Will we be like Poincaré? Of course not.

I share your views, especially on the matter of the “context of discovery” versus deep learning. The argument you support, using the case of Poincaré as a didactic device, could have even more facets. Thank you. Contact me: I found an example, of the nature of a proof, concerning a problem that partly supports your analysis.

Warm regards.

The stories about Einstein and Poincaré are very nice, but I don’t see a reason why computerized systems cannot “see” connections, gain “intuition”, and have Eureka moments. It is not even clear that finding ingenious connections and amazing insights will be harder for them than other mathematical tasks. Of course, so far, the success is quite limited.

Hi Gil, quick question about the Dyer-Lusztig conjecture. As I understand it, it is usually formulated in terms of the Bruhat *graph* (of the interval): the directed graph with an edge from x to y whenever the length of y is greater than that of x and y is obtained from x by multiplication by a transposition. But you seem to have formulated it just in terms of the Bruhat poset. To be more concrete, the Hasse diagram of the Bruhat poset does not contain all the edges of the Bruhat graph, only those corresponding to a multiplication by a transposition that increases the length by exactly one. Do you know if these two formulations are equivalent? In other words, if I hand you the Bruhat poset (of the interval), can you in fact tell me the whole Bruhat graph?
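To make the distinction fully concrete, here is a minimal Python sketch for S3 (just an illustration, using one-line notation): two permutations are joined by an edge of the Bruhat graph exactly when one is the other times a transposition, i.e., when they differ in exactly two positions, while the Hasse diagram keeps only the edges that raise the length by exactly one.

```python
# Minimal illustration: Bruhat graph vs. Hasse diagram of S_3, with
# permutations as tuples in one-line notation.
from itertools import permutations

def length(w):
    # Coxeter length = number of inversions
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def reflection_related(x, y):
    # y = x * t for a transposition t iff x and y differ in exactly two spots
    return sum(a != b for a, b in zip(x, y)) == 2

S3 = list(permutations((1, 2, 3)))
# Bruhat graph: directed edges x -> y with y = x*t and l(x) < l(y)
bruhat_graph = {(x, y) for x in S3 for y in S3
                if reflection_related(x, y) and length(x) < length(y)}
# Hasse diagram: only the edges raising the length by exactly one
hasse = {(x, y) for (x, y) in bruhat_graph if length(y) == length(x) + 1}
```

For S3 the Bruhat graph has exactly one edge beyond the Hasse diagram, namely 123 → 321 (multiplication by the transposition exchanging 1 and 3, which raises the length by three).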

Hi Sam, I vaguely remember that the poset does determine the graph. The original definition of the KL-polynomial is quite combinatorial, but it relies on a certain labeling that cannot be recovered from the poset (or graph). But I will welcome a more knowledgeable answer.

Ah, thanks, I believe I found the relevant paper which explains that the Bruhat order indeed determines the Bruhat graph: M. Dyer, On the “Bruhat graph” of a Coxeter system, Compositio Mathematica (1991): http://www.numdam.org/item/CM_1991__78_2_185_0/.

And now I see that you already linked to this paper of Dyer in the post. Silly me!

A useful review by Ernest Davis: https://arxiv.org/abs/2112.04324v2