## Can deep learning be used to help mathematical research?


I am currently learning about deep learning and artificial intelligence and exploring their possibilities, and, as a mathematician at heart, I am curious about how they can be used to solve problems in mathematics.

Seeing how well recurrent neural networks can understand human language, I suppose that they could also be used to follow simple mathematical statements and maybe even come up with some proofs. I know that computer-assisted proofs are increasingly common and that some software can now understand simple mathematical language and verify proofs (e.g. Coq). Still, I've never heard of deep learning applied to mathematical research.

So, I am curious about whether systems like Coq could be combined with deep learning to help mathematical research. Are there any exciting results?


See https://ai.stackexchange.com/q/7416/2444, especially, this answer (and also this one).

– nbro – 2020-05-21T17:36:20.900

I think your question isn't an exact duplicate, because your question is slightly more general, but if you think it's a duplicate, I will close it as such. – nbro – 2020-05-21T17:43:27.650

Yeah we could keep it open since it is slightly more general but the answers to the other question are already very interesting, thanks! – Antoine Labelle – 2020-05-21T18:03:39.480

This is difficult but not impossible. Deep learning is a powerful tool, but its usefulness will depend on how the problems are formulated. I mean it's not easy to make your model do research like this; it depends more on whether these mathematical research questions can be turned into formal problems. – Karam Mohamed – 2020-05-21T19:31:01.220

Look for research published by FAIR (Facebook Artificial Intelligence Research) on the application of language processing (NLP) and deep learning to mathematics. But yes, every class of research problem is different and hence will most likely need to be formulated specially before you can use deep learning. In other words, given enough example mappings between questions (X) and expected answers (Y), you are more likely to successfully build a deep learning model. – CypherX – 2020-05-22T01:54:16.707


Paper on Facebook AI research: "Symbolic Mathematics Finally Yields to Neural Networks. After translating some of math's complicated equations, researchers have created an AI system that they hope will answer even bigger questions." https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/

– Alexander Chervov – 2020-05-22T16:46:55.363


Mathematical equations are generally expressed in a sequential form known as "infix notation", characterised by the placement of operators between operands. To make the order of operations in infix notation unambiguous, many parentheses are needed. Infix notation is more difficult for computers to parse than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +).
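To see why prefix notation needs no parentheses, here is a minimal sketch (my own illustration, not code from any of the papers mentioned): each operator's arity tells the parser exactly how many operands follow, so the expression (2 + 2) * 3 can be read unambiguously from the token stream * + 2 2 3.

```python
def eval_prefix(tokens):
    """Recursively evaluate a list of prefix-notation tokens in place."""
    tok = tokens.pop(0)
    if tok in ('+', '-', '*', '/'):
        a = eval_prefix(tokens)  # first operand
        b = eval_prefix(tokens)  # second operand
        return {'+': a + b, '-': a - b, '*': a * b, '/': a / b}[tok]
    return float(tok)  # leaf: a plain number

# infix: (2 + 2) * 3   ->   prefix: * + 2 2 3
print(eval_prefix('* + 2 2 3'.split()))  # 12.0
```

No bracket handling is required at all; the recursion depth mirrors the shape of the expression tree.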

There is a deep learning approach to symbolic mathematics proposed in the research paper by Guillaume Lample and François Charton. They found an interesting way to use deep neural networks for symbolic integration and for solving differential equations. The paper proposes a syntax for representing mathematical problems, along with methods for generating large datasets that can be used to train sequence-to-sequence models.

Deep Learning for Symbolic Mathematics. This approach essentially represents mathematical problems in prefix notation. First, a symbolic syntax tree is constructed that captures the order and values of the operations in the expression. Second, the tree is traversed from top to bottom and from left to right: if the current node is a primitive value (a number), its value is added to the sequence; if the current node is a binary operation, the operation's symbol is added to the sequence, followed by the representation of the left child node (possibly recursively) and then that of the right child node. This procedure turns any expression tree into a prefix-notation sequence. We can expect further advances in this area with the emergence of better symbolic learning models leveraging attention-based transformers and other neuro-symbolic learning models. Recent work by MIT, DeepMind, and IBM has shown the power of combining connectionist techniques like deep neural networks with symbolic reasoning. Please find the details in the following article.
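The tree-to-sequence procedure described above can be sketched in a few lines. This is my own minimal illustration of the idea (the `Node` class and function names are assumptions, not the paper's actual code): a pre-order traversal of the expression tree emits exactly the prefix sequence the seq2seq model is trained on.

```python
class Node:
    """A node of an expression tree: an operator symbol or a leaf value."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def to_prefix(node):
    """Traverse top-down, left-to-right, emitting a prefix token list."""
    if node is None:
        return []
    # Emit the current node first (number, variable, or operator),
    # then the left subtree, then the right subtree.
    return [str(node.value)] + to_prefix(node.left) + to_prefix(node.right)

# The expression 2 + 3 * x as a tree with '+' at the root.
tree = Node('+', Node(2), Node('*', Node(3), Node('x')))
print(' '.join(to_prefix(tree)))  # + 2 * 3 x
```

Once expressions are flattened this way, a standard sequence-to-sequence architecture can treat symbolic mathematics as a translation problem between token sequences.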

The Neuro-Symbolic Concept Learner