Could a neural network detect primes?



I am not looking for an efficient way to find primes (which of course is a solved problem). This is more of a "what if" question.

So, in theory: Could you train a neural network to predict whether a given number n is prime or composite? How would such a network be laid out?


Posted 2017-05-26T15:15:31.397

Reputation: 383


Take a look at – VividD – 2017-05-28T13:56:32.493

If the primes follow a pattern and someone just happens to train a neural network with enough hidden nodes in order to define the classification boundary, I suppose it would work. However, we don't know if that classification exists and even if it did, we would have to prove what the boundary is in order to prove that the neural network did indeed find the correct pattern. – quintumnia – 2017-06-05T07:31:56.757



Early success on prime-number testing via artificial networks is presented in A Compositional Neural-network Solution to Prime-number Testing, László Egri, Thomas R. Shultz, 2006. The knowledge-based cascade-correlation (KBCC) network approach showed the most promise, although the practicality of this approach is eclipsed by other prime-detection algorithms, which usually begin by checking the least significant bit, immediately halving the search space, and then search based on other theorems and heuristics up to $\lfloor \sqrt{x} \rfloor$. However, the work was continued in Knowledge Based Learning with KBCC, Shultz et al., 2006.
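For contrast, the conventional approach described above can be sketched in a few lines; this is a minimal illustration of trial division (not the KBCC network), assuming only the Python standard library:

```python
from math import isqrt

def is_prime(x: int) -> bool:
    """Trial division: check the least significant bit first,
    then odd divisors up to floor(sqrt(x))."""
    if x < 2:
        return False
    if x == 2:
        return True
    if x % 2 == 0:          # least-significant-bit check halves the search
        return False
    for d in range(3, isqrt(x) + 1, 2):
        if x % d == 0:
            return False
    return True
```

For example, `is_prime(97)` returns True, while `is_prime(91)` returns False (91 = 7 × 13).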

There are actually multiple sub-questions in this question. First, let's write a more formal version of the question: "Can an artificial network of some type converge during training to a behavior that will accurately test whether the input ranging from $0$ to $2^n-1$, where $n$ is the number of bits in the integer representation, represents a prime number?"

  1. Can it by simply memorizing the primes over the range of integers?
  2. Can it by learning to factor and apply the definition of a prime?
  3. Can it by learning a known algorithm?
  4. Can it by developing a novel algorithm of its own during training?

The direct answer is yes, and it has already been done in the sense of 1. above, but it was done by over-fitting, not by learning a prime-detection method. We know the human brain contains a neural network that can accomplish 2., 3., and 4., so if artificial networks are developed to the degree most think they can be, then the answer is yes for those as well. There exists no counter-proof excluding any of them from the range of possibilities as of this answer's writing.

It is not surprising that work has been done to train artificial networks on prime-number testing, given the importance of primes in discrete mathematics and their application to cryptography and, more specifically, cryptanalysis. We can see the importance of neural-network detection of prime numbers in research and development on intelligent digital security, in works like A First Study of the Neural Network Approach in the RSA Cryptosystem, G.C. Meletiou et al., 2002. The tie of cryptography to the security of our respective nations is also the reason why not all of the current research in this area will be public. Those of us who may have the clearance and exposure can only speak of what is not classified.

On the civilian end, ongoing work in what is called novelty detection is an important direction of research. Those like Markos Markou and Sameer Singh are approaching novelty detection from the signal-processing side, and those who understand that artificial networks are essentially digital signal processors with multi-point self-tuning capabilities can see how this work applies directly to the question. Markou and Singh write, "There are a multitude of applications where novelty detection is extremely important including signal processing, computer vision, pattern recognition, data mining, and robotics."

On the cognitive mathematics side, the development of a mathematics of surprise, such as Learning with Surprise: Theory and Applications (thesis), Mohammadjavad Faraji, 2016, may further what Egri and Shultz began.

Douglas Daseeco

Posted 2017-05-26T15:15:31.397

Reputation: 7 174

”We know the human brain contains a neural network that can accomplish 2., 3., and 4.” Not really. You can’t calculate big numbers efficiently. Our brain holds rule sets which can be used to deduce the results of some calculations, but those rule sets get incrementally slower and, combined with the limits of our memory, more unreliable the larger the numbers get - and this would be true even if they were implemented in an artificial network. This isn’t comparable to our current understanding of neural networks as a concept. – A. McMount – 2020-06-22T07:48:40.563


I'm an undergraduate researcher at Prairie View A&M University. I figured I would comment because I just spent a few weeks tweaking an MLPRegressor model to predict the nth prime number. It recently stumbled into a very low minimum, where the first 1,000 extrapolations outside of the training data produced error of less than 0.02 percent. Even 300,000 primes out, it was about 0.5 percent off. My model was simple: 10 hidden layers, trained on a single processor in less than 2 hours.

To me, it raises the question, "Is there a reasonable function that produces the nth prime number?" Right now the algorithms become computationally very taxing for extreme n. Check out the time gaps between the most recent largest primes discovered; some of them are years apart. I know it's been proven that if such a function exists, it will not be polynomial.
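A rough reconstruction of this kind of experiment might look like the following. The exact architecture, scaling, and hyperparameters above are not given, so everything here is an assumption; it uses scikit-learn's MLPRegressor to regress the index n onto the nth prime:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def first_n_primes(n):
    """First n primes via a sieve of Eratosthenes."""
    limit = max(100, int(n * (np.log(n + 10) + 3)))  # generous bound on the nth prime
    sieve = np.ones(limit + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False
    return np.flatnonzero(sieve)[:n]

primes = first_n_primes(2000).astype(float)
idx = np.arange(1, 2001, dtype=float).reshape(-1, 1)

# Scale inputs and targets to O(1) to keep optimization stable (a design choice).
x_scale, y_scale = idx.max(), primes.max()
X, y = idx / x_scale, primes / y_scale

# Train on the first 1500 indices; hold out the rest to measure extrapolation.
model = MLPRegressor(hidden_layer_sizes=(32,) * 10, max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])

pred = model.predict(X[1500:]) * y_scale
rel_err = np.abs(pred - primes[1500:]) / primes[1500:]
print(f"median relative error on held-out indices: {np.median(rel_err):.4f}")
```

Because the nth-prime sequence is smooth at this scale (roughly n log n), even a modest regressor can track it closely in-range; the interesting part is how quickly the error grows as the held-out index moves away from the training range.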

Cody K

Posted 2017-05-26T15:15:31.397

Reputation: 11

Welcome to AI.SE! Please note that we allow only answers (as opposed to comments) in the answer section, so I refined your post a bit to focus on addressing the question. For an intro to our site, see the [tour]. – Ben N – 2019-03-29T01:18:30.563

Hi Cody, this wasn't long ago. But I would like to have a chat with you regarding the test you did. Would you be willing to live chat about what you did and what you perceived? I would like to see if there is a possibility to experiment further with this. – mmm – 2019-07-22T18:43:59.713

Did you write this up? I'd love to read a summary if you have it. – cjm2671 – 2020-08-24T16:28:00.207


In theory, a neural network can approximate any given function (by the universal approximation theorem).

However, if you train a network on the numbers 0 to N, there is no guarantee that it will correctly classify numbers outside that range (n > N).

Such a network would be a regular feed-forward network (MLP), since recurrence adds nothing for classifying a single fixed input. The number of layers and nodes can only be found through trial and error.
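As a concrete sketch of that layout (the bit encoding, layer sizes, and range here are my own assumptions, not from the answer), one could feed the binary digits of n into an MLP classifier, e.g. with scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

N_BITS = 12  # work over the integers 0 .. 2**12 - 1

def to_bits(n, width=N_BITS):
    """Fixed-width binary encoding, least significant bit first."""
    return [(n >> i) & 1 for i in range(width)]

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

nums = range(2 ** N_BITS)
X = np.array([to_bits(n) for n in nums])
y = np.array([is_prime(n) for n in nums])

# Train only on numbers below 2**11; the upper half stands in for n > N.
split = 2 ** 11
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=400, random_state=0)
clf.fit(X[:split], y[:split])
print("accuracy inside the training range: ", clf.score(X[:split], y[:split]))
print("accuracy outside the training range:", clf.score(X[split:], y[split:]))
```

Note that the classes are imbalanced: only about 15% of numbers below 2**11 are prime, so always answering "composite" already scores roughly 85%, and raw accuracy alone can be misleading here.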

Thomas W

Posted 2017-05-26T15:15:31.397

Reputation: 967

Universal approximation theorems apply to continuous functions on compact subsets. Prime/not-prime is not that kind of function. – pasaba por aqui – 2018-08-17T10:59:35.687

@pasabaporaqui: In this case the primeness function can be approximated well enough by a continuous function with peaks at the values of primes. So the NN might output a 90% chance of being prime for 6.93 - that is clearly nonsense, but if you discretise the inputs and outputs, you don't really care what the NN would predict for non-integers. I think this answer is basically correct. – Neil Slater – 2018-08-27T09:00:33.610


Yes, it is feasible, but consider that the integer factorization problem is an NP problem and a BQP problem.

Because of this, it is impossible for a neural network based purely on classical computing to find prime numbers with 100% accuracy, unless P = NP.


Posted 2017-05-26T15:15:31.397

Reputation: 99

As the question explains, checking whether a number is prime is not an NP problem. – pasaba por aqui – 2018-08-17T10:52:41.950