
I am looking for a rigorous mathematical proof for finding the local minima of a Hopfield network's energy function. I am after an actual demonstration, not just letting the network keep updating its neurons and waiting until a stable state is observed.

I have looked virtually everywhere, but I found nothing.

Is there a rigorous proof for the Hopfield minima? Could you give me ideas or resources?

Thank you in advance

Thank you for your answer. Sorry if I was not clear, but I meant my question the way I asked it. I understand why a Hopfield network converges to a local minimum, a stable state, etc. (the proof). In fact, I have already read the resources you mentioned, and they only deal with that issue (I found Rojas's book very useful, though!). – ladangvu – 2019-07-22T21:51:47.427

@ladangvu Ok, so what's your question? What more do you want than this? The resources I provided are already quite rigorous, so I really don't get your question. – nbro – 2019-07-22T21:53:38.780

What I am looking for is a demonstration for finding all these stable states. Of course, everybody states that they are the learned patterns, their opposites (negative counterparts), and spurious states. But nowhere did I find a proof that the learned patterns are local minima of the energy function. – ladangvu – 2019-07-22T21:55:09.740
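As a hedged numerical sketch of the claim being asked about (not a proof), one can check that a pattern stored with the Hebbian rule is a local minimum of the standard energy $E(s) = -\frac{1}{2} s^\top W s$ under single-neuron flips. The network size, seed, and variable names below are illustrative assumptions, not from the thread.

```python
import numpy as np

# Illustrative sketch: store one pattern with the Hebbian rule and verify
# it is a single-flip local minimum of the Hopfield energy.
rng = np.random.default_rng(0)
N = 8
p = rng.choice([-1, 1], size=N)          # one +/-1 pattern to store

# Hebbian weights: symmetric, zero diagonal (the standard Hopfield setup)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

E_p = energy(p)
# Flip each neuron in turn: the energy strictly increases, so the stored
# pattern is a local minimum with respect to single-neuron flips.
for k in range(N):
    s = p.copy()
    s[k] = -s[k]
    assert energy(s) > E_p
print("stored pattern is a single-flip local minimum, E =", E_p)
```

This is only a finite check for one stored pattern; the analytical version is the short computation $p^\top (p p^\top) p = N^2$ versus $(p \cdot s)^2 = (N-2)^2$ after one flip.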

@ladangvu What do you mean by "demonstration for finding all these stable states"? Have you read the mentioned paper "On the Convergence Properties of the Hopfield Model"? Even in the initial examples, the author gives examples of states that are local minima of the energy function (that is, where the Hopfield network reaches a stable state). Also, have you read the part of my answer where I say "note that the proofs of convergence of the Hopfield networks actually depend on the structure of the network"? – nbro – 2019-07-22T21:58:36.700

A mathematical demonstration (as we regularly do in multivariable function optimization) for determining the local minimizers and minima of the energy function E. A demonstration like for any function we would like to minimize, for example from my course: f(x, y) = x^3 + y^3 - 3cxy, where c belongs to IR*. After computing the partial derivatives, finding the critical points, and evaluating the determinant of the Hessian at those points, we find that (0, 0) is a saddle point and (c, c) is a local maximizer if c < 0 or a local minimizer if c > 0. – ladangvu – 2019-07-22T22:14:34.827
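For reference, the calculus example in that comment can be checked symbolically. This is a hedged sketch using SymPy (an assumption of this aside, not something used in the thread); it reproduces the Hessian classification of the critical points (0, 0) and (c, c) of f(x, y) = x^3 + y^3 - 3cxy.

```python
import sympy as sp

# Sketch of the second-derivative test for f(x, y) = x^3 + y^3 - 3cxy.
x, y, c = sp.symbols('x y c', real=True)
f = x**3 + y**3 - 3*c*x*y

# The real critical points of grad f = 0 are (0, 0) and (c, c).
H = sp.hessian(f, (x, y))                  # [[6x, -3c], [-3c, 6y]]

det_origin = sp.det(H.subs({x: 0, y: 0}))  # -9*c**2 < 0  -> saddle point
det_cc = sp.det(H.subs({x: c, y: c}))      # 27*c**2 > 0  -> extremum at (c, c)
fxx_cc = H.subs({x: c, y: c})[0, 0]        # 6*c: minimum if c > 0, maximum if c < 0

print(det_origin, det_cc, fxx_cc)
```

The signs match the comment: det H < 0 at the origin gives a saddle, and at (c, c) the sign of f_xx = 6c decides minimizer versus maximizer.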

Yes, I read it and understood those examples. As I am dealing with Hopfield networks, I assume that W is symmetric with a diagonal full of zeros (this is part of the definition of a Hopfield network: the weights w_ij = 0 when i = j). – ladangvu – 2019-07-22T22:18:27.743

$W$ does not necessarily have to be symmetric, and the diagonal elements do not necessarily have to be zero (see part $B$ of the Introduction section). However, some proofs assume e.g. that the elements of the diagonal are non-negative (but it doesn't mean that they can't be positive) and that the matrix is symmetric (see Appendix 5 of the paper). – nbro – 2019-07-22T22:44:18.273

This is a good job @nbro – Nicola Bernini – 2019-08-25T17:48:10.783
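Under the assumptions ladangvu uses (W symmetric, zero diagonal), the standard convergence argument says each asynchronous update can only lower or keep the energy, which is why the network settles into a stable state. The following is a hedged numerical sketch of that monotonicity, with illustrative sizes and seed (none of it is from the thread):

```python
import numpy as np

# Sketch: with W symmetric and zero-diagonal, asynchronous sign updates
# never increase the Hopfield energy E(s) = -1/2 * s^T W s.
rng = np.random.default_rng(1)
N = 10
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                  # Hopfield assumptions: W = W^T, w_ii = 0

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=N)
E = energy(s)
for _ in range(100):                      # random asynchronous updates
    i = rng.integers(N)
    s[i] = 1 if W[i] @ s >= 0 else -1     # sign of the local field (ties -> +1)
    E_new = energy(s)
    assert E_new <= E + 1e-9              # energy is non-increasing
    E = E_new
print("final energy:", E)
```

The one-line reason: flipping s_i changes the energy by -(s_i_new - s_i_old) * h_i with h_i the local field, which is never positive under the sign update (the zero diagonal is what makes this bookkeeping exact).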