Algorithms that connect neurons to previous layers as well as next



Are there any algorithms, or any evidence, to suggest that it would be better to connect a neuron in layer l of a neural network to particular nodes in the previous layer l-1 as well as to particular nodes in the next layer l+1?

This is obviously contrived, but here is an illustration of what I mean. [diagram: a feedforward network with an extra edge] The thick line with an arrow indicates an edge that leads to a neuron in layer l-1.


Posted 2017-12-31T19:14:13.233

Reputation: 434

Yes. Feedback ANN and Recurrent ANN are types of what you want. – OmG – 2018-01-01T08:11:47.210

Maybe make your question clearer with a diagram? I was not imagining RNNs as an answer, but ResNets with skip connections. Both RNNs and skip connections are useful, in very different ways, and could be explained in an answer. – Neil Slater – 2018-01-01T13:15:15.323
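As a rough sketch of the skip-connection idea mentioned above: in a ResNet-style residual block, a layer's input bypasses the intermediate layers and is added back to their output, so a neuron effectively receives input from a non-adjacent earlier layer. All names and sizes below are illustrative, not from the thread.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """A minimal residual block: two dense layers whose output is
    added back to the block's input (the skip connection)."""
    h = relu(W1 @ x)
    # The skip connection: x bypasses both layers and is added to the output,
    # so later neurons see the earlier layer's activations directly.
    return relu(W2 @ h) + x

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, d)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)  # (4,)
```

With both weight matrices set to zero the block reduces to the identity on its input, which is one reason residual connections ease optimization in deep networks.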

Thanks for adding the diagram. This is closer to RNN in design as it involves loops, and thus necessarily some kind of time step parameter in order to decide which value to feed. I'm not sure how much that kind of feedback connection has been studied, and what name(s) it goes under though. – Neil Slater – 2018-01-02T10:54:07.167
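The feedback/loop interpretation discussed above can be sketched as a recurrent step: an edge pointing "backward" in the network is resolved by unrolling over time, so the value fed back is the activation from the previous time step. This is a minimal sketch, assuming a vanilla Elman-style RNN cell; all names are illustrative.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x):
    """One recurrent step: the hidden state from the previous time step
    feeds back into the layer, which is how a backward edge is unrolled."""
    return np.tanh(W_h @ h_prev + W_x @ x_t)

rng = np.random.default_rng(1)
d_h, d_x, T = 3, 2, 5
W_h = rng.standard_normal((d_h, d_h)) * 0.1  # recurrent (feedback) weights
W_x = rng.standard_normal((d_h, d_x)) * 0.1  # input weights
h = np.zeros(d_h)  # initial hidden state
for t in range(T):
    x_t = rng.standard_normal(d_x)
    h = rnn_step(h, x_t, W_h, W_x)
print(h.shape)  # (3,)
```

This makes the time-step parameter explicit: the loop index t decides which earlier value each feedback edge carries.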

@OmG It looks like recurrent ANNs are the answer the question author was looking for, thanks! Would you like to expand your comment into an answer with some background/explanation? – Ben N – 2018-01-02T18:53:59.637

No answers