## How do you actually teach an ANN the weights resulting from different training inputs?


I thought I had implemented the code (from scratch, no library) for an artificial neural network (my first endeavour in the field), but I feel like I am missing something very basic or obvious.

To keep it short: the code works for a single pair of input/output values but fails for sets of value pairs. I do not really understand the training process, so I want to get this issue out of the way first. The following is my improvised training procedure (i.e. all I could think of) in pseudocode.

trainingData = [{in: [0,0], out: [0]}, {in: [0,1], out: [0]}, ...]
iterations = 10000

network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)

while (trainingData not empty) {
    for (i = 0; i < iterations; i++) {
        set = trainingData.pop()

        updateInput(network, set.in)
        forwardPropagate(network, links)
        linkUpdate = backPropagate(network, links, set.out)
        updateLinks(linkUpdate, links)
    }
}


Is this how it is supposed to work? Do you feed in your training data set by set (while-loop)?

Edit 1: removed my final comment because it distracted from the issue at hand.

Edit 2: less wordy, more code-y

Who said it pushes in the opposite direction? It may or it may not... It may also push it in a skewed direction... Also, backprop with momentum exists for the problem of excessive weight oscillation. – DuttaA – 2018-06-22T07:31:15.100

Also to be noted: how do you learn? Learning a new mathematical formula does not necessarily null and void your previous knowledge. – DuttaA – 2018-06-22T07:34:06.457

Removed my comment for clarity. – Col. Cool – 2018-06-22T08:25:00.257

You are asking the question a bit vaguely...can you make it more mathematical in nature? – DuttaA – 2018-06-22T10:21:42.677

Do you mean in terms of what I have set up so far or do you mean in terms of describing my problem? I am afraid you mean the latter which could be an issue. Would pseudo code be acceptable? Hold on... Edited question. – Col. Cool – 2018-06-22T11:25:54.563

No, it's not advisable to post pseudocode on this site... All you need to do is describe your question a bit more definitively, i.e. what you really want to know. – DuttaA – 2018-06-22T14:00:18.223

## Answers

2

Your network must have something that persists, like weights and biases.

Your new implementation would look like this:

trainingData = [{in: [0,0], out: [0]}, {in: [0,1], out: [0]}, ...]
iterations = 10000

network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)
weights = []

while (trainingData not empty) {
    for (i = 0; i < iterations; i++) {
        set = trainingData.pop()

        weights = updateInput(network, set.in)
        forwardPropagate(network, links)
        linkUpdate = backPropagate(network, links, set.out, weights)
        updateLinks(linkUpdate, links)
    }
}


In short: retain the weights, and backpropagate.
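A hypothetical minimal sketch of that point (the `Link` class and `update_links` helper are illustrative names, not from the question's code): the link weights are created once before training, and every backprop step mutates that same persistent state, so learning from one sample carries over to the next.

```python
class Link:
    def __init__(self, weight):
        self.weight = weight  # persists across all training samples

def update_links(link_updates, links):
    # Apply the gradient step to the SAME link objects each time,
    # rather than rebuilding the weights for every sample.
    for link, delta in zip(links, link_updates):
        link.weight -= delta

links = [Link(0.3), Link(-0.1)]       # initialised once, before training
update_links([0.05, -0.02], links)    # one backprop step
update_links([0.01, 0.00], links)     # the next step builds on the previous one
print([round(l.weight, 2) for l in links])
```

If the weights were instead re-initialised per sample, each update would be thrown away and the network could never accumulate knowledge across the training set.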

1

It looks like you are training your model 10,000 times on one piece of data, and then dropping that piece and moving on to the next.

This will not work: the model will become extremely good at learning one piece of data, but will then forget about it when optimizing for the next piece.

Instead, either pick one example at random in each iteration, or compute the gradient over all four examples and update in that direction.
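As a sketch of the corrected loop order (a hypothetical single sigmoid neuron trained on AND-like data, not the asker's actual network): the epochs form the outer loop, and the full training set is revisited on every pass, so no sample is ever dropped.

```python
import math
import random

# Toy training set: the AND function (assumed here for illustration).
training_data = [
    {"in": [0, 0], "out": 0},
    {"in": [0, 1], "out": 0},
    {"in": [1, 0], "out": 0},
    {"in": [1, 1], "out": 1},
]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # persistent weights
bias = random.uniform(-1, 1)
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for epoch in range(10000):          # outer loop: epochs
    for sample in training_data:    # inner loop: EVERY example, every epoch
        x, target = sample["in"], sample["out"]
        # forward pass
        y = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        # backward pass: gradient of squared error through the sigmoid
        delta = (y - target) * y * (1 - y)
        # update the persistent weights in place
        weights = [w - lr * delta * xi for w, xi in zip(weights, x)]
        bias -= lr * delta

# Thresholded predictions after training
preds = [round(sigmoid(weights[0] * s["in"][0] + weights[1] * s["in"][1] + bias))
         for s in training_data]
print(preds)
```

Because each epoch sees all four samples, later updates refine rather than overwrite what was learned from earlier samples; the pop-and-discard loop in the question prevents exactly this.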