Imagine you show a neural network a picture of a lion 100 times, each time labeled "dangerous", so it learns that lions are dangerous.

Now imagine that, before that, you had shown it millions of images of lions, labeled "dangerous" and "not dangerous" in roughly equal numbers, so that the probability of a lion being dangerous is 50%.

But those last 100 examples have pushed the neural network into confidently classifying lions as "dangerous", effectively ignoring the previous million lessons.

Therefore, it seems there is a flaw in neural networks: they can change their mind too quickly based on recent evidence, especially when the earlier evidence was ambiguous (close to 50/50).

Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to decaying the learning rate as $1/T$, where $T$ is the number of trials?)
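To make the question concrete, here is a minimal sketch (my own toy example, not from any particular library) of the simplest possible "network": a single parameter estimating $P(\text{dangerous})$ via the online update $p \leftarrow p + \eta_t(y - p)$. With $\eta_t = 1/t$ this update provably tracks the exact running average of all labels, so 100 recent positives barely move an estimate built from a million balanced examples; with a constant $\eta$ it behaves like an exponential moving average that forgets old evidence, reproducing the problem described above. The data and learning rates here are made up for illustration.

```python
import random

def online_mean(labels, lr_schedule):
    """Online update p <- p + lr * (y - p): a one-parameter analogue
    of SGD on squared error for estimating P(dangerous)."""
    p = 0.5
    for t, y in enumerate(labels, start=1):
        p += lr_schedule(t) * (y - p)
    return p

random.seed(0)
# Toy history: a million roughly balanced labels, then 100 "dangerous" ones.
history = [int(random.random() < 0.5) for _ in range(1_000_000)] + [1] * 100

# 1/T schedule: the estimate equals the running average of ALL labels seen,
# so the final 100 positives have almost no effect.
p_avg = online_mean(history, lambda t: 1.0 / t)

# Constant learning rate: an exponential moving average that forgets old
# evidence; the last 100 positives dominate.
p_const = online_mean(history, lambda t: 0.1)

print(p_avg)    # stays near 0.5
print(p_const)  # pulled very close to 1.0
```

Of course, a real multi-layer network trained by SGD has no single parameter that cleanly corresponds to "evidence count", which is why a per-example $1/T$ schedule does not straightforwardly carry over; but this toy model captures the recency-bias phenomenon the question is about.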

Comments are not for extended discussion; this conversation has been moved to chat.

– nbro – 2020-03-06T01:08:06.657