LSTM regression bias increases as targets approach 0

I've built an LSTM model for time series forecasting. The results are not bad, with a mean normalized error of 7%. However, the normalized bias shows a clear pattern: the closer the target value is to 0, the higher the bias, as depicted in the figure below:

[Figure: scaled true vs. predicted values; the bias grows as the true value approaches 0]

NOTE 1: True and Pred values are scaled in the figure for clarity

NOTE 2: The bias is not constant; the un-normalized (raw) bias shows the same pattern

The model (a rough code sketch follows the list):

* 1 LSTM layer with 256 hidden units and no peepholes
* Dropout in the LSTM with keep_prob = 0.8
* One dense layer after the LSTM with 128 units and ReLU activation
* One dense layer after the first, with 1 unit (the predictor)
* Adam optimizer, with learning rate = 0.001
* Loss: mean squared error
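
For context, here is a minimal tf.keras sketch of an architecture like the one described above. The input shape is an assumption (it isn't given in the question), and Keras expresses dropout as a drop rate rather than a keep probability, so keep_prob = 0.8 becomes dropout = 0.2:

```python
import tensorflow as tf

TIMESTEPS, N_FEATURES = 30, 8  # assumed input shape; not given in the question

model = tf.keras.Sequential([
    # 256-unit LSTM; keep_prob = 0.8 corresponds to a drop rate of 0.2 in Keras
    tf.keras.layers.LSTM(256, dropout=0.2,
                         input_shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),  # single-unit regression head
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mse",
)
model.summary()
```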

Any hint?

ignatius

Posted 2018-11-29T16:19:01.577

Reputation: 1 478

The True values are not visible. Are they nonnegative? – Sean Owen – 2020-05-09T01:31:39.973

Answers

I found a solution to the problem: scaling both the features and the labels.

After applying min-max scaling to the features and the labels, the curves look like this:

[Figure: scaled true vs. predicted values after min-max scaling of features and labels]

The error still shows a pattern related to the target value, but the normalized error is now very low.
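
For reference, a minimal scikit-learn sketch of this kind of min-max scaling (the shapes and variable names are illustrative; the exact preprocessing code isn't shown in the answer):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30, 8))       # (samples, timesteps, features) -- assumed shape
y = rng.uniform(0, 500, size=(100, 1))  # targets far from the [0, 1] range

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()

# MinMaxScaler works on 2-D arrays, so flatten the time axis before fitting
n, t, f = X.shape
X_scaled = x_scaler.fit_transform(X.reshape(-1, f)).reshape(n, t, f)
y_scaled = y_scaler.fit_transform(y)

# After training on (X_scaled, y_scaled), predictions can be mapped back
# to the original units with:
#   y_pred = y_scaler.inverse_transform(model.predict(X_scaled))
```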

I think this is a nice example of the benefits of normalizing both features and labels.

ignatius

Posted 2018-11-29T16:19:01.577

Reputation: 1 478