I'm trying to use an LSTM on time-series data in order to generate future sequences that look like the original sequences in terms of values and direction of progression. My approach is:

- train the RNN to predict the next value from the 25 past values, then use the model to recursively generate future predictions by appending each predicted value to the sequence and shifting the input window forward.
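The recursive rollout described above can be sketched as follows. This is a minimal illustration, not the asker's actual code: `model_predict` is a hypothetical stand-in for a trained model's predict call, and the toy window-averaging "model" is chosen deliberately because it reproduces the failure mode described below, collapsing toward a static value after a few steps.

```python
import numpy as np

WINDOW = 25  # the 25-step input window from the question


def generate(model_predict, seed, n_steps):
    """Recursively extend `seed` by feeding each prediction back in.

    `model_predict` maps a (WINDOW,)-shaped array to the next scalar;
    in practice it would wrap a trained LSTM's predict call.
    """
    history = list(seed[-WINDOW:])
    preds = []
    for _ in range(n_steps):
        y = model_predict(np.array(history))
        preds.append(y)
        history = history[1:] + [y]  # shift window: drop oldest, append prediction
    return preds


# Toy stand-in "model" that just averages the window. Averaging is a
# contraction, so the rollout quickly flattens toward one equilibrium
# value -- the same behaviour the question reports for the LSTM.
seed = np.sin(np.linspace(0, 10, 100))
preds = generate(lambda w: float(w.mean()), seed, 50)
```

Once the window contains only the model's own outputs, any model that maps "typical" inputs to an average-like output will settle at a fixed point, which is one common explanation for the equilibrium behaviour.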

Playing with LSTM cells, I found that the model is not able to learn to generate sequences that look like the original data. It predicts the next value well, but then the recursive predictions converge to an 'equilibrium' or static value, which is the same regardless of the input sequence.

I'm wondering whether a stateful LSTM would help the model learn better from past values and predict something closer to what it has already seen. My goal is to generate sequences that look like something the model has seen before.

Please let me know if I'm missing something, or if you've faced a similar situation and found a good approach for generating time-series sequences that look like what the model learned in the past.

I don't have an answer but would also like to know this... My results also seem to converge to an equilibrium when predicting multiple steps into the future. Did you succeed in using a stateful LSTM for this? P.S. Can't comment due to new account w/ no rep – repoleved – 2018-03-31T23:48:09.050

I still have the issue with a stateful LSTM – Hastu – 2018-04-02T01:11:31.990

I'm in the same boat. I have read that Mixture Density Networks solve this problem... – duhaime – 2018-10-19T22:09:20.613