I am new to deep learning and LSTM (with keras). I am trying to solve a multi-step-ahead time series prediction problem. I have 3 time series: A, B and C, and I want to predict the values of C. I am training an LSTM that is fed the previous 3 time steps to predict the next 3 steps in the future. The input data looks like:

```
X = [[[A0, B0, C0],[A1, B1, C1],[A2, B2, C2]],[[ ...]]]
```

with dimensions `(1000, 3, 3)`. The output is:

```
y = [[C3, C4, C5],[C4, C5, C6],...]
```

with dimensions `(1000, 3)`.
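For context, the sliding windows described above can be built with plain NumPy. This is a sketch with toy series standing in for A, B and C; the window lengths follow the post, and the series contents are made up for illustration:

```python
import numpy as np

# Toy series standing in for A, B and C (1005 points each).
n = 1005
A, B, C = (np.arange(n, dtype=float) + k for k in (0.0, 0.1, 0.2))
series = np.stack([A, B, C], axis=1)  # shape (n, 3): one [A, B, C] row per step

lookback, ahead = 3, 3
X, y = [], []
for i in range(n - lookback - ahead + 1):
    X.append(series[i:i + lookback])                 # 3 past rows of [A, B, C]
    y.append(C[i + lookback:i + lookback + ahead])   # next 3 values of C
X, y = np.array(X), np.array(y)
# X.shape == (1000, 3, 3), y.shape == (1000, 3)
```

The first window is `[[A0, B0, C0], [A1, B1, C1], [A2, B2, C2]]` with target `[C3, C4, C5]`, matching the layout in the question.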

I am using a simple LSTM with 1 hidden layer (50 neurons). I set up an LSTM with keras as:

```
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

n_features = 3
neurons = 50
ahead = 3
model = Sequential()
model.add(LSTM(input_dim=n_features, output_dim=neurons))
model.add(Dropout(0.2))
model.add(Dense(input_dim=neurons, output_dim=ahead))
model.add(Activation('linear'))
model.compile(loss='mae', optimizer='adam')
model.fit(X, y, epochs=50)
```
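Note that this setup does direct multi-step forecasting: the Dense layer emits all 3 future values of C in one forward pass, so no forecasted value is fed back into the model. A minimal sketch of the same architecture, written with Keras 2+ names (`units` and `Input` instead of `input_dim`/`output_dim`) and dummy data:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Input, LSTM, Dense

# Same shape of network as in the question, in Keras 2+ syntax.
model = Sequential()
model.add(Input(shape=(3, 3)))  # 3 time steps x 3 features (A, B, C)
model.add(LSTM(50))
model.add(Dense(3))             # C3, C4 and C5, all at once
model.compile(loss='mae', optimizer='adam')

window = np.random.rand(1, 3, 3)         # one input window of 3 steps
pred = model.predict(window, verbose=0)  # shape (1, 3)
```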

This model works fine. Now, I'd like to predict the values of B as well (using the same input). So I tried to reshape the output the same way as the multi-feature input:

```
y = [[[B3, C3],[B4, C4],[B5, C5]],[[ ...]]]
```

so that it has dimensions `(1000, 3, 2)`. However, this gives me an error:

```
Error when checking target: expected activation_5 to have 2 dimensions,
but got array with shape (1000, 3, 2)
```

I guess the structure of the network needs to change. I tried modifying `model.add(Dense(input_dim=neurons, output_dim=ahead))` with no success. Should I reshape the `y` differently? Is the structure of the network wrong?

I am also working on a similar problem. Can you please advise me on how you prepared your data in this form: X = [[[A0, B0, C0],[A1, B1, C1],[A2, B2, C2]],[[ ...]]]? I also have a question: when you forecast the next 3 points, did you use the previous 3? And when you wanted to forecast the 5th, did you use the forecasted 4th? Thank you in advance. – osaozz – 2017-12-22T18:15:23.337

If you always feed in the same length and output the same length then you don’t really need any sort of RNN, and may get better results if you do not use them. – kbrose – 2018-07-02T13:43:58.270

Change the last Dense layer's output to 3; it may solve the problem. – Kaustabh Ganguly – 2018-07-02T08:01:40.070
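One way to make the network emit a `(samples, 3, 2)` target, sketched here with Keras 2+ names and dummy data (not confirmed by the original poster), is to have the LSTM return its full sequence and then apply a Dense layer to every time step via `TimeDistributed`:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Input, LSTM, Dropout, TimeDistributed, Dense

X = np.random.rand(1000, 3, 3)  # dummy stand-ins for the real windows
y = np.random.rand(1000, 3, 2)  # targets: [B, C] for each of the 3 steps

model = Sequential()
model.add(Input(shape=(3, 3)))
model.add(LSTM(50, return_sequences=True))  # keep one output per time step
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(2)))        # 2 targets (B and C) per step
model.compile(loss='mae', optimizer='adam')
model.fit(X, y, epochs=1, verbose=0)

pred = model.predict(X[:1], verbose=0)  # shape (1, 3, 2)
```

With `return_sequences=True` the LSTM output is 3-dimensional, so the shape mismatch in the error message goes away and the target can stay as `(1000, 3, 2)`.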