## How to evaluate the performance of a time series model?


I trained an LSTM network on a time series dataset. The predictions seem to follow the dataset; in fact, they are nearly a right-shifted copy of the real values. In my opinion this provides no real predictive capability, since merely shifting a signal in time yields a low RMSE without being useful.

How to properly evaluate a time series model?
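One quick sanity check for the "right-shifted predictions" symptom described above is to compare the model's RMSE against a naive persistence baseline that simply repeats the last observation. A minimal sketch, assuming a toy sine series and illustrative helper names (`rmse`, `persistence_baseline` are not from the question):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def persistence_baseline(y):
    """Naive forecast: predict each value with the previous observation."""
    y = np.asarray(y, float)
    return y[:-1], y[1:]  # (predictions, targets)

# Toy series: a slow sine wave, the kind of smooth signal where
# a lagged copy of the data already scores a low RMSE.
t = np.arange(200)
series = np.sin(0.1 * t)

preds, targets = persistence_baseline(series)
baseline_rmse = rmse(targets, preds)
print(baseline_rmse)
```

If a trained model's test RMSE is no better than `baseline_rmse`, it has effectively learned the shift and nothing more.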

Your testPred plot doesn't start at zero. Are you sure you're plotting it right? – Mohammad Athar – 2017-03-02T16:11:41.743

@MohammadAthar, testPred is the forecast. There needs to be some amount of data before making a prediction, which is why testPred does not start at 0. – Hobbes – 2017-05-01T15:22:17.397

@Horacet I'm not sure why you're singling me out for this info, since I just asked if the data are plotted right. – Mohammad Athar – 2017-05-01T20:00:51.320

@MohammadAthar I meant to address the author of the post. Sorry. – horaceT – 2017-05-01T20:09:25.220

@Mustafa You have to provide a lot more details about your model and data before anyone could help you. First, is the response just a univariate time series? What are the predictors that got fed into the LSTM? Is it just $y_t$ lagged by a few time steps? What's the LSTM architecture? – horaceT – 2017-05-01T20:11:25.827


The best summary of how to evaluate time series forecasts is probably the detailed treatment on Rob Hyndman's site. I typically use the mean absolute percentage error (MAPE), which is baked into Keras. However, in a different setting I found that the MAPE prevents the neural network from converging when combined with the Adam optimizer; I had much better success with the root mean square error (RMSE). Since RMSE has not worked well for you, maybe you could use the symmetric MAPE (sMAPE).
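For concreteness, the three error measures mentioned above can be computed directly with NumPy. This is a sketch using one common definition of sMAPE (several variants exist), and the toy arrays are illustrative only:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (undefined when y_true contains zeros)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def smape(y_true, y_pred):
    """Symmetric MAPE: bounded, defined whenever |y_true| + |y_pred| > 0."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(2 * np.abs(y_pred - y_true)
                         / (np.abs(y_true) + np.abs(y_pred))) * 100)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative values, not real forecasts.
y_true = [100.0, 110.0, 120.0, 130.0]
y_pred = [102.0, 108.0, 123.0, 126.0]
print(mape(y_true, y_pred), smape(y_true, y_pred), rmse(y_true, y_pred))
```

Note that MAPE blows up near zero-valued observations, which is one practical reason to prefer sMAPE or RMSE on series that cross zero.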

My problem with examples of time series forecasting is that they never seem to actually forecast. For example, on Rob Hyndman's site there is the example of forecasting beer production, and the forecasted results are essentially just the last 'season'. This could be easily predicted by eye. What advantage is there to modelling something that is very clear from looking at the data itself? I'm interested in this generally, not saying the answer is bad. – Hobbes – 2017-05-01T15:44:33.390

@Stereo RH has done a lot of great work on time series forecasting, but when it comes to forecasting with state-of-the-art deep learning models, such as LSTM recurrent neural nets, his techniques and approaches aren't very relevant. Whether you use MAPE, MAD, RMSE, or MSE depends entirely on how well-behaved the individual data points are. There is absolutely no general rule here. – horaceT – 2017-05-01T20:05:09.397

@Hobbes I up-voted your comment. Most time series models have little forecasting power. They just spit out either 1) the last value, or 2) the mean of the time points corresponding to the historic periodicity. – horaceT – 2017-05-01T20:08:01.693

@horaceT, Thanks for commenting. I've had a few time series problems so far that I have really struggled with and haven't found reliable solutions yet. I'm always interested in time series questions though. – Hobbes – 2017-05-01T21:03:28.857

@horaceT I fully agree with you that the most useful error measure depends very much on the data points. I have seen cases where my team decided to penalize predictions with the wrong sign more heavily. This just shows that it is essential to evaluate case by case what a 'good' measure is. – Stereo – 2017-05-02T08:54:51.900


As JQ Veenstra has pointed out, the appropriate method of evaluation depends a lot on the particular type of time series model that you are estimating. Have a look at the following points.

Usually the residuals of your model should be uncorrelated, and you can test for that (e.g. with a Ljung-Box test on the residuals). You can also test the forecasting ability of your model by starting with a subset of the data, recursively re-estimating the model as you extend the sample, and examining the forecast errors from each re-estimated model (rolling-origin evaluation).

For general guidance on forecasting I would recommend Granger, Clive W. J. and Paul Newbold (1986), *Forecasting Economic Time Series*, Academic Press, which is a bit dated but covers many aspects of forecast evaluation well. Elliott, G. and Timmermann, A. (2016), *Economic Forecasting*, Princeton University Press is perhaps a little mathematical but provides comprehensive coverage of forecasting. The references to specific areas in it may give you more guidance on the evaluation of specific forecasting methods.
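The recursive re-estimation scheme described above can be sketched as follows. The names `rolling_origin_errors` and `lag1_autocorr` are illustrative, the "model" is a stand-in that forecasts the historical mean, and the data is seeded white noise; a real evaluation would re-fit your actual model (e.g. the LSTM) at each step:

```python
import numpy as np

def rolling_origin_errors(series, fit_predict, min_train=20):
    """Re-fit on series[:t] and record the one-step-ahead error
    at series[t], for t = min_train .. len(series) - 1."""
    errors = []
    for t in range(min_train, len(series)):
        forecast = fit_predict(series[:t])
        errors.append(series[t] - forecast)
    return np.asarray(errors)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of the forecast errors; should be
    roughly zero if the model captured the serial structure."""
    x = np.asarray(x, float) - np.mean(x)
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x ** 2))

rng = np.random.default_rng(0)
series = rng.normal(size=200)  # white noise: the mean is the best forecast

# Stand-in model: forecast with the historical mean of the training window.
errs = rolling_origin_errors(series, fit_predict=np.mean)
err_rmse = float(np.sqrt(np.mean(errs ** 2)))
rho1 = lag1_autocorr(errs)
print(err_rmse, rho1)
```

On white noise the out-of-sample RMSE should sit near the noise standard deviation and the error autocorrelation near zero; a large `rho1` would indicate structure the model failed to capture.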