Neural Network Timeseries Modeling with Predictor Variables

Many authors have shown the effectiveness of neural networks for modeling time series data, and have described the transformations required and the limitations of such an approach. R's forecast package even implements one such approach in the nnetar function.

Based on my reading, all of these approaches model a single outcome variable from its past observations, but I'm having trouble finding a description of a neural-network-based approach that also incorporates independent predictor variables (a sort of ARIMAX analogue for neural networks). I've found references to nonlinear autoregressive exogenous (NARX) models, which seem like they should be what I'm looking for, but everything I've been able to find focuses more on using the approach for multi-step-ahead prediction of a univariate series.

Can anyone point me in the right direction on this? For bonus points, does anyone know of an implementation of what I'm looking for in R?

Kyle.

Posted 2016-07-08T18:36:51.597

Reputation: 1 453

The nnetar function allows independent predictor variables. Read the help file. – Rob Hyndman – 2016-07-11T08:20:55.190

@RobHyndman: Thanks! I must have missed that. I will revisit it! – Kyle. – 2016-07-11T11:49:34.967

@RobHyndman: It seems I was using v. 6.2, but this feature is in newer versions. Thanks for the direction, I'm a big fan of your work! – Kyle. – 2016-07-11T17:52:32.610
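
For later readers, here is a minimal sketch of what the comments point to: nnetar with external regressors passed through its xreg argument. The data below are simulated purely for illustration, and the xreg argument assumes a reasonably recent version of the forecast package.

    library(forecast)

    # Simulated data: a monthly target series plus two hypothetical predictors.
    set.seed(1)
    n     <- 120
    ads   <- rnorm(n)
    price <- rnorm(n)
    y     <- ts(5 + 2 * ads - price + arima.sim(list(ar = 0.6), n), frequency = 12)

    # Fit a neural network autoregression with external regressors.
    xreg_train <- cbind(ads = ads, price = price)
    fit <- nnetar(y, xreg = xreg_train)

    # Forecasting requires future values of the regressors.
    xreg_future <- cbind(ads = rnorm(12), price = rnorm(12))
    fcast <- forecast(fit, xreg = xreg_future, h = 12)
    plot(fcast)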

Answers

I think this thesis https://dspace.mit.edu/bitstream/handle/1721.1/99565/924315586-MIT.pdf?sequence=1 is a good starting point for building a time series forecasting model with so-called leading indicators and different machine learning models. Basically, you just need to build training sets where the inputs are past values of the variable you wish to predict plus the additional indicators (i.e. past_value_t-6, past_value_t-5, ..., past_value_t-1, additional_variable_1, additional_variable_2, ..., additional_variable_n) and the output is either a single future value (t+1) or multiple values (t+1, t+2, ..., t+n), in case you wish to predict, for example, the next twelve months' sales.
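
As an illustration, here is a small sketch of that direct setup in R (all data simulated and names hypothetical): six lags of the target plus one additional indicator form the inputs, and a single-hidden-layer network from the nnet package is fit to predict one step ahead.

    library(nnet)

    set.seed(1)
    n   <- 200
    ind <- rnorm(n)                              # one additional (leading) indicator
    y   <- as.numeric(arima.sim(list(ar = 0.7), n)) + 0.5 * ind

    p    <- 6                                    # number of lags of the target
    lags <- embed(y, p + 1)                      # column 1 is y_t, columns 2..7 are its lags
    train <- data.frame(
      target    = lags[, 1],
      lags[, -1],                                # past_value_t-1 ... past_value_t-6
      indicator = ind[(p + 1):n]                 # indicator aligned with the same rows
    )

    # One-step-ahead direct model; for multi-step output, fit one model per horizon.
    fit <- nnet(target ~ ., data = train, size = 5, linout = TRUE, trace = FALSE)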

It is also possible to use a recursive strategy to generate future values of the variable you are predicting. In this case you generate the next value from your previously predicted values. However, at least in my experience, the direct strategy described above works considerably better.
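
And a corresponding sketch of the recursive strategy (again simulated data and hypothetical names): a one-step model is fit on lagged values only, and the forecast is rolled forward by feeding each prediction back in as an input.

    library(nnet)

    set.seed(1)
    y <- as.numeric(arima.sim(list(ar = 0.7), 200))
    p <- 6
    lags  <- embed(y, p + 1)
    train <- data.frame(target = lags[, 1], lags[, -1])
    fit <- nnet(target ~ ., data = train, size = 5, linout = TRUE, trace = FALSE)

    h       <- 12                                # forecast horizon
    history <- tail(y, p)                        # most recent p observations
    preds   <- numeric(h)
    for (i in seq_len(h)) {
      newx <- as.data.frame(t(rev(tail(history, p))))   # most recent value first
      names(newx) <- names(train)[-1]
      preds[i] <- predict(fit, newdata = newx)
      history  <- c(history, preds[i])           # recursive step: prediction becomes an input
    }
    preds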

Juha

Posted 2016-07-08T18:36:51.597

Reputation: 21

I haven't worked on many time series problems, so take my answer with a grain of salt. Having said that, it's not clear why you can't just treat all predictors the same; in other words, treat previous values of the univariate target series and the independent predictors as regular predictors. I don't see what is special about the element of time that makes it especially difficult to learn like any other kind of ordinary predictor. If time is truly important, then the neural network will demonstrate that importance by weighting your lagged variables accordingly during training. This sort of reasoning applies to any type of model, not just neural networks; for example, the same could be said of gradient boosting.
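
To make that framing concrete, here is a sketch under this view (simulated data, hypothetical names): lag the target, bind the lags to the independent predictors, and hand everything to an ordinary learner, in this case gradient boosting via the gbm package.

    library(gbm)

    set.seed(1)
    n  <- 300
    x1 <- rnorm(n)
    x2 <- rnorm(n)
    y  <- as.numeric(arima.sim(list(ar = 0.5), n)) + x1 - 0.5 * x2

    p    <- 3
    lags <- embed(y, p + 1)
    train <- data.frame(
      target = lags[, 1],
      lags[, -1],                      # lagged values of the target, treated as ordinary features
      x1 = x1[(p + 1):n],              # independent predictors on the same rows
      x2 = x2[(p + 1):n]
    )

    fit <- gbm(target ~ ., data = train, distribution = "gaussian",
               n.trees = 500, interaction.depth = 3, shrinkage = 0.05)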

Ryan Zotti

Posted 2016-07-08T18:36:51.597

Reputation: 3 849