I'm currently preparing a paper in which I will discuss predicting the application a user is most likely to open at a given time.
The system will collect usage data and learn to predict which application is most likely to be run, based on the patterns it has learned.
I'm collecting the following features for any given program run on the machine:
- Time of day (To predict if the program is being run)
- Day of week (To learn when it is used)
- Most recently opened program (To see connections between programs)
- CPU load (Read below)
- GPU load (Read below)
- Memory usage (To connect programs to heavy usage)
- Screen use (To see if the program has connections with an active screen)
- Last mouse movement (To see if the program is actively used when it is run)
- Last keyboard usage (Same as above)
- App running (Whether or not the application is running)
These features should capture some seasonality and should therefore yield interesting results.
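As a side note on the normalization mentioned below: time of day and day of week are cyclical, so a plain linear scaling puts 23:00 maximally far from 00:00. One common fix is a sin/cos encoding. The sketch below is an assumption about how the feature vector could look (the helper name and the choice of which loads are pre-scaled are hypothetical, not from the question):

```python
import math

def encode_features(hour, weekday, cpu_load, mem_usage):
    """Hypothetical encoder for a subset of the features above.

    Hour and weekday get a sin/cos encoding so that 23:00 ends up close
    to 00:00 instead of at the opposite end of the scale; cpu_load and
    mem_usage are assumed to be pre-scaled to [0, 1].
    """
    return [
        math.sin(2 * math.pi * hour / 24),
        math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * weekday / 7),
        math.cos(2 * math.pi * weekday / 7),
        cpu_load,   # assumed already in [0, 1]
        mem_usage,  # assumed already in [0, 1]
    ]

vec = encode_features(hour=23, weekday=6, cpu_load=0.4, mem_usage=0.7)
```

With this encoding, the Euclidean distance between 23:00 and 00:00 is small, which matches the intuition that they are adjacent times of day.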
Once the data has been collected, an ANN will be trained on the normalized inputs, with each application as a separate output representing the probability (0-1) of that program being run.
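To make the output layer concrete: one probability per application suggests independent sigmoid outputs (a multi-label setup) rather than a softmax, since more than one app could plausibly be launched soon and the outputs need not sum to 1. A minimal forward-pass sketch, with randomly initialized weights standing in for a trained network and all sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden, n_apps = 6, 16, 5  # hypothetical sizes

# Random weights stand in for whatever training would produce.
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_apps))
b2 = np.zeros(n_apps)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """One forward pass: a probability in (0, 1) per application.

    Sigmoid (not softmax) outputs: each app gets its own independent
    probability, so the vector does not have to sum to 1.
    """
    h = np.tanh(x @ W1 + b1)       # hidden layer
    return sigmoid(h @ W2 + b2)    # per-app output layer

probs = predict(rng.normal(size=n_features))
```

Training such a network would typically use a per-output binary cross-entropy loss, again because each application is treated as its own yes/no target.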
Is the proposed method adequate for such a task? I've read about using Bayesian classifiers, but that doesn't seem to be what I'm after.
I've also read that this could be framed as a "time series" task - something I couldn't quite wrap my head around, though I assume the proposed method exploits it implicitly. Is this a time series? Does it have to be treated as one?
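For what it's worth, the "most recently opened program" feature already does the simplest version of a time-series treatment: it turns the launch sequence into ordinary classification with lag features. A sketch of that framing, on entirely made-up log data:

```python
# A launch log as an ordered sequence of app names (hypothetical data).
log = ["browser", "editor", "terminal", "editor", "browser", "editor"]

def make_examples(log, window=1):
    """Slide a window over the sequence to build supervised pairs:
    (the previous `window` apps) -> (the next app launched).

    This is one simple way the 'time series' view reduces to plain
    classification with lag features.
    """
    return [(tuple(log[i - window:i]), log[i]) for i in range(window, len(log))]

examples = make_examples(log, window=2)
# examples[0] is (("browser", "editor"), "terminal")
```

A window of 1 corresponds exactly to the "most recently opened program" feature; larger windows (or a recurrent model) would capture longer-range order in the sequence.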
Should each application be present as a separate output, or should the current application be fed in as an input, with the output giving the probability of that given app being run?