I know the concept of a neural network, and I followed the Machine Learning course by Andrew Ng on Coursera, so I have also coded some simple ones. However, I'm missing all the newer tricks used to train deep networks and prevent overfitting, for example:
- using ReLU instead of tanh neurons
- using dropout
- more advanced learning methods than just vanilla stochastic gradient descent
etc. I would like to follow a MOOC that teaches me how to use Keras for Deep Learning (I like Keras very much because IMO it is much easier to understand than other packages, but I'm open to suggestions). I would also be content with a book, but I'd really prefer a MOOC. It doesn't have to be free. Can you recommend one? The application is Data Science, but generic Deep Learning would do.
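To make the list above concrete, here is a minimal NumPy sketch of the first two tricks (ReLU activation and inverted dropout) as I understand them — just an illustration of what I mean, not code from any course, and the array values are made up:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x). Unlike tanh, it doesn't saturate for large
    # positive inputs, which helps gradients flow in deep networks.
    return np.maximum(0.0, x)

def dropout(x, rate, rng, training=True):
    # Inverted dropout: during training, randomly zero a fraction `rate`
    # of activations and rescale the survivors by 1/(1-rate) so the
    # expected activation is unchanged; at inference, do nothing.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
h = relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))  # negatives clipped to 0
print(h)
print(dropout(h, rate=0.5, rng=rng))  # some entries zeroed, rest doubled
```

In Keras these would just be `Activation('relu')` and a `Dropout(0.5)` layer, which is exactly why I'd like a course that walks through such building blocks.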
EDIT: to provide more context to the question, my main applications would be Internet of Things Analytics, i.e., applications on cloud platforms which collect real-time, streaming sensor data from industrial machines and make it possible to estimate their actual performance, predict the probability of a failure and the time until it happens, detect anomalies, etc. I don't need to develop the cloud platform itself: I just need to develop the "core" Analytics. Think of it as just applying Deep Learning to Time Series or to Classification problems. However, methods which can easily be retrained when new data arrive, without having to go through the full dataset again, would be preferred.
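To clarify what I mean by retraining without revisiting the full dataset: something like online mini-batch SGD, where each new batch of streaming data updates the model once and is then discarded. A toy NumPy sketch (the linear model, data, and learning rate are all made up, purely for illustration):

```python
import numpy as np

class OnlineLinearModel:
    """Toy linear regressor trained one mini-batch at a time (online SGD),
    so newly arriving sensor data can be incorporated without
    re-reading the historical dataset."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, X, y):
        # One SGD step on this mini-batch only (squared-error loss).
        err = X @ self.w + self.b - y
        self.w -= self.lr * X.T @ err / len(y)
        self.b -= self.lr * err.mean()

    def predict(self, X):
        return X @ self.w + self.b

# Simulated sensor stream: batches arrive over time, each is seen once.
rng = np.random.default_rng(42)
true_w = np.array([1.0, -2.0, 0.5])  # hypothetical "true" sensor relation
model = OnlineLinearModel(n_features=3)
for _ in range(200):
    X = rng.normal(size=(32, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=32)
    model.partial_fit(X, y)  # update on the new batch only
```

Keras models support this style of training too (e.g. `train_on_batch`), which is part of why I'd like a course built around it.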