Here, I read the following:
The first key to understanding is that HTM relies on data that streams over time (...) By contrast, conventional deep learning uses static data and is therefore time invariant. Even RNN/LSTMs that process speech, which is time based, actually do so on static datasets.
On the other hand, elsewhere I read the following:
We can assert that our LSTM Autoencoder is a good weapon to extract important unseen features from time series.
The main purpose is to detect anomalies emerging through time before they happen.
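For context, the kind of LSTM Autoencoder the second quote refers to can be sketched roughly as below. This is a minimal, untrained PyTorch illustration of the general technique (encode a window of the series into a latent vector, decode it back, and treat high reconstruction error as an anomaly signal), not the quoted author's actual model; all names and dimensions are my own assumptions.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Compress a time-series window into a latent vector, then reconstruct it.
    Windows that reconstruct poorly are candidate anomalies."""
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, n_features, batch_first=True)

    def forward(self, x):
        # x has shape (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)                   # final hidden state per window
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent at every step
        out, _ = self.decoder(z)                      # reconstruct the window
        return out

# Toy usage (illustrative only; the model is untrained):
model = LSTMAutoencoder(n_features=1)
windows = torch.randn(4, 20, 1)                       # 4 windows of 20 time steps
recon = model(windows)
# Per-window mean squared reconstruction error = anomaly score
scores = ((recon - windows) ** 2).mean(dim=(1, 2))
print(scores.shape)  # torch.Size([4])
```

In practice the model is trained on "normal" windows only, so that unusual patterns produce high reconstruction error, which is how it relates to the HTM claim about streaming data: the LSTM here still learns from a fixed, pre-windowed dataset.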