I am collecting a large number of generated numeric features for the task of unsupervised anomaly detection.
I can assume that all training data is considered normal.
I expect some of the generated features to be characterised by a low standard deviation; for example, some features might always be 0 in the training examples. In contrast, I expect some of these features to deviate in anomalous instances.
As I have a lot of features, I would like to perform feature reduction/selection. However, simple feature selection methods would completely remove the non-deviating features, making the subsequent detection worse.
I was thinking about using stacked auto-encoders for the feature reduction, so that whenever a feature deviates far from its training standard deviation, the deviation propagates to all of the resulting features, causing a noticeable anomaly.
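To make the idea concrete, here is a minimal sketch of what I have in mind, using scikit-learn's `MLPRegressor` as a stand-in auto-encoder (the hidden layer size, data shapes, and the "always zero" features are illustrative assumptions, not my real setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# "Normal" training data: 20 features, the first 5 nearly constant
# (always 0), mimicking the low-standard-deviation features I described.
n, d = 500, 20
X_train = rng.normal(0, 1, size=(n, d))
X_train[:, :5] = 0.0

# Auto-encoder: train the network to reconstruct its own input,
# with a narrow hidden layer acting as the reduced representation.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def anomaly_score(x):
    """Per-sample reconstruction error; large values flag anomalies."""
    x = np.atleast_2d(x)
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# An anomaly that deviates only on the "always zero" features.
x_anomaly = np.zeros(d)
x_anomaly[:5] = 5.0
x_normal = np.zeros(d)

score_normal = anomaly_score(x_normal)[0]
score_anomaly = anomaly_score(x_anomaly)[0]
```

The hope is that `score_anomaly` comes out much larger than `score_normal`, because the network has never seen non-zero values on those features and cannot reconstruct them.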
Will this technique work? If not, why not, and what other technique could work here?
Also, if it does work, and given that I am planning to use deep auto-encoders for the anomaly detection itself as well, is the separate feature-reduction step redundant?