I understand why deep generative models like DBNs (deep belief nets) or DBMs (deep Boltzmann machines) are able to capture underlying structure in data and use it for various tasks (classification, regression, multimodal representations, etc.).
But for classification tasks like the ones in "Learning Deep Generative Models", I was wondering why the network is fine-tuned on labeled data like a feed-forward network, and why only the last hidden layer is used for classification.
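To make sure I understand the setup correctly, here is a minimal sketch of what I mean by the fine-tuning phase (PyTorch; the layer sizes are placeholders, and in the real procedure each linear layer would be initialized with the weights from greedy layer-wise RBM pretraining rather than randomly as here):

    import torch
    import torch.nn as nn

    # DBN unrolled into a feed-forward net (layer sizes are placeholders).
    net = nn.Sequential(
        nn.Linear(784, 500), nn.Sigmoid(),   # hidden layer 1 (weights from RBM 1)
        nn.Linear(500, 500), nn.Sigmoid(),   # hidden layer 2 (weights from RBM 2)
        nn.Linear(500, 2000), nn.Sigmoid(),  # hidden layer 3 (weights from RBM 3)
        nn.Linear(2000, 10),                 # new output layer, fed by the LAST hidden layer only
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(net.parameters(), lr=0.1)

    def fine_tune_step(x, y):
        """One supervised step: plain backprop, so ALL pretrained weights get updated."""
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
        return loss.item()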
Since fine-tuning updates the weights for a classification objective (not the same goal as the generative task), could the network lose some of its ability to generate proper data (and thus to be reused for different classification tasks)?
Instead of using only the last layer, would it be possible to use a subset of the hidden units from different layers to perform the classification task, without modifying the weights? For example, take a subset of hidden units from the last two layers (a subset of abstract representations) and feed them to a simple classifier like an SVM? Something like the sketch below is what I have in mind.
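Concretely (building on the `net` sketched above; the `hidden_features` helper, the subset size `n_units`, and the random placeholder data are all made up for illustration):

    import numpy as np
    from sklearn.svm import SVC

    @torch.no_grad()
    def hidden_features(model, x, n_units=200):
        """Collect the sigmoid activations of the last two hidden layers,
        keep a fixed subset of units from each, and concatenate them.
        The weights of `model` are never updated."""
        acts, h = [], x
        for layer in model[:-1]:              # stop before the output layer
            h = layer(h)
            if isinstance(layer, nn.Sigmoid):
                acts.append(h)
        h2, h3 = acts[-2], acts[-1]           # last two hidden layers
        return torch.cat([h2[:, :n_units], h3[:, :n_units]], dim=1).numpy()

    # Placeholder data just to make the sketch self-contained
    X = torch.rand(200, 784)
    y = np.random.randint(0, 10, size=200)

    svm = SVC(kernel="rbf")
    svm.fit(hidden_features(net, X), y)

This way the generative weights stay frozen, and the SVM does all the task-specific work on top of the fixed representations.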
Thank you in advance!