## How to use neural network's hidden layer output for feature engineering?


I was wondering how we can use a trained neural network model's weights or hidden-layer outputs from a simple classification problem for feature engineering, and then run a boosting algorithm on the newly engineered features.

Suppose we have 100 rows with 5 features (a 100x5 matrix).

data:

    X                      Y
    x1  x2  x3  x4  x5     y1  y2
    0   1   2   3   4      0   1
    3   2   5   6   4      1   0


Network: 2 layers, an input layer and a softmax output layer, compiled using cross-entropy loss.

Can we take the trained weights or hidden-layer output of the above network, use them for feature engineering on the original dataset, and then apply some boosting algorithm to the modified dataset? Would that increase accuracy?

Are you sure you mean weights? Your example network would have 12 weights in the first layer (connecting input features to the hidden layer), and 3 in the second layer (connecting hidden layer to output) - including bias terms. I think you mean activations (i.e. the outputs of the 2 neurons in the hidden layer). Could you also clarify how your network has been trained? – Neil Slater – 2017-10-23T12:00:27.963

You normally pass a lot of low-level features to the NN (e.g. raw pixel values) and have the NN learn its own high-level features instead of hardcoding high-level features (e.g. edge-detection filters). – CodesInChaos – 2017-10-23T12:31:15.090

@NeilSlater - you are right, my mistake. It would be 12 weights including bias. So, the idea is to train a neural network on simple numeric data (I'll update the question) and predict probabilities using softmax. The idea is to use the neural network just for feature engineering. – CYAN CEVI – 2017-10-23T13:38:17.503

@CodesInChaos - I'm not talking about image classification, just simple numeric data. I'm asking whether we can utilise those learned high-level features that you mentioned to perform feature engineering on the original dataset and then apply some other classification algorithm. – CYAN CEVI – 2017-10-23T13:41:48.123

So you want to take output of NN (the predicted values of $y_1$ and $y_2$) on some examples where you have $x_1 ... x_5$ defined, and feed it into another model per example? Or maybe the output from one of the middle layers? Or do you really want to use the NN weights as a feature? – Neil Slater – 2017-10-23T13:41:50.263

Not the predicted values, that would be like model stacking. I'm asking whether the hidden features (weights) learned by the neural network during training can be extracted, and those high-level features then used to change the original features (feature engineering), like creating a new feature x6 = x1/x2, but using those high-level features from the trained network. – CYAN CEVI – 2017-10-23T13:44:58.933

The high level features are not "weights". Do you mean you would like to take the output from a hidden layer and use it as a feature per example? That is possible, and actually something that is done a lot in autoencoders and other NN architectures. – Neil Slater – 2017-10-23T13:45:32.690

Oh, so what would they be? Can you point me in the direction of a write-up that elaborates a bit on this? I was under the impression that all the learning is represented by weights. – CYAN CEVI – 2017-10-23T13:47:27.163

Yes, that's close. Does any high-level library like Keras allow us to extract those, or do we have to write a custom network using TensorFlow etc. in order to achieve that? – CYAN CEVI – 2017-10-23T13:49:57.507

If your main network is in TensorFlow or Theano, then it is really easy. Keras doesn't support it directly from the base API, but this might help: https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer - if you clarify that your question is about the hidden-layer activations and not about using weights as features, you may get some other answers. – Neil Slater – 2017-10-23T13:56:13.623

I'll do that, thanks for the help! Highly appreciated. Have you tried something like this: using the hidden-layer output, then performing gradient boosting or some traditional model on it? – CYAN CEVI – 2017-10-23T14:01:38.960



TL;DR: Yes, you can (if I understand correctly).

Longer version: in fact, this is what many popular algorithms like Word2Vec and autoencoders do (with respect to hidden-layer outputs).

Word2Vec: given an input word ('chicken'), the model tries to predict a neighbouring word ('wings'). In the process of trying to predict the correct neighbour, the model learns a hidden-layer representation of the word that helps it achieve this task.

Finally, we just remove the last layer and use the hidden-layer representation of the word as its $N$-dimensional vector.

So basically, we have feature-engineered the word vectors.
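The lookup above can be sketched in NumPy. With a one-hot input, multiplying by the input-to-hidden weight matrix simply selects one row of it, and that row is the word's vector. (The vocabulary and weight values here are toy, made-up stand-ins for a trained model's parameters.)

```python
import numpy as np

vocab = ["chicken", "wings", "pizza"]          # toy vocabulary
W = np.array([[0.9, 0.1, 0.4, 0.2],            # hidden weights: vocab_size x N
              [0.8, 0.2, 0.5, 0.1],            # (values are made up, not trained)
              [0.1, 0.9, 0.0, 0.7]])

# One-hot encode "chicken" and push it through the hidden layer.
one_hot = np.zeros(len(vocab))
one_hot[vocab.index("chicken")] = 1.0

# Because the input is one-hot, x @ W just selects the matching row of W:
# that row is the word's N-dimensional embedding.
embedding = one_hot @ W
print(embedding)  # identical to W[0]
```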

For autoencoders: the network takes $X$ as input and tries to predict $X$ again, in the process learning a latent representation of the input signal. The hidden representation in layer $L2$ can then be used in other tasks. (Note: here the target $\hat{X}$ is the same as the input $X$.)
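A minimal NumPy sketch of that encoder/decoder shape, using random stand-in weights rather than trained ones (a real autoencoder would fit `W_enc` and `W_dec` by minimising the reconstruction error between `X` and `X_hat`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear autoencoder: 5 -> 2 -> 5. The weights below are random
# placeholders for trained parameters, just to show the shapes involved.
W_enc = rng.normal(size=(5, 2))
W_dec = rng.normal(size=(2, 5))

X = rng.normal(size=(100, 5))   # 100 rows, 5 features, as in the question

latent = X @ W_enc              # layer L2: the learned latent representation
X_hat = latent @ W_dec          # reconstruction of X

# The latent matrix is what you would hand to a downstream model.
print(latent.shape)   # (100, 2)
print(X_hat.shape)    # (100, 5)
```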

(In fact, you can use features learned by a CNN, feed them into an SVM, and get good results.)

Relevant to you: you can train your model on the given data, chop off the final prediction layer, and use the output of the intermediate layers as features. I believe this should work because it works for the many tasks described above.
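A hedged sketch of that pipeline in NumPy, with random stand-ins for the trained hidden-layer parameters (`W1`, `b1` are assumed names, and the ReLU activation is an assumption too): compute the hidden activations, append them to the original features, and hand the augmented matrix to any boosting library.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))        # the original 100 x 5 data

# Stand-ins for trained hidden-layer parameters (5 inputs -> 2 units).
# In practice these come out of the fitted network.
W1 = rng.normal(size=(5, 2))
b1 = rng.normal(size=2)

# Hidden-layer activations: the forward pass stopped before the softmax.
hidden = np.maximum(0.0, X @ W1 + b1)   # ReLU assumed

# Feature engineering: append the learned features to the raw ones,
# giving a 100 x 7 matrix for the boosting model.
X_boost = np.hstack([X, hidden])
print(X_boost.shape)  # (100, 7)

# X_boost (with the original labels) can now be fed to any boosting
# implementation, e.g. scikit-learn's GradientBoostingClassifier or XGBoost.
```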

Source:

1. Learn Word2Vec by implementing it in Word2Vec: an article by me explaining word2vec. (Shameless self-advertising, but I feel the article is good and relevant.)

2. Andrew Ng's unsupervised feature learning website