Could anyone help me understand what autoencoders are?


We expect the outputs to be equal to the inputs, so why do we need to do that at all? It doesn't make any sense to me.

I found an interpretation saying that an autoencoder learns how to reconstruct the input data. Does that mean we could pick just some of the pixels from the original picture and then reconstruct the whole picture? If so, it still makes no sense to me, because the reconstructing part of the model runs from the hidden layer to the output layer; we cannot simply feed the selected pixels into the hidden layer, since the inputs to the hidden layer are combinations of all the raw data from the input layer.

Thanks in advance.

Jakoer

Posted 2016-10-27T13:54:29.657

Reputation: 23

Question was closed 2016-10-28T14:03:37.180

Welcome to Data Science SE! Check Wikipedia. This question is very broad and likely to get closed.

– Stereo – 2016-10-28T08:00:22.767

Answers

4

Autoencoders are a neural network solution to the problem of dimensionality reduction.

The point of dimensionality reduction is to find a lower-dimensional representation of your data. For example, if your data includes people's height, weight, trouser leg measurement and shoe size, we'd expect there to be some underlying size dimension which would capture much of the variance of these variables. Principal Component Analysis (PCA), if you're familiar with it, is another example of a dimensionality reduction technique.

Autoencoders attempt to capture a lower-dimensional representation of their data by having a hidden "bottleneck" layer which is much smaller than the dimensionality of the data. The idea is to train a neural network which throws away as much of its dimensionality as possible and can still reconstruct the original data.
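
To make that concrete, here is a minimal sketch of such a network in PyTorch (the framework, the layer sizes, and the 784-dimensional input, e.g. flattened MNIST digits, are my own illustrative choices, not something from the question):

```python
import torch
import torch.nn as nn

# The encoder squeezes 784-dimensional inputs down to a 32-dimensional
# bottleneck; the decoder tries to rebuild the original 784 values from it.
encoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),                 # the "bottleneck" layer
)
decoder = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),  # outputs in [0, 1], like pixel values
)
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)  # stand-in batch; real data would go here
for step in range(100):
    optimizer.zero_grad()
    reconstruction = autoencoder(x)
    loss = loss_fn(reconstruction, x)  # the target is the input itself
    loss.backward()
    optimizer.step()
```

Note that the loss compares the output against the input itself: that is the whole trick the question is asking about, and the small bottleneck is what stops the network from learning a trivial identity copy.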

Once you have an autoencoder which performs well, by observing the activations at the bottleneck layer, it's possible to see how an individual example scores in each of the reduced dimensions. This may allow us to begin to make sense of what each of the dimensions represents. One can then use these activations to score new examples on this set of reduced dimensions.
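
Continuing the sketch above (reusing its `encoder` and batch `x`), reading off those scores is just a forward pass through the encoder half:

```python
# Score examples in the reduced dimensions: run only the encoder half.
with torch.no_grad():
    codes = encoder(x)  # shape (64, 32): one 32-dimensional code per example
print(codes[0])         # the first example's coordinates in the learned space
```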

R Hill

Posted 2016-10-27T13:54:29.657

Reputation: 927

Good answer, but missing a key use of auto-encoders, which is to pre-train using unlabelled data - sometimes called semi-supervised learning. This can be very useful if you have collected a lot of data, but only some of it is labelled. – Neil Slater – 2016-10-27T16:16:17.440

2

Let me add my 2¢...

Generally speaking, "autoencoding" is a lossy compression technique, although not a very useful one as compression schemes go, because it is data-specific (an autoencoder trained on cats is not very useful for cars).

In practice, autoencoders are used for:

  • data denoising (a minimal sketch follows this list)
  • dimensionality reduction
  • unsupervised pre-training of the feature-extracting parts of larger networks
  • and... generating new, unseen samples from seen ones (!)
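
On the denoising point: the only change from a plain autoencoder is that the network sees a corrupted input but is trained to reproduce the clean one. A minimal sketch, again in PyTorch with arbitrary layer sizes and an arbitrary noise level of 0.3:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Same bottleneck architecture as a plain autoencoder...
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),  nn.ReLU(),
    nn.Linear(32, 128),  nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x_clean = torch.rand(64, 784)  # stand-in batch of clean examples
for step in range(100):
    # ...but the input is corrupted, while the target stays clean.
    x_noisy = (x_clean + 0.3 * torch.randn_like(x_clean)).clamp(0.0, 1.0)
    optimizer.zero_grad()
    loss = F.mse_loss(model(x_noisy), x_clean)
    loss.backward()
    optimizer.step()
```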

The latter is possible with a variant called the Variational Autoencoder (VAE), where you impose constraints on the compressed representation being learned that force it to represent a set of variables modelling the probability distribution of the input data.

In other words, in a VAE each "bottleneck" variable represents something meaningful about the input (think "color of a cat"). Thus, changing this representation produces meaningfully different output from the decoder part of the AE.
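
For a flavor of what that looks like in code, here is a heavily stripped-down VAE sketch (PyTorch again, with arbitrary layer sizes; a rough illustration rather than a reference implementation). The encoder predicts a mean and log-variance for each bottleneck variable, and the "reparameterization trick" keeps the sampling step differentiable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(data_dim, 128)
        self.mu = nn.Linear(128, latent_dim)      # mean of each latent variable
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of each one
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
        # so gradients can flow through the sampling step.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

model = TinyVAE()
x = torch.rand(64, 784)  # stand-in batch
recon, mu, logvar = model(x)

# Loss = reconstruction error + KL divergence pulling each latent variable
# toward a standard normal (the "constraint" mentioned above); a training
# loop would minimise this with an optimizer exactly as before.
recon_loss = F.mse_loss(recon, x, reduction="sum")
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl
```

Generating a new, unseen sample then amounts to drawing a random latent vector and running only the decoder, e.g. `model.dec(torch.randn(1, 8))`.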

Mikhail Yurasov

Posted 2016-10-27T13:54:29.657

Reputation: 696