What is the difference between an Embedding Layer and an Autoencoder?



I'm reading about embedding layers, especially as applied to NLP and word2vec, and they seem to be nothing more than an application of autoencoders for dimensionality reduction. Are they different? If so, what are the differences between them?


Posted 2019-06-21T15:52:36.537




Actually, these are three different things (embedding layer, word2vec, autoencoder), though they can be used to solve similar problems, i.e. producing a dense representation of data.

An autoencoder is a type of neural network whose inputs and outputs are the same, but whose hidden layer has reduced dimensionality, forcing the network to learn a denser representation of the data.
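A minimal sketch of that structure (the layer sizes and random initialization are arbitrary illustrative choices, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: 8-dim input -> 3-dim bottleneck -> 8-dim reconstruction.
W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder weights

x = rng.normal(size=(1, 8))                  # a single input example

code = np.tanh(x @ W_enc)                    # dense 3-dim representation
x_hat = code @ W_dec                         # reconstruction of the input

# Training would minimize the reconstruction error ||x - x_hat||^2 by
# backpropagation; the compressed "code" is the learned representation.
print(code.shape, x_hat.shape)               # (1, 3) (1, 8)
```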

Word2vec also contains only one hidden layer, but the inputs are the neighboring words and the output is the word itself (or the other way around, depending on whether you use CBOW or skip-gram). So it cannot be an autoencoder, because the inputs and outputs are different.
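The CBOW variant of this can be sketched as follows (vocabulary size, dimensions, and the word ids are made-up illustrative values; note the input is the context, not the word being predicted):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, embed_dim = 10, 4

# CBOW-style word2vec sketch: inputs are context words, output is the
# center word -- input != output, unlike an autoencoder.
W_in = rng.normal(scale=0.1, size=(vocab_size, embed_dim))   # input embeddings
W_out = rng.normal(scale=0.1, size=(embed_dim, vocab_size))  # output weights

context_ids = [2, 5, 7, 3]                 # hypothetical neighborhood words
hidden = W_in[context_ids].mean(axis=0)    # average the context embeddings

scores = hidden @ W_out
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the vocabulary

# Training pushes probs[center_word_id] toward 1; the rows of W_in
# become the word vectors.
print(probs.shape)                         # (10,)
```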

An embedding layer is just a "simple" layer in a neural network. You can imagine it as a dictionary in which each category (e.g. a word) is represented by a vector (a list of numbers). The values of the vectors are learned by backpropagating the errors of the network.
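The dictionary analogy is literal: the layer is a trainable lookup table, indexed by category id. A sketch (sizes and ids are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
vocab_size, embed_dim = 6, 3

# An embedding layer is essentially a trainable lookup table:
# row i of the matrix is the vector for category/word i.
embedding = rng.normal(size=(vocab_size, embed_dim))

word_ids = np.array([0, 4, 4, 2])          # a sequence of token ids
vectors = embedding[word_ids]              # the "dictionary" lookup

# During training, backpropagation updates only the rows that were
# looked up, gradually shaping the vectors.
print(vectors.shape)                       # (4, 3)
```

Repeated ids map to the same row, so the same word always gets the same vector.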




An embedding layer is used in auto-encoders to construct word2vec.

An embedding layer is a type of layer used in deep learning; you can find others here. An auto-encoder is a type of architecture in which embedding layers are used. Using such an architecture, one can compute word2vec: the word2vec values are calculated when words are fed into the auto-encoder.

