Why KL Divergence instead of Cross-entropy in VAE


I understand how KL divergence provides a measure of how one probability distribution differs from a second, reference probability distribution. But why is it used in particular (instead of cross-entropy) in a VAE (which is generative)?

Bahauddin Omar

Posted 2020-09-24T01:10:41.527

Reputation: 123


Quoting from What topics can I ask about here?: "If you have a question about the understanding of a machine learning model and its (theoretical) underpinnings, statistical modeling/analysis or probability theory, please refer to Cross Validated".

– desertnaut – 2020-09-24T13:56:21.510

Answers


Answering with some theoretical background on variational autoencoders (VAEs).

In the general encoder-decoder architecture, the encoder maps the input to a point in a latent space, and the decoder reconstructs the input from that latent representation.

In a variational autoencoder (VAE), however, the input is encoded to a latent distribution rather than to a single point in the latent space. This latent distribution is modeled as a Gaussian, expressed in terms of a mean and a variance. The decoder then samples a point from this distribution and reconstructs the input from it. Because the encoder outputs a distribution rather than a point, and KL divergence measures the difference between two distributions, it is used as a regularization term in the loss: it pushes each encoded distribution q(z|x) toward the prior, typically a standard normal N(0, I).
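For concreteness, here is a minimal sketch (assuming PyTorch; the function name vae_loss and the tensor names are illustrative, not from any particular library) of how this KL term typically sits alongside the reconstruction term, using the closed-form KL between the encoder's Gaussian N(mu, sigma^2) and a standard normal prior:

    import torch
    import torch.nn.functional as F

    def vae_loss(x, x_recon, mu, log_var):
        # Reconstruction term: how well the decoder rebuilds the input
        # (binary cross-entropy works for inputs scaled to [0, 1]).
        recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
        # KL divergence between the encoder's Gaussian q(z|x) = N(mu, sigma^2)
        # and the standard normal prior p(z) = N(0, I), in closed form:
        # KL = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl

    # Hypothetical shapes: a batch of 16 flattened inputs, 8 latent dimensions.
    x = torch.rand(16, 784)
    x_recon = torch.rand(16, 784)   # would normally come from the decoder
    mu, log_var = torch.zeros(16, 8), torch.zeros(16, 8)
    print(vae_loss(x, x_recon, mu, log_var))

Note that a reconstruction term (here binary cross-entropy) is still present; the KL term is the regularizer that keeps the encoded distributions close to the prior so the latent space can be sampled for generation.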

Ashwin Geet D'Sa

Posted 2020-09-24T01:10:41.527

Reputation: 609