Why do hypercube latent spaces perform worse than Gaussian latent spaces in generative neural networks?



I have a quick question about the use of different latent spaces to represent a distribution. Why is a Gaussian usually used as the latent prior of a generative model rather than, say, a uniform distribution over a hypercube? Is it because a Gaussian concentrates most of its mass around the origin, whereas a uniform distribution spreads points evenly over a bounded region?

I've tried modelling several target distributions with a generative model using both a Gaussian and a uniform latent prior, and the uniform prior always turns out slightly more restrictive than the Gaussian. Is there a mathematical reason behind this?
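One geometric difference between the two priors is easy to check numerically: in high dimensions a standard Gaussian concentrates its mass in a thin spherical shell and is rotationally symmetric, while a hypercube is bounded and has its mass skewed toward corners, which are much farther from the origin than the face centres. Below is a minimal NumPy sketch illustrating this (the dimensionality `d = 100` and sample count are arbitrary choices for the demonstration, not anything from the question):

```python
import numpy as np

# Toy comparison of how the two priors occupy latent space.
# d and n are arbitrary illustration values.
rng = np.random.default_rng(0)
d, n = 100, 10_000

# Gaussian prior N(0, I): unbounded, rotationally symmetric support;
# in high dimensions the mass concentrates in a thin shell of
# radius about sqrt(d).
z_gauss = rng.standard_normal((n, d))

# Hypercube prior Uniform([-1, 1]^d): bounded support; per-coordinate
# variance is 1/3, so the typical norm is about sqrt(d / 3), and the
# distribution of directions is not rotationally symmetric (corners
# lie at distance sqrt(d), faces at distance 1).
z_unif = rng.uniform(-1.0, 1.0, size=(n, d))

for name, z in [("Gaussian", z_gauss), ("Uniform", z_unif)]:
    norms = np.linalg.norm(z, axis=1)
    print(f"{name:8s} mean norm = {norms.mean():.2f}, "
          f"std = {norms.std():.2f}")
```

With `d = 100` the Gaussian norms cluster tightly around 10 and the uniform norms around 5.8, so the generator sees two quite differently shaped input distributions; the Gaussian's rotational symmetry is one commonly cited reason it is the more convenient default.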

Thanks in advance!


Posted 2020-05-13T13:32:01.890

