There is a nice plot in the β-VAE article that shows the quality of the latent space code:
Is there a general way to visualize or analyze the dimensions of a latent space code, so that it would be clear whether they are entangled or otherwise a mess?
The dataset consists of Gaussian blobs presented in various locations on a black canvas. Top row: original images. Second row: the corresponding reconstructions. Remaining rows: latent traversals ordered by their average KL divergence with the prior (high to low). To generate the traversals, we initialise the latent representation by inferring it from a seed image (left data sample), then traverse a single latent dimension (in [−3, 3]), whilst holding the remaining latent dimensions fixed, and plot the resulting reconstruction. Heatmaps show the 2D position tuning of each latent unit, corresponding to the inferred mean values for each latent for each possible 2D location of the blob (with peak blue, -3; white, 0; peak red, 3).
This means that to generate such heatmaps one must be able to smoothly move the blob while knowing its position. In other words, one needs to already know what the best latent encoding would be, and the plot is effectively a comparison against that ideal encoding.
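For concreteness, the traversal procedure the caption describes can be sketched roughly like this (the `encoder`/`decoder` names are illustrative placeholders for a trained model, not any specific API):

```python
import numpy as np

def traverse_latent(encoder, decoder, seed_image, dim,
                    values=np.linspace(-3, 3, 10)):
    """Infer the latent code from a seed image, then vary a single
    latent dimension over `values` while holding the rest fixed,
    collecting the reconstruction for each setting."""
    z = encoder(seed_image)  # inferred posterior mean for the seed image
    reconstructions = []
    for v in values:
        z_mod = z.copy()
        z_mod[dim] = v  # traverse one latent dimension in [-3, 3]
        reconstructions.append(decoder(z_mod))
    return reconstructions
```

Plotting the returned reconstructions side by side gives one row of the traversal figure; repeating over `dim` gives the full grid.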
I'm curious if there is a way to measure or plot "something" that would help assess the quality of the latent code when I don't know what the best latent code is.
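One candidate that needs no ground truth is already implicit in the caption: the average KL divergence of each latent dimension with the prior, which is what the rows are ordered by. A minimal sketch, assuming a diagonal-Gaussian posterior with inferred `mu` and `logvar` arrays of shape `(n_samples, n_latents)` (array names are my assumption):

```python
import numpy as np

def average_kl_per_dimension(mu, logvar):
    """Per-dimension KL(q(z|x) || N(0, 1)), averaged over the dataset.
    Uses the closed form for two Gaussians. Dimensions with KL near
    zero have collapsed to the prior and carry no information, so the
    informative dimensions stand out without knowing the true factors."""
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)
    return kl.mean(axis=0)  # one value per latent dimension
```

Sorting dimensions by this score (high to low) reproduces the ordering used in the figure, though by itself it says how much each dimension encodes, not whether what it encodes is disentangled.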