I have a pretty good understanding of regular autoencoders and, to a certain extent, of variational autoencoders, where the latent representation is constrained to follow a specific probability distribution. From what I understand, variational autoencoders are used as generative models by randomly sampling from the learned latent distribution to obtain a new output.
But in the case of denoising variational autoencoders, I guess that they are fed a noisy input whose missing or corrupted parts are reconstructed at the output layer. In this case, is there a sampling operation during inference? Is this generative process deterministic? If I denoise the same noisy input twice, will I get the same output? Or will I get different plausible denoised outputs, according to the learned distributions?
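To make the question concrete, here is a minimal numpy sketch of the two inference modes I am asking about. The weights are random placeholders standing in for a trained model, and the linear encoder/decoder is an assumption for illustration only; the point is just where the stochasticity would enter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" VAE weights (random placeholders, not a real trained model).
D, Z = 8, 2                      # input dimension, latent dimension
W_mu = rng.normal(size=(Z, D))   # encoder head for the latent mean
W_lv = rng.normal(size=(Z, D))   # encoder head for the latent log-variance
W_dec = rng.normal(size=(D, Z))  # decoder weights

def encode(x):
    # The encoder outputs the parameters of q(z | x), not a single point.
    return W_mu @ x, W_lv @ x

def decode(z):
    return W_dec @ z

def denoise(x_noisy, sample=True, rng=None):
    mu, logvar = encode(x_noisy)
    if sample:
        # Stochastic inference: draw z ~ N(mu, sigma^2) via reparameterisation.
        eps = (rng or np.random.default_rng()).normal(size=mu.shape)
        z = mu + np.exp(0.5 * logvar) * eps
    else:
        # Deterministic inference: just use the posterior mean.
        z = mu
    return decode(z)

x_noisy = rng.normal(size=D)

stoch1 = denoise(x_noisy, sample=True)
stoch2 = denoise(x_noisy, sample=True)
det1 = denoise(x_noisy, sample=False)
det2 = denoise(x_noisy, sample=False)

print(np.allclose(stoch1, stoch2))  # almost surely False: two different outputs
print(np.allclose(det1, det2))      # True: same output every time
```

So my question boils down to which of these two modes is actually used when a denoising VAE is applied at inference time.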