
Referring to the blog post Image Completion with Deep Learning in TensorFlow: it says that we want a generator $g$ whose modeled distribution matches the distribution of our dataset, in other words, $P_{data} = P_g$.

But, as described earlier in the blog post, $P_{data}$ lives in a high-dimensional space, where each dimension corresponds to the intensity of a particular pixel channel, making it a $64 \times 64 \times 3 = 12288$-dimensional space (in this case). I have a few questions regarding this:
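To make that dimensionality concrete, here is a minimal sketch (assuming NumPy; the image itself is a hypothetical random example) showing that a single $64 \times 64$ RGB image is one point in a 12288-dimensional space:

```python
import numpy as np

# A hypothetical 64x64 RGB image: one training example.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Flattening shows the image is a single point in a
# 64*64*3 = 12288-dimensional space; each coordinate is
# one pixel-channel intensity in [0, 255].
point = image.reshape(-1)
print(point.shape)  # (12288,)
```

$P_{data}$ is then a distribution over points in this space, not a collection of per-pixel distributions considered in isolation.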

- Since each pixel here has an intensity value, will the pdf try to capture a separate pdf for each pixel?
- If we sample the most likely value for each pixel independently, given that the distributions need not be the same across pixels, isn't it quite likely that the most probable generated image is just noise, apart from shared structure such as a common background?
- If $P_g$ only tries to replicate $P_{data}$, does that mean a GAN only learns low-level features that are common across the training set? Are GANs clueless about what they are doing?
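Regarding the second question, the intuition can be tested with a toy sketch (a hypothetical two-image dataset, not anything from the blog post): sampling each pixel independently from its marginal distribution can indeed produce noise, while sampling from the joint distribution over whole images cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two 8x8 binary "images" -- all black and all white.
# The joint distribution over images only ever produces these two.
dataset = np.stack([np.zeros((8, 8)), np.ones((8, 8))])

# Per-pixel marginal: every pixel is 1 with probability 1/2.
marginal = dataset.mean(axis=0)  # 0.5 everywhere

# Sampling each pixel INDEPENDENTLY from its marginal gives
# salt-and-pepper noise -- an image that never occurs in the dataset.
independent_sample = (rng.random((8, 8)) < marginal).astype(float)

# Sampling from the JOINT distribution returns a whole dataset image.
joint_sample = dataset[rng.integers(len(dataset))]

print(independent_sample.std() > 0)  # True: mixed 0s and 1s (noise)
print(joint_sample.std() == 0)       # True: uniformly black or white
```

This is why a GAN models the joint distribution over all pixels at once rather than one pdf per pixel: the correlations between pixels are exactly what separates realistic images from noise.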

Addition to Q2 - If not noise, shouldn't it produce exactly the same image, if the image with the highest probabilities happens to exist in the dataset? – ashenoy – 2019-07-19T11:24:27.120

General Addition - Is my understanding even in the right direction in the first place? – ashenoy – 2019-07-19T11:25:12.077

Hi. Please, next time ask one question per post! – nbro – 2019-07-20T13:40:20.143