Why does the discriminator minimize the cross-entropy while the generator maximizes it?


In his original GAN paper, Goodfellow gives a game-theoretic perspective on GANs:

\begin{equation} \min_{G}\, \max_{D}\, V(D,G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right] \end{equation}

I think I understand this formula, at least it makes sense to me. What I don't understand is that he writes in his NIPS tutorial:

In the minimax game, the discriminator minimizes a cross-entropy, but the generator maximizes the same cross-entropy.

Why does he write that the discriminator minimizes the cross-entropy while the generator maximizes it? Shouldn't it be the other way around? At least that is how I read $\min_{G}\, \max_{D}\, V(D,G)$.

I guess this shows that I have a fundamental error in my understanding. Could anyone clarify what I'm missing here?

c_student

Posted 2019-10-24T10:17:46.843

Reputation: 113

Answers


Your mistake is that you take $V(D,G)$ itself to be the cross-entropy. In fact, the cross-entropy is defined as the negative of $V(D,G)$. Once you account for that minus sign ($-V(D,G)$), the sentence makes sense: the discriminator maximizes $V$, which is the same as minimizing the cross-entropy $-V$, while the generator minimizes $V$, which is the same as maximizing that cross-entropy.
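To see this concretely, here is a small numerical sketch in NumPy. The discriminator outputs are made-up values, not from a trained model; the point is only that the discriminator's binary cross-entropy (labels 1 for real, 0 for fake) equals $-V(D,G)$, so maximizing $V$ over $D$ is exactly minimizing the cross-entropy:

```python
import numpy as np

# Hypothetical discriminator outputs on a small batch:
# probabilities D(x) for real samples and D(G(z)) for generated samples.
d_real = np.array([0.9, 0.8, 0.7])   # D(x) on real data
d_fake = np.array([0.2, 0.3, 0.1])   # D(G(z)) on fake data

# Batch estimate of the value function V(D, G) from the minimax game.
V = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Binary cross-entropy of D against labels 1 (real) and 0 (fake).
bce = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

print(np.isclose(bce, -V))  # True: the cross-entropy is exactly -V(D, G)
```

So when the tutorial says the discriminator "minimizes a cross-entropy", it is talking about this loss, which carries the sign flip relative to $V(D,G)$.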

OmG

Posted 2019-10-24T10:17:46.843

Reputation: 1 020