One Nash equilibrium that every GAN has is the one where the generator produces samples indistinguishable from the training data and the discriminator outputs 1/2 everywhere. I think this is the desirable outcome, since we are mostly interested in the generator part of the GAN. I know that in practice we try to converge to this equilibrium with training hacks such as mode-collapse avoidance and so on. But is there any theoretical work that goes the other way, say, by somehow reducing the number of Nash equilibria?
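For reference, here is a sketch of why the discriminator outputs 1/2 at this equilibrium, following the standard minimax formulation (as in the original GAN paper); the notation $p_{\text{data}}$, $p_g$ for the data and generator distributions is the usual one:

$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

For a fixed generator $G$, the pointwise optimal discriminator is

$$D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$$

so when the generator is perfect, i.e. $p_g = p_{\text{data}}$, this reduces to $D^*(x) = 1/2$ for all $x$, which is the equilibrium described above.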