From my understanding, mode collapse occurs when the dataset contains multiple classes (modes) and the generator converges to producing samples from only one of them, generating images exclusively within that class. With further training, the generator simply switches to another class instead of covering all of them.
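To make the phenomenon concrete, here is a minimal toy sketch (my own, not from any paper) of the behaviour I mean, assuming PyTorch: a small GAN trained on a 2-D mixture of 8 Gaussians will often end up sampling from only a few of the modes. All names below are hypothetical, and whether collapse actually occurs depends on the seed and hyperparameters.

```python
# Toy sketch of mode collapse: a tiny GAN on an 8-mode Gaussian mixture.
# This is an illustrative example, not a reference implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Real data: 8 Gaussian modes arranged on a circle of radius 2.
angles = torch.arange(8) * (2 * torch.pi / 8)
centers = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1) * 2.0

def sample_real(n):
    idx = torch.randint(0, 8, (n,))
    return centers[idx] + 0.05 * torch.randn(n, 2)

# Small generator and discriminator.
G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    # Discriminator update: distinguish real samples from fakes.
    real = sample_real(128)
    fake = G(torch.randn(128, 2)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator.
    fake = G(torch.randn(128, 2))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Count how many of the 8 modes the generator actually covers
# by assigning each sample to its nearest mode center.
with torch.no_grad():
    samples = G(torch.randn(1000, 2))
    nearest = torch.cdist(samples, centers).argmin(dim=1)
print("modes covered:", nearest.unique().numel(), "of 8")
```

When the generator collapses, the final count comes out well below 8 (often 1 or 2), and re-running with more training steps can show it hopping to a different mode rather than spreading out.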
In Goodfellow's NeurIPS presentation, he addressed how training a generative network in an adversarial manner avoids mode collapse. How exactly do GANs avoid mode collapse? And did previous work on generative networks not try to address this?
Apart from their (generally) superior performance, does the fact that GANs address mode collapse make them far preferable to other ways of training a generative model?