Keras: Softmax output into embedding layer

I'm trying to build an encoder-decoder network in Keras to generate a sentence in a particular style. Since my problem is unsupervised, i.e. I don't have ground truths for the generated sentences, I use a classifier to help during training: I pass the decoder's output into the classifier, which tells me what style the decoded sentence is.

The decoder outputs a softmax distribution, which I was intending to feed straight into the classifier, but I realised that the classifier has an embedding layer which, in Keras, accepts only integer sequences, not softmax or one-hot vectors. Does anybody know a way to remedy this?
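
To make the mismatch concrete, here is a minimal sketch (the vocabulary and embedding sizes are illustrative assumptions, not my actual model):

```python
import tensorflow as tf

vocab_size, embed_dim = 5000, 128  # assumed sizes, for illustration only

# A Keras Embedding layer is an integer lookup table: it expects token ids
# of shape (batch, timesteps) with an integer dtype.
embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)

token_ids = tf.constant([[4, 17, 9]])   # shape (1, 3), integer ids -> works
vectors = embedding(token_ids)          # shape (1, 3, 128)

# The decoder, however, emits a softmax over the vocabulary at every step,
# i.e. a float tensor of shape (batch, timesteps, vocab_size). It contains
# no integer ids, so it cannot be fed into the Embedding layer directly.
decoder_probs = tf.nn.softmax(tf.random.normal((1, 3, vocab_size)), axis=-1)
```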

Thanks

Physbox

Answers

You are describing a variation of a generative adversarial network (GAN). The generator, the part of the model that creates new examples, needs to output complete sentences. The discriminator, your classifier, takes those complete sentences as input.

In particular, you are describing a DiscoGAN, which discovers cross-domain relations from unpaired data. DiscoGANs have been applied to transferring the style of images.
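
As a rough illustration of that wiring (this is not DiscoGAN itself; the shapes and layer choices are assumptions for the sketch), the classifier can read the generator's per-token probability sequence directly through dense layers instead of an integer embedding lookup, which keeps the whole path differentiable:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, max_len, latent_dim, n_styles = 5000, 20, 64, 2  # assumed sizes

# Generator / decoder: maps some latent input to a sequence of per-token
# softmax distributions of shape (batch, max_len, vocab_size).
gen_in = layers.Input(shape=(max_len, latent_dim))
h = layers.LSTM(128, return_sequences=True)(gen_in)
gen_out = layers.Dense(vocab_size, activation="softmax")(h)
generator = Model(gen_in, gen_out, name="generator")

# Discriminator / style classifier: consumes the probability sequence
# directly (a Dense layer over the vocabulary axis acts like a "soft"
# embedding), so no integer lookup is needed.
disc_in = layers.Input(shape=(max_len, vocab_size))
d = layers.Dense(128)(disc_in)
d = layers.GlobalAveragePooling1D()(d)
disc_out = layers.Dense(n_styles, activation="softmax")(d)
discriminator = Model(disc_in, disc_out, name="style_classifier")

# End-to-end: generated sentence distributions flow straight into the classifier.
combined = Model(gen_in, discriminator(generator(gen_in)))
```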

Brian Spiering

You're right, this is similar to GANs. From my understanding, the difference is that a GAN discriminator checks for 'real' vs. 'generated', whereas I am trying to help the decoder produce sentences of the desired style, hence I am using a classifier that checks whether the output is 'style A' or 'style B'. – Physbox – 2018-07-25T21:29:38.230

DiscoGAN classifiers can learn different styles. I have edited my answer. – Brian Spiering – 2018-07-25T23:04:47.917

Thanks for sharing. How would I implement this in Keras, though? As described in my question, what I am actually concerned with is the embedding layer, which only takes a sequence of integers. Should I convert the softmax output from the decoder into integers? If so, won't that discrete step prevent training via backprop? – Physbox – 2018-07-26T00:05:10.377
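
For reference, a quick check of the backprop concern in the last comment (shapes are illustrative assumptions): converting the softmax output to integer ids with an argmax does stop gradients from reaching the decoder.

```python
import tensorflow as tf

vocab_size = 50
logits = tf.Variable(tf.random.normal((1, 4, vocab_size)))  # stand-in for decoder output

with tf.GradientTape() as tape:
    probs = tf.nn.softmax(logits, axis=-1)
    token_ids = tf.argmax(probs, axis=-1)  # the discrete step in question
    # pretend a classifier consumed the ids and produced some loss
    loss = tf.reduce_sum(tf.cast(token_ids, tf.float32))

grad = tape.gradient(loss, logits)
print(grad)  # None: argmax has no gradient, so backprop to the decoder is cut off
```

So any remedy would have to avoid that hard argmax, e.g. by letting the classifier work on the probabilities themselves rather than on integer ids.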