Neural network outputting same result for all inputs


I'm building an encoder-decoder neural network in Keras for sequence generation. The specific task is to change the style of a piece of text.

Both my encoder and decoder are LSTMs with a latent dimension of 50, and the inputs to the network are one-hot encodings. I'm training the network on a couple of thousand data points for over 100 epochs, using the rmsprop optimiser.
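For reference, a minimal sketch of the kind of setup described (this follows the standard Keras seq2seq pattern and is not necessarily my exact code; the vocabulary size is a placeholder):

```python
from keras.models import Model
from keras.layers import Input, LSTM, Dense

latent_dim = 50        # as described above
num_tokens = 100       # placeholder one-hot vocabulary size

# Encoder: keep only the final hidden and cell states.
encoder_inputs = Input(shape=(None, num_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: initialised with the encoder states, trained with teacher forcing.
decoder_inputs = Input(shape=(None, num_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])
```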

The loss decreases and the accuracy improves over time, but during the inference stage I find that every input sentence results in the same output. This is with a greedy sampling strategy.
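Greedy decoding in this kind of setup would look roughly like the following (again a sketch following the standard Keras seq2seq inference pattern, continuing from the code above; the start/stop token indices and `max_len` are placeholders):

```python
import numpy as np
from keras.models import Model
from keras.layers import Input

# Inference models that reuse the trained layers from the sketch above.
encoder_model = Model(encoder_inputs, encoder_states)

state_h_in = Input(shape=(latent_dim,))
state_c_in = Input(shape=(latent_dim,))
dec_out, h, c = decoder_lstm(decoder_inputs, initial_state=[state_h_in, state_c_in])
dec_out = decoder_dense(dec_out)
decoder_model = Model([decoder_inputs, state_h_in, state_c_in], [dec_out, h, c])

def greedy_decode(input_seq, start_idx, stop_idx, max_len=30):
    # Encode the source sentence once, then repeatedly feed back the argmax token.
    h, c = encoder_model.predict(input_seq)
    target = np.zeros((1, 1, num_tokens))
    target[0, 0, start_idx] = 1.0
    decoded = []
    for _ in range(max_len):
        probs, h, c = decoder_model.predict([target, h, c])
        token = int(np.argmax(probs[0, -1, :]))   # greedy choice
        if token == stop_idx:
            break
        decoded.append(token)
        target = np.zeros((1, 1, num_tokens))     # next one-hot decoder input
        target[0, 0, token] = 1.0
    return decoded
```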

I've played with different hyperparameters, e.g. changing the latent dimension, the batch size, and the optimiser, but to no avail. What strategies can I employ to assess whether this is a coding bug or an issue with the network itself?
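For example, one kind of check I could run (a sketch, reusing the hypothetical inference models above; `sentence_a` and `sentence_b` are placeholder one-hot encoded inputs of shape `(1, timesteps, num_tokens)`) is to confirm that the encoder states actually differ between two different inputs:

```python
# If the encoder states are (nearly) identical for different sentences, the
# problem is upstream of the decoder; if they differ, the decoder has likely
# collapsed to a single output regardless of its initial state.
states_a = encoder_model.predict(sentence_a)
states_b = encoder_model.predict(sentence_b)
print([float(np.abs(a - b).max()) for a, b in zip(states_a, states_b)])
```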

Physbox

Posted 2018-07-26T16:39:05.017

Reputation: 207

Try using leaky ReLU if you are using ReLU. – Media – 2018-07-26T16:46:46.657

I'm using the Keras default activation for the LSTMs, which is tanh. For the final dense layer I just have a softmax. – Physbox – 2018-07-26T18:17:21.163

How many instances do you have in your training set? – JahKnows – 2018-07-27T03:43:15.000

My training set has 2000 data points – Physbox – 2018-07-31T10:39:38.150

No answers