Why is my loss negative while training an SAE?


I am using loss='binary_crossentropy'. Here is my code:

I tried increasing the number of training images and the number of epochs, but that did not help.

from keras.layers import Input, Convolution2D, MaxPooling2D, UpSampling2D
from keras.models import Model

input_img = Input(shape=(28, 28, 1))

# Encoder: 28x28 -> 14x14 -> 7x7 -> 4x4
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)

encoded = MaxPooling2D((2, 2), border_mode='same')(x)

# Decoder: 4x4 -> 8x8 -> 16x16 -> 14x14 -> 28x28
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
# border_mode='valid' shrinks 16x16 to 14x14 so the final upsample gives 28x28
x = Convolution2D(16, 3, 3, activation='relu', border_mode='valid')(x)
x = UpSampling2D((2, 2))(x)

decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, nb_epoch=10, batch_size=500,
                shuffle=True, validation_data=(x_test, x_test), verbose=1)

sp_713

Posted 2017-06-01T10:44:17.530

Answers


Use a linear output and mean squared error loss, assuming you are predicting normalised pixel intensity values.
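
A minimal sketch of that change, keeping the rest of the network above as-is (only the output layer and compile call differ):

# Linear output and mean squared error instead of sigmoid + binary cross-entropy
decoded = Convolution2D(1, 3, 3, activation='linear', border_mode='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='mse')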

Cross-entropy over sigmoid output layer activations can do odd things when the target values are not strictly in $[0,1]$, depending on the implementation.
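
Concretely, the per-pixel loss is $-t \log p - (1 - t)\log(1 - p)$, which is non-negative whenever the target $t$ lies in $[0,1]$. With an unscaled pixel value such as $t = 255$ and a prediction $p = 0.9$, the $(1 - t)$ factor becomes $-254$ and the loss works out to roughly $-255 \log 0.9 + 254 \log 0.1 \approx -558$: exactly the kind of negative training loss seen here.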

Neil Slater

Posted 2017-06-01T10:44:17.530

@sp_713: Worth checking that you have scaled the data the way you think you have. Perhaps show the normalisation code in the question? – Neil Slater – 2017-06-01T12:08:08.073

In decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x), the sigmoid keeps the last layer's values in [0,1] – sp_713 – 2017-06-01T12:09:26.883

@sp_713: Yes. That is the output activation layer, which is what you take the loss over. You just said "activation is relu not sigmoid", but it is clearly sigmoid where it counts. You really do want linear activation and mean squared loss here. Using cross-entropy where the targets are not either 0 or 1 is not working (this might be a Keras issue). – Neil Slater – 2017-06-01T12:14:24.663

Thanks @Neil Slater, it's working now; after 5 epochs the loss is near 0.2. My input was not normalised. – sp_713 – 2017-06-01T12:23:57.963
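
For reference, the usual MNIST preprocessing that keeps binary cross-entropy non-negative scales the raw 0-255 pixels into $[0,1]$. A sketch, assuming x_train and x_test come from keras.datasets.mnist:

import numpy as np

# Scale raw 0-255 pixel values into [0, 1] and add a channel axis
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))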