I'm implementing a C3D-inspired neural network for human emotion recognition. The problem I'm facing is that although the cost function is decreasing for both the training and validation sets, I do not see any improvement in accuracy on either set.
My cost function is the cross-entropy between the logits (the output of the last layer) and the correct labels:
```python
def tower_loss(name_scope, logit, labels):
    # Per-example cross-entropy from unscaled logits and integer labels.
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logit, labels=labels)
    cross_entropy_mean = tf.reduce_mean(xent)
    return cross_entropy_mean
```
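For reference, the per-example value that `tf.nn.sparse_softmax_cross_entropy_with_logits` computes can be sketched in plain NumPy (the logits and labels below are made-up toy values, not from my model):

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    # Shift logits by the row max for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    # Log-softmax: log p_k = z_k - log(sum_j exp(z_j)).
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-probability of the correct class for each example.
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([0, 1])  # integer class indices, as the sparse op expects
per_example = sparse_softmax_xent(logits, labels)
mean_xent = per_example.mean()  # this is what tf.reduce_mean produces
```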
Then, the optimizer uses the Adam algorithm to minimize the cost function as follows:
```python
loss = tower_loss(scope, logit, labels_placeholder)
train = tf.train.AdamOptimizer(1e-4).minimize(loss)
```
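As I understand it, Adam keeps running estimates of the gradient's first and second moments and applies a bias-corrected update. A single-parameter sketch of one step (the learning rate and function here are illustrative, not my actual training setup):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (illustrative)."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) EMA
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered var) EMA
    m_hat = m / (1 - b1 ** t)              # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One step on f(x) = x^2 starting at x = 1.0; the gradient is 2x.
x, m, v = 1.0, 0.0, 0.0
x, m, v = adam_step(x, 2.0 * x, m, v, t=1)
# After bias correction the first step has magnitude ~lr,
# independent of the gradient's scale.
```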
Although I'm seeing the cost function decrease, I haven't seen any improvement in classification accuracy. Some observations:
- The cross-entropy of the validation set and the training set is not diverging.
- The cross-entropy looks like it is on its way to converging to 0.
- The accuracy is not wrongly implemented (I print the outputs to the screen and the values are correct).
- The network has been training for 57.6K iterations (not much, but enough to see some improvement in performance, or not?).
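To show what I mean by "not wrongly implemented": the accuracy I check is equivalent to this NumPy sketch (toy logits and labels for illustration only):

```python
import numpy as np

def accuracy(logits, labels):
    # Predicted class = index of the largest logit per example.
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == labels))

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3],
                   [0.2, 0.1, 3.0]])
labels = np.array([0, 1, 1])
acc = accuracy(logits, labels)  # 2 of the 3 predictions match
```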
If you need any extra information, please feel free to ask. Thanks a lot for your time and for helping me with this problem.