Backprop Through Max-Pooling Layers?

This is a small conceptual question that's been nagging me for a while: How can we back-propagate through a max-pooling layer in a neural network?

I came across max-pooling layers while going through this tutorial for Torch 7's nn library. The library abstracts the gradient calculation and forward passes for each layer of a deep network. I don't understand how the gradient calculation is done for a max-pooling layer.

I know that if you have an input ${z_i}^l$ going into neuron $i$ of layer $l$, then ${\delta_i}^l$ (defined as ${\delta_i}^l = \frac{\partial E}{\partial {z_i}^l}$) is given by: $$ {\delta_i}^l = \theta^{'}({z_i}^l) \sum_{j} {\delta_j}^{l+1} w_{i,j}^{l,l+1} $$

So, a max-pooling layer would receive the ${\delta_j}^{l+1}$'s of the next layer as usual; but since the activation function for the max-pooling neurons takes in a vector of values (over which it maxes) as input, ${\delta_i}^{l}$ isn't a single number anymore, but a vector ($\theta^{'}({z_j}^l)$ would have to be replaced by $\nabla \theta(\left\{{z_j}^l\right\})$). Furthermore, $\theta$, being the max function, isn't differentiable with respect to its inputs.
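
To make the setup concrete, here is a minimal NumPy sketch (my own toy illustration, not Torch's nn code) of a $2 \times 2$ max-pooling forward pass; each output entry is just the max over a window of inputs, which is why the activation takes a vector of values:

```python
# Toy 2x2 max-pooling forward pass in plain NumPy (illustration only,
# not Torch's nn implementation).
import numpy as np

x = np.array([[1., 3., 2., 0.],
              [4., 5., 1., 1.],
              [0., 2., 6., 3.],
              [1., 1., 2., 2.]])

# Each output is the max over one non-overlapping 2x2 window of the input.
pooled = np.array([[x[i:i + 2, j:j + 2].max() for j in range(0, 4, 2)]
                   for i in range(0, 4, 2)])
print(pooled)
# [[5. 2.]
#  [2. 6.]]
```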

So... how should it work out, exactly?

shinvu

Posted 2016-05-12T08:38:12.740

Reputation: 150

Answers

There is no gradient with respect to the non-maximum values, since changing them slightly does not affect the output. Further, the max is locally linear with slope 1 with respect to the input that actually achieves the max. Thus, the gradient from the next layer is passed back only to the neuron that achieved the max; all other neurons get zero gradient.

So in your example, $\delta_i^l$ would be a vector of all zeros, except that the $i^*$-th location gets the gradient $\delta^{l+1}$ passed down from the next layer, where $i^* = \operatorname{argmax}_{i} (z_i^l)$.
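
To illustrate (a plain-NumPy sketch with made-up helper names, not Torch's actual nn.SpatialMaxPooling): the forward pass just has to remember which input won, and the backward pass routes the incoming gradient to that position and gives zero to every other input.

```python
import numpy as np

def maxpool_forward(z):
    """Forward pass over one pooling window: output the max and remember its index."""
    i_star = np.argmax(z)                 # i* = argmax_i(z_i^l)
    return z[i_star], i_star

def maxpool_backward(delta_next, i_star, n_inputs):
    """Backward pass: delta^l is all zeros except at i*, which receives delta^{l+1}."""
    delta = np.zeros(n_inputs)
    delta[i_star] = delta_next            # gradient flows only through the max element
    return delta

z = np.array([0.3, 2.0, -1.0, 0.7])       # one pooling window
y, i_star = maxpool_forward(z)            # y = 2.0, i_star = 1
print(maxpool_backward(delta_next=5.0, i_star=i_star, n_inputs=z.size))
# [0. 5. 0. 0.]
```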

abora

Posted 2016-05-12T08:38:12.740

Reputation: 398

Oh right, there is no point back-propagating through the non-maximum neurons - that was a crucial insight.

So if I now understand this correctly, back-propagating through the max-pooling layer simply selects the max neuron from the previous layer (on which the max-pooling was done) and continues back-propagation only through that. – shinvu 2016-05-13T05:35:39.633

Max Pooling

So suppose you have a layer $P$ which comes on top of a layer $PR$. Then the forward pass will be something like this:

$ P_i = f(\sum_j W_{ij} PR_j)$,

where $P_i$ is the activation of the $i$th neuron of layer $P$, $f$ is the activation function and $W$ are the weights. So if you differentiate, the chain rule tells you that the gradients flow as follows:

$grad(PR_j) = \sum_i grad(P_i) f^\prime W_{ij}$.

But now, if you have max pooling, $f = \mathrm{id}$ for the max neuron and $f = 0$ for all other neurons, so $f^\prime = 1$ for the max neuron in the previous layer and $f^\prime = 0$ for all other neurons. So:

$grad(PR_{max\ neuron}) = \sum_i grad(P_i) W_{i\ {max\ neuron}}$,

$grad(PR_{others}) = 0.$
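
A small NumPy sketch of those two formulas (the names $P$, $PR$ and $W$ are just the notation above; as an assumption for the toy example, $W$ is set to all ones so the layer reduces to a plain max over $PR$):

```python
import numpy as np

PR = np.array([0.3, 2.0, -1.0, 0.7])   # activations of the previous layer PR
W = np.ones((1, PR.size))               # toy weights: P pools over all of PR
grad_P = np.array([5.0])                # gradient arriving at layer P from above

f_prime = np.zeros_like(PR)             # f' = 0 for every non-max neuron ...
f_prime[np.argmax(PR)] = 1.0            # ... and f' = 1 for the max neuron

# grad(PR_j) = sum_i grad(P_i) * f' * W_ij, nonzero only at the max neuron
grad_PR = f_prime * (W.T @ grad_P)
print(grad_PR)                          # [0. 5. 0. 0.]
```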

patapouf_ai

Posted 2016-05-12T08:38:12.740

Reputation: 246