I need some help understanding the second shortcoming of the sigmoid activation function as described in this video from Stanford. She says that because the output of sigmoid is always positive, the gradients with respect to all of a neuron's weights (when that neuron's inputs are sigmoid outputs) share the same sign as the upstream gradient flowing into that neuron. She then says that a consequence of these same-sign weight updates is a sub-optimal zigzag gradient descent path.
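To make sure I have the single-neuron case right, here is a small NumPy sketch of my understanding (the input values and upstream gradients are made up): the weight gradient is dL/dw_i = upstream * x_i, and since every x_i is a sigmoid output in (0, 1), all components of the weight gradient take the sign of the scalar upstream gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy neuron whose inputs are sigmoid outputs from the previous
# layer, so every component of x is strictly positive.
x = sigmoid(np.array([-1.0, 0.5, 2.0]))   # all in (0, 1)

# dL/dw_i = upstream * x_i, so every weight gradient component
# shares the sign of the scalar upstream gradient.
for upstream in (+0.3, -0.3):
    grad_w = upstream * x
    print(upstream, np.sign(grad_w))   # signs all match sign(upstream)
```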
I understand this phenomenon when zoomed in on a single neuron. However, since the upstream gradients flowing into a layer can differ in sign, it's still possible to get a healthy mixture of positive and negative weight updates across a layer. I'm therefore having trouble seeing how using sigmoid results in this zigzag descent path, except in the case where all of the upstream gradients share the same sign (which intuitively seems uncommon). If this sub-optimal descent is important enough to be highlighted in the lecture, it must be more common than that.
I'm wondering if the issue is instead "reduced entropy" among the weight updates, rather than all weight updates in the network sharing the same sign; that is, zigzagging in a subset of the dimensions. For example, say a sigmoid network has a layer with two neurons and four weights: w1 and w2 on the first neuron, w3 and w4 on the second, with both neurons receiving the same two sigmoid inputs. The updates to w1 and w2 could be positive while the updates to w3 and w4 are negative, if the two upstream gradients differ in sign. However, it wouldn't be possible for the updates to w1 and w3 to be positive while those to w2 and w4 are negative, since the updates within a single neuron must share a sign. Is this the limitation of sigmoid that the Stanford lecture is referring to, assuming the second combination of weight updates was the optimal one?
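To check my own claim, here is a quick NumPy experiment (values are made up) enumerating the sign patterns of the four weight updates. With shared positive inputs x1, x2 and per-neuron upstream gradients g1, g2, the gradients are (g1*x1, g1*x2, g2*x1, g2*x2), so only 4 of the 16 conceivable sign patterns ever occur:

```python
import numpy as np

# Hypothetical two-neuron layer: both neurons see the same two
# sigmoid activations, which are always strictly positive.
x = np.array([0.7, 0.2])              # made-up sigmoid outputs

patterns = set()
rng = np.random.default_rng(0)
for _ in range(1000):
    g = rng.normal(size=2)            # upstream gradients g1, g2 (any sign)
    # Weight gradients for (w1, w2, w3, w4): neuron i's weights get g_i * x
    grads = np.concatenate([g[0] * x, g[1] * x])
    patterns.add(tuple(np.sign(grads).astype(int)))

# Mixed signs *within* a neuron never appear, so e.g. (+, -, +, -)
# is impossible while (+, +, -, -) is fine.
print(sorted(patterns))
```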