I've read several papers on kernel initialization, and many of them mention using L2 regularization on the kernel (often with $\lambda = 0.0001$).
Does anybody do anything different from initializing the bias to a constant zero and leaving it unregularized?
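For concreteness, here is a minimal Keras sketch of the convention I mean (the layer size is illustrative; the initializer and $\lambda$ follow the papers below):

```python
from tensorflow.keras import layers, regularizers

# Conventional setup: He-style kernel init, L2 penalty on the kernel
# only, bias initialized to zero and left unregularized.
dense = layers.Dense(
    128,
    activation="relu",
    kernel_initializer="he_normal",            # He et al. initialization
    kernel_regularizer=regularizers.l2(1e-4),  # lambda = 0.0001
    bias_initializer="zeros",                  # constant zero
    bias_regularizer=None,                     # no penalty on the bias
)
```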
Kernel initialization papers:
- Mishkin and Matas: All you need is a good init
- Glorot and Bengio: Understanding the difficulty of training deep feedforward neural networks
- He et al.: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification