When should the bias b be updated together with the weights w, and when should it be updated separately?


It seems that in some machine learning models, the bias term $b$ is updated just like the other weights $w_i, i=1,\dots,n$. For example, in logistic regression trained with SGD, $b$ (equivalently $w_0$, with a constant feature $x_0=1$) is updated with: $$b \ += LearningRate * (target - \sigma(w^Tx + b))$$ where $\sigma$ is the sigmoid function — the same update rule as for every other weight, just with the corresponding input fixed at $1$.
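To make the "bias is just another weight" view concrete, here is a minimal sketch of one logistic-regression SGD step, assuming the input vector is augmented with a leading constant 1 so that `w[0]` plays the role of the bias (the function name and toy values are illustrative, not from the original post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(w, x, target, lr=0.1):
    """One SGD step for logistic regression.

    x is assumed to be augmented with a leading 1, so w[0] is the bias b
    and receives exactly the same update rule as the other weights.
    """
    error = target - sigmoid(w @ x)
    return w + lr * error * x  # bias update: w[0] += lr * error * 1

# Hypothetical toy step: weights start at zero.
w = np.zeros(3)
x = np.array([1.0, 2.0, -1.0])  # leading 1 is the bias feature
w = sgd_step(w, x, target=1.0)
```

After this step the bias component `w[0]` has moved by `lr * error`, identically to how each weight moves by `lr * error * x_i`.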

However, in an SVM, $b$ is updated separately from $w$, using a support vector $x_s$ with label $y_s$ (i.e., one satisfying $y_s(w^Tx_s+b)=1$): $$b=y_s-\sum_{i=1}^{n}\alpha^*_iy_ix_i^Tx_s$$ Among ML models with the linear form $y=w^Tx+b$, is the SVM the only one that computes the bias $b$ separately from $w$?
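For comparison, here is a sketch of that SVM bias formula on a hypothetical 1-D toy problem (two symmetric points, for which the optimal dual coefficients $\alpha^*_1=\alpha^*_2=0.5$ can be verified by hand; all names and values here are illustrative):

```python
import numpy as np

# Hypothetical toy problem: x = +1 labeled +1, x = -1 labeled -1.
# Both points are support vectors of the hard-margin SVM.
X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])  # optimal dual coefficients (hand-derived for this toy case)

def svm_bias(alpha, X, y, x_s, y_s):
    """b = y_s - sum_i alpha_i * y_i * <x_i, x_s>, from one support vector."""
    return y_s - np.sum(alpha * y * (X @ x_s))

b = svm_bias(alpha, X, y, X[0], y[0])
```

In practice, implementations usually average this quantity over all support vectors rather than using a single one, for numerical stability; the point is that $b$ falls out of the dual solution after training, rather than being updated step by step alongside $w$.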


Posted 2018-06-18T10:43:55.860

Reputation: 321