Tag: regularization

8 What is "early stopping" in machine learning? 2016-08-02T15:53:00.447
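
For context, a minimal sketch of early stopping, assuming TensorFlow 2.x / tf.keras with its built-in `EarlyStopping` callback and hypothetical toy data (none of this is taken from the question itself):

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 1000 samples, 20 features, binary labels.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when validation loss has not improved for 5 consecutive epochs and
# roll back to the weights from the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```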

6 Why did the L1/L2 regularization technique not improve my accuracy? 2018-10-24T20:05:07.130

6 Why is dropout favoured compared to reducing the number of units in hidden layers? 2019-12-11T16:26:11.463

4 How does L2 regularization make weights smaller? 2018-09-23T03:24:56.677
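
An illustrative worked step (not part of the original question) showing why an L2 penalty shrinks weights: with the regularized loss $\tilde{L}(w) = L(w) + \frac{\lambda}{2}\|w\|_2^2$ and learning rate $\eta$, one gradient-descent update is

$$w \;\leftarrow\; w - \eta\left(\nabla L(w) + \lambda w\right) \;=\; (1 - \eta\lambda)\,w - \eta\,\nabla L(w),$$

so every step first multiplies the weights by the factor $(1 - \eta\lambda) < 1$ (hence the name "weight decay") before applying the usual gradient step.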

4 Is there a way to ensure that my model is able to recognize an unseen example? 2020-02-24T21:31:25.830

4 Forcing a neural network to be close to a previous model - Regularization through given model 2020-07-23T07:52:57.733

3 L1 Regularizer in Keras model throwing weight matrix dimension error 2019-09-02T21:50:20.247
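
A minimal sketch of attaching an L1 regularizer to a Keras layer, assuming TensorFlow 2.x / tf.keras and hypothetical layer sizes (this is not the asker's code):

```python
import tensorflow as tf

# kernel_regularizer adds lambda * sum(|W|) to the training loss but leaves
# the weight matrix's shape untouched, so a weight-matrix dimension error
# typically points to mismatched layer/input sizes, not the regularizer itself.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(
        32,
        activation="relu",
        kernel_regularizer=tf.keras.regularizers.l1(1e-4),
    ),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```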

3 What is the $\ell_{2, 1}$ norm? 2019-12-30T21:32:37.797
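
For reference, one common convention defines the $\ell_{2, 1}$ norm of a matrix $W$ by taking the $\ell_2$ norm of each row and then summing (an $\ell_1$ norm over rows):

$$\|W\|_{2,1} \;=\; \sum_{i}\left(\sum_{j} W_{ij}^{2}\right)^{1/2} \;=\; \sum_{i} \|W_{i,:}\|_2,$$

which, used as a penalty, encourages entire rows of $W$ to become exactly zero (group sparsity). Some papers apply the same definition column-wise instead.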

3 Can dropout layers not influence LSTM training? 2020-01-03T15:27:58.663

3 Which is a better form of regularization: lasso (L1) or ridge (L2)? 2020-02-07T19:55:55.050

3 How do I poison an SVM with manifold regularization? 2020-02-18T00:50:39.247

3 Are there principled ways of tuning a neural network in case of overfitting and underfitting? 2020-03-18T11:57:18.443

2 Regarding L0 sparsification of DNNs proposed by Louizos, Kingma and Welling 2018-11-18T19:47:26.853

2 Should I remove the units of a neural network or increase dropout? 2018-12-13T16:37:18.700

2 What is the benefit of scaling the hyperparameter C of an SVM? 2019-04-10T19:22:24.713

2 Regularization of non-linear parameters? 2020-01-10T14:30:46.947

2 How is the gradient with respect to weights derived in batch normalization? 2020-02-27T10:44:16.340

2 Does L1/L2 Regularization help reach an optimum result faster? 2020-05-13T20:17:07.117

2 Why is L2 loss more commonly used in neural networks than other loss functions? 2020-07-27T17:57:19.977

1 How can I model regularity? 2019-01-13T20:34:47.680

1 What is the intuition behind label smoothing? 2019-05-04T09:08:23.283
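
A minimal sketch of label smoothing, assuming TensorFlow 2.x / tf.keras: with `label_smoothing=0.1` and 3 classes, a one-hot target $[0, 0, 1]$ becomes roughly $[0.033, 0.033, 0.933]$, so the model is never pushed toward fully saturated, overconfident probabilities (the numeric values here are illustrative, not from the question):

```python
import tensorflow as tf

# Keras mixes each one-hot target with the uniform distribution:
# y_smooth = y_true * (1 - label_smoothing) + label_smoothing / num_classes
loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

y_true = tf.constant([[0.0, 0.0, 1.0]])
y_pred = tf.constant([[0.05, 0.05, 0.90]])
# Slightly larger than the unsmoothed cross-entropy, because the smoothed
# target assigns a little probability mass to every class.
print(float(loss_fn(y_true, y_pred)))
```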

1 Dropout causes too much noise for network to train 2019-07-16T02:58:43.097

1 Do L2 regularization and input normalization depend on sigmoid activation functions? 2020-01-22T07:52:41.587

1 What is the relation between gradient descent and regularization in deep learning? 2020-04-01T06:03:10.190

1 If the i.i.d. assumption holds, shouldn't the training and validation trends be exactly the same? 2020-04-15T12:55:00.323

1 Why does L1 regularization yield sparse features? 2020-07-02T08:44:24.823
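
An illustrative one-coordinate calculation (a sketch under simplifying assumptions, not taken from the question) of why the $\ell_1$ penalty produces exact zeros: minimizing $\frac{1}{2}(w - \hat{w})^2 + \lambda |w|$ in a single weight gives the soft-thresholding solution

$$w^\star \;=\; \operatorname{sign}(\hat{w})\,\max\big(|\hat{w}| - \lambda,\; 0\big),$$

so any coefficient with $|\hat{w}| \le \lambda$ is set exactly to zero, whereas the analogous $\frac{\lambda}{2} w^2$ penalty only rescales it to $\hat{w}/(1+\lambda)$ and never zeroes it out.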

1 When would bias regularisation and activation regularisation be necessary? 2020-07-18T01:13:28.213

0 TensorFlow neural network - inherent overfitting in certain X and Y distributions? 2020-02-15T22:49:33.670

0 What is the difference between TensorFlow's callbacks and early stopping? 2020-02-18T11:36:05.880

0 Derivation of the regularized cost function w.r.t. activation and bias 2020-03-21T17:40:27.883

0 Cannot fine-tune L2-regularization parameter 2020-04-29T03:43:03.337

0 Where is the L2-regularization term applied? 2020-07-06T16:32:41.237

0 Non-trainable regularizer in loss function 2020-07-24T07:14:30.750