
regularization



tags: no_tags
categories: machine learning


In machine learning, regularization is the process of adding information in order to solve an ill-posed problem or to prevent overfitting.
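As a minimal sketch of "adding information" to the objective, the hypothetical `ridge_loss` below adds an L2 penalty `lam * ||w||^2` to a mean-squared-error data term, which discourages large weights (the function name and parameters are illustrative, not from the original):

```python
# Hypothetical sketch: L2 (ridge) regularization adds a penalty term
# lam * ||w||^2 to the data-fitting loss, discouraging large weights.
def ridge_loss(w, xs, ys, lam):
    # mean squared error over the training pairs (x, y)
    data_term = sum(
        (sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
        for x, y in zip(xs, ys)
    ) / len(xs)
    # L2 penalty: larger lam pulls the weights toward zero
    penalty = lam * sum(wi ** 2 for wi in w)
    return data_term + penalty
```

With `lam = 0` this reduces to the unregularized loss; increasing `lam` trades training fit for smaller weights.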

Early Stopping

  • Form of regularization used to avoid overfitting when training a learner with an iterative method, such as gradient descent.
  • Such methods update the learner so as to make it better fit the training data with each iteration. Up to a point, this improves the learner’s performance on the validation set. Past that point, however, improving the learner’s fit to the training data comes at the expense of increased generalization error.
  • It provides guidance as to how many iterations can be run before the learner begins to over-fit.
  • Is implemented using one data set for training, a statistically independent data set for validation, and another for testing. The model is trained until performance on the validation set no longer improves, and then applied to the test set.
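The procedure above can be sketched as follows. This is a toy example, not a reference implementation: it fits a one-parameter linear model by gradient descent on synthetic data (all names and the `patience` heuristic are assumptions for illustration), keeps the weight with the best validation loss, and stops once validation loss has failed to improve for several consecutive iterations:

```python
import random

random.seed(0)

def make_data(n):
    # synthetic data from y = 2x + noise
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]
    return xs, ys

train_x, train_y = make_data(50)
val_x, val_y = make_data(20)  # statistically independent validation set

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, lr = 0.0, 0.1
best_w, best_val = w, mse(w, val_x, val_y)
patience, bad_epochs = 5, 0  # stop after 5 non-improving iterations

for epoch in range(1000):
    # gradient of the training MSE with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(train_x, train_y)) / len(train_x)
    w -= lr * grad
    val_loss = mse(w, val_x, val_y)
    if val_loss < best_val:
        best_val, best_w = val_loss, w  # remember the best model so far
        bad_epochs = 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # early stopping: validation loss stopped improving
```

After the loop, `best_w` (not the final `w`) would be evaluated once on the held-out test set.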

For Deep Learning