The most common types of regularization are L1 and L2. Both modify the overall cost function by adding a regularization term. This added term drives the values of the weight matrices down, on the assumption that a neural network with smaller weights yields a simpler model that is less prone to overfitting.
L1 and L2 regularization differ in the penalty term they add. The formula for L1 regularization is as follows:
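The formula itself appears to be missing here (it was likely an image). A standard form of the L1-regularized cost, assuming a base loss over m training examples and weights w, is:

```latex
\text{Cost} = \text{Loss} + \frac{\lambda}{2m} \sum \lVert w \rVert
```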
In the preceding formula, the regularization strength is controlled by lambda (λ). Here, we penalize the absolute values of the weights.
The formula for L2 regularization is as follows:
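As above, the formula seems to have been dropped from the text. A standard form of the L2-regularized cost, using the same symbols as the L1 case, is:

```latex
\text{Cost} = \text{Loss} + \frac{\lambda}{2m} \sum \lVert w \rVert^{2}
```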
In the preceding formula, the L2 regularization strength is again controlled by lambda (λ). L2 regularization is also called weight decay, as it forces the weights to decay toward 0 (though, unlike L1, it rarely drives them exactly to 0).
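The two penalty terms can be sketched directly in NumPy. This is a minimal illustration, not a training loop; the function names, the example weights, and the choice of λ = 0.1 over m = 100 examples are all assumptions for demonstration:

```python
import numpy as np

def l1_penalty(weights, lam, m):
    # L1: (lambda / 2m) * sum of absolute weight values
    return lam / (2 * m) * np.sum(np.abs(weights))

def l2_penalty(weights, lam, m):
    # L2: (lambda / 2m) * sum of squared weight values
    return lam / (2 * m) * np.sum(weights ** 2)

# Hypothetical weight vector for illustration
w = np.array([0.5, -1.0, 2.0])

print(l1_penalty(w, lam=0.1, m=100))  # 0.00175
print(l2_penalty(w, lam=0.1, m=100))  # 0.002625
```

Either penalty would be added to the base loss before computing gradients; note that the L2 term grows quadratically with each weight, which is why it pushes large weights down more aggressively than L1.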