
Why do we regularise deep learning models?

Because models love to memorise.

Overfitting happens when a model performs great on training data but poorly on new data.

Regularisation combats this with techniques like:

• Dropout

• Weight decay (L2)

• Data augmentation

• Early stopping
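Two of the techniques above are easy to sketch in plain NumPy — a minimal, hypothetical illustration (the function names and rates here are made up for the example): inverted dropout randomly zeroes units during training and rescales the survivors, while weight decay adds an L2 penalty on the weights to the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p,
    scale survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x  # identity at inference time
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def l2_penalty(weights, lam=1e-4):
    """Weight decay term added to the loss: lam * sum of squared weights."""
    return lam * sum((w ** 2).sum() for w in weights)

x = np.ones((4, 8))
h_train = dropout(x, p=0.5, training=True)   # roughly half the units zeroed, rest scaled to 2.0
h_eval = dropout(x, p=0.5, training=False)   # unchanged: dropout is off at inference
```

At inference time dropout is disabled, so the network sees the full (rescaled) signal — which is why the survivors are scaled up during training.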

Regularisation teaches the model to generalise, not memorise. 🧠✨

#MachineLearning #DeepLearning #Regularisation

Dec 3 at 9:53 AM
