Why do we regularise deep learning models?
Because models love to memorise.
Overfitting happens when a model performs brilliantly on the training data but poorly on new, unseen data.
Regularisation fights this with a few classic tools (quick sketch after the list 👇):
• Dropout: randomly zeroes activations so the network can't lean on any single unit
• Weight decay (L2): penalises large weights
• Data augmentation: cheap extra variety in the training set
• Early stopping: halt training once validation loss stops improving
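Here's a minimal PyTorch sketch of how three of these knobs show up in code. The tiny model, synthetic data, and patience value are made-up placeholders, and augmentation is skipped because it's domain-specific (e.g. torchvision transforms for images):

```python
import torch
import torch.nn as nn

# Toy model (placeholder): Dropout randomly zeroes activations during training.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout
    nn.Linear(64, 1),
)

# weight_decay adds an L2-style penalty on the weights.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
loss_fn = nn.MSELoss()

# Synthetic data, just to make the sketch runnable.
x_train, y_train = torch.randn(256, 20), torch.randn(256, 1)
x_val, y_val = torch.randn(64, 20), torch.randn(64, 1)

# Early stopping: quit once validation loss stops improving.
best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    model.train()  # enables dropout
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()  # disables dropout for evaluation
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # early stopping
```

AdamW is used here because it applies weight decay in its decoupled form; plain Adam's weight_decay parameter interacts with the adaptive learning rates slightly differently.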
Regularisation teaches the model to generalise, not memorise. 🧠✨
#MachineLearning #DeepLearning #Regularisation