Step 6: Overfitting

When the model memorises instead of learning

1. Explore: play with the demo below
2. Read: understand the concept
3. Build: hands-on lab
4. Compare: check against the solution
💡 Reflect: think deeper

Overfitting: train vs validation

Select an architecture and watch how training accuracy and validation accuracy diverge. The gap between them is the overfitting signal.

Accuracy curves

Overfit gap (train - val)
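The gap the chart plots is simple to compute yourself. Below is a minimal sketch with made-up accuracy values (the numbers are illustrative, not taken from the chart): train accuracy keeps climbing while validation accuracy stalls, and the difference between them is the overfitting signal.

```python
# Hypothetical per-epoch accuracy curves (fractions, not percentages).
train_acc = [0.60, 0.72, 0.81, 0.88, 0.93, 0.97]
val_acc   = [0.58, 0.68, 0.74, 0.76, 0.75, 0.73]

# Overfit gap per epoch: train - val. A growing gap signals overfitting.
gaps = [t - v for t, v in zip(train_acc, val_acc)]

# The epoch where validation accuracy peaks is where you'd stop training.
best_epoch = max(range(len(val_acc)), key=lambda i: val_acc[i])
print(f"val accuracy peaks at epoch {best_epoch}")
print(f"final overfit gap: {gaps[-1]:.2f}")
```

Note that train accuracy alone looks great at the last epoch; only the widening gap reveals that the model has stopped generalising.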

How to fight overfitting

Technique | How it works | When to use
Early stopping | Stop training when val accuracy peaks | Always; the cheapest fix
Dropout | Randomly zero out neurons during training | Large networks
Reduce capacity | Fewer neurons/layers | When params >> samples
More data | Collect more training examples | Best long-term solution
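Early stopping is the cheapest of these because it needs no change to the model, only to the training loop. Here is a minimal sketch of the idea with a "patience" window; `train_one_epoch` and `validate` are hypothetical callables standing in for your framework's training and evaluation steps.

```python
# Sketch of early stopping with patience: halt once validation
# accuracy has failed to improve for `patience` consecutive epochs.
def early_stopping_loop(train_one_epoch, validate,
                        max_epochs=100, patience=5):
    best_val, best_epoch, wait = 0.0, 0, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_acc = validate()
        if val_acc > best_val:
            # New best: remember it and reset the patience counter.
            best_val, best_epoch, wait = val_acc, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # validation accuracy has plateaued or dropped
    return best_epoch, best_val
```

In practice you would also checkpoint the model weights at each new best epoch, so that after stopping you can restore the best model rather than the last one.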
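Dropout, the second technique in the table, can also be sketched in a few lines. This is a minimal illustration of "inverted" dropout on one layer's activations using plain Python lists (real frameworks apply it tensor-wide per batch); the function name and scaling scheme are this sketch's assumptions, not taken from the lesson's lab code.

```python
import random

# Inverted dropout: during training, zero each activation with
# probability p and scale survivors by 1/(1-p) so the expected
# activation magnitude is unchanged. At inference, do nothing.
def dropout(activations, p=0.5, training=True):
    if not training or p == 0.0:
        return list(activations)
    scale = 1.0 / (1.0 - p)
    return [a * scale if random.random() >= p else 0.0
            for a in activations]
```

Because some neurons are silenced at random each step, no single neuron can memorise a training example on its own, which pushes the network toward redundant, more general features.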

Think Deeper

At what epoch does the validation accuracy peak for the medium network? What should you do about it?

Check the chart: validation accuracy typically peaks around epochs 20-30, then plateaus or drops. You should stop training at that peak. This is called early stopping, a key regularisation technique.
Cybersecurity tie-in: An overfit intrusion detector memorises specific attack patterns from training instead of learning general attack behaviour. It catches known attacks perfectly but misses new variants. Early stopping and validation are essential in security ML.
