End-of-lesson Quiz

5 questions · Decision Trees

1 of 5
What does a decision tree actually learn during training?
A trained tree is a hierarchy of if/else rules like 'if connection_rate > 50.5 then go left, else go right'. Each split is chosen to maximise information gain, i.e. to reduce impurity in the child nodes. That's it: no weights, no gradients.
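To make the 'each split is chosen' part concrete, here is a minimal sketch of how one threshold rule could be found: scan candidate midpoints on a single feature and keep the one that minimises the weighted Gini impurity of the children (for Gini, the same as maximising information gain). The feature name `connection_rate` and the toy data are illustrative, not from the lesson.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Return (threshold, weighted_child_gini) for the best single split."""
    best = (None, float("inf"))
    pairs = sorted(zip(values, labels))
    for i in range(1, len(pairs)):
        thresh = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint candidate
        left = [lab for v, lab in pairs if v <= thresh]
        right = [lab for v, lab in pairs if v > thresh]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best[1]:
            best = (thresh, score)
    return best

# Toy data: connection_rate values with benign (0) / attack (1) labels.
connection_rate = [2, 5, 8, 40, 55, 60, 70, 90]
label           = [0, 0, 0, 0, 1, 1, 1, 1]
thresh, score = best_split(connection_rate, label)
print(f"rule: if connection_rate <= {thresh} then benign branch")  # 47.5
```

A real tree simply applies this search recursively to each child node until a stopping condition (depth, leaf size, purity) is hit.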
2 of 5
A node has 60 benign and 40 attack samples. What is its Gini impurity?
Gini = 1 − (0.6² + 0.4²) = 1 − 0.52 = 0.48. Pure nodes (all one class) have Gini = 0; the worst case for two classes is 0.5, at a 50/50 split. The tree picks splits that lower the weighted Gini of the children.
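The arithmetic above can be checked in a few lines. This sketch computes Gini from per-class counts and also shows the weighted children score the answer mentions; the 60/40 node is from the question, the perfect [60, 0] / [0, 40] split is a made-up illustration.

```python
def gini(counts):
    """Gini impurity from per-class sample counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(round(gini([60, 40]), 2))   # 0.48  (the node in the question)
print(gini([100, 0]))             # 0.0   (pure node)
print(gini([50, 50]))             # 0.5   (worst case for two classes)

# Weighted Gini of a candidate split's children: a split into a pure
# [60, 0] child and a pure [0, 40] child scores 0.0, the best possible.
children = [[60, 0], [0, 40]]
total = sum(sum(c) for c in children)
weighted = sum(sum(c) / total * gini(c) for c in children)
print(weighted)                   # 0.0
```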
3 of 5
Why are decision trees particularly valued in security ML compared to neural networks?
Decision trees are interpretable. You can export the rule set and tell an analyst exactly why a connection was flagged. Try doing that with a 100-million-parameter neural network. Interpretability builds trust with auditors and incident responders.
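Exporting the rule set can be sketched as follows, assuming scikit-learn is available; the feature names (`connection_rate`, `error_rate`) and toy data are illustrative, not from the lesson.

```python
# Train a tiny tree and dump it as analyst-readable if/else rules.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[5, 1], [8, 1], [60, 1], [75, 1]]   # toy feature rows
y = [0, 0, 1, 1]                         # 0 = benign, 1 = attack
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

rules = export_text(clf, feature_names=["connection_rate", "error_rate"])
print(rules)   # e.g. "|--- connection_rate <= 34.00" style branches
```

The printed rules can go straight into an incident report: each flagged connection corresponds to one root-to-leaf path of threshold checks.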
4 of 5
Your tree has 100% training accuracy but 73% test accuracy. What's happening?
A 27-point train/test gap is a textbook overfitting signal. With unlimited depth, the tree creates a leaf for almost every training sample. The fix: limit max_depth or use min_samples_leaf to force the tree to generalise.
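The fix can be sketched like this, assuming scikit-learn; the synthetic dataset (with `flip_y` label noise so memorisation is possible but harmful) is illustrative.

```python
# Compare an unconstrained tree with one capped by max_depth /
# min_samples_leaf: the first memorises the training set, the
# second trades training accuracy for a smaller train/test gap.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.15,
                           random_state=0)   # flip_y injects label noise
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
capped = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10,
                                random_state=0).fit(X_tr, y_tr)

for name, clf in [("unlimited", deep), ("capped", capped)]:
    print(f"{name}: train={clf.score(X_tr, y_tr):.2f} "
          f"test={clf.score(X_te, y_te):.2f}")
```

The unlimited tree hits 100% on the training set (it can carve out a leaf per sample), while the capped tree cannot, which is exactly the trade the answer describes.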
5 of 5
You plot training and test accuracy as you increase max_depth from 1 to 20. What pattern indicates the right depth to pick?
Watch the train-test gap. While both curves rise together, you're learning real patterns. The moment they diverge (train keeps climbing, test plateaus or drops), you're starting to overfit. Pick the depth just before the divergence.
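The sweep described above can be sketched as a short loop, again assuming scikit-learn and synthetic, illustrative data:

```python
# Record train/test accuracy for max_depth = 1..20 and flag the depth
# where test accuracy peaks; past it, train accuracy keeps climbing
# while test accuracy stalls or drops.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=15, flip_y=0.1,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

history = []
for depth in range(1, 21):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=1)
    clf.fit(X_tr, y_tr)
    history.append((depth, clf.score(X_tr, y_tr), clf.score(X_te, y_te)))

best_depth, _, best_test = max(history, key=lambda row: row[2])
for depth, tr, te in history:
    marker = "  <- pick here" if depth == best_depth else ""
    print(f"depth={depth:2d} train={tr:.2f} test={te:.2f}{marker}")
```

In practice you would pick the depth by cross-validation rather than a single split, but the shape of the two curves is the same.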
