End-of-lesson Quiz

5 questions · What is Machine Learning?

1 of 5
What does a machine learning model actually 'see' when it looks at an 8×8 digit image?
The model never sees an image — it sees a flat array of 64 numbers (the pixel intensities). The 8×8 grid is just for humans. This is why feature engineering matters: presenting data so the patterns are easy for the algorithm to find.
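This flattening is easy to see directly. A minimal sketch using scikit-learn's `load_digits`, which exposes both the human-friendly 8×8 grid (`images`) and the flat 64-value vector the model actually receives (`data`):

```python
from sklearn.datasets import load_digits

digits = load_digits()

# Humans look at an 8x8 grid of pixel intensities...
print(digits.images[0].shape)  # (8, 8)

# ...but the model is fed the same pixels as one flat vector of 64 numbers.
print(digits.data[0].shape)    # (64,)

# The first 8 values are simply the top row of the grid, read left to right.
print(digits.data[0][:8])
```

The grid and the vector contain identical numbers; only the shape differs, and the shape is purely for human convenience.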
2 of 5
Your malware detector reports 99% accuracy on a dataset where 1% of files are malicious. Why is this almost certainly worthless?
This is the accuracy trap. With 1% positives, predicting 'benign' for everything yields 99% accuracy and 0% recall. Always check recall when classes are imbalanced — what fraction of real threats did you actually catch?
3 of 5
In the digits dataset, the corner pixels are almost always zero. What does this tell you about them as features?
Features with zero variance never change between samples, so they cannot help the model distinguish classes. Dropping them is a basic form of feature selection — fewer features means faster training and less noise.
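A quick sketch of this check on the digits data, dropping the constant pixels by hand (scikit-learn's `VarianceThreshold` does the same thing):

```python
import numpy as np
from sklearn.datasets import load_digits

X = load_digits().data
variances = X.var(axis=0)

# Pixels that never change between samples carry no class information.
dead = np.where(variances == 0)[0]
print(f"{dead.size} of 64 pixels have zero variance: {dead}")

# Basic feature selection: keep only pixels that vary.
X_reduced = X[:, variances > 0]
print(X_reduced.shape)
```

Fewer columns means less memory, faster training, and no loss of signal, since a constant column cannot separate any pair of classes.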
4 of 5
When you average all the '3' digits and all the '8' digits and look at the prototypes side by side, they look very similar. What does this predict about the model?
When prototypes overlap visually, the decision boundary between those classes is thin and error-prone. You will see this exact pattern as confused cells in the model's confusion matrix — 3s mistaken for 8s and vice versa.
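Building the prototypes is a one-liner per class. A sketch that averages each class and compares prototype distances (the exact distance values are illustrative, not claims about the model):

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
X, y = digits.data, digits.target

# A class "prototype" is just the pixel-wise mean of all its samples.
proto_3 = X[y == 3].mean(axis=0)
proto_8 = X[y == 8].mean(axis=0)
proto_0 = X[y == 0].mean(axis=0)

# Mean absolute pixel difference: smaller = prototypes look more alike,
# which predicts a thinner, more error-prone decision boundary.
print("3 vs 8:", np.abs(proto_3 - proto_8).mean())
print("3 vs 0:", np.abs(proto_3 - proto_0).mean())
```

Class pairs whose prototypes sit close together are exactly the pairs you should expect to light up off-diagonal in the confusion matrix.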
5 of 5
You want to know which features matter most for predicting digit class. Which technique gives you the answer fastest?
Feature–target correlation is a quick, model-free way to rank features by signal. High correlation means the pixel value tracks the label (predictive); low correlation means it is noise. In security ML this is how you decide which fields in a log are worth feeding to the model.
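A sketch of this ranking on the digits data. Note that Pearson correlation against a 0–9 class label is a rough heuristic (the label is categorical, not truly numeric), but it is fast and needs no model:

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
X, y = digits.data, digits.target

# Pearson correlation of each pixel with the class label.
corr = np.zeros(X.shape[1])
for i in range(X.shape[1]):
    if X[:, i].std() > 0:          # skip zero-variance pixels (undefined corr)
        corr[i] = np.corrcoef(X[:, i], y)[0, 1]

# Rank pixels by absolute correlation: high = tracks the label, low = noise.
ranked = np.argsort(np.abs(corr))[::-1]
print("most informative pixels:", ranked[:5])
print("least informative pixels:", ranked[-5:])
```

The same loop works on security logs: replace pixels with log fields, and the low-correlation fields are candidates to drop before training.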