End-of-lesson Quiz

5 questions · Convolutional Neural Networks

1 of 5
Why does a Dense (fully-connected) network struggle with images compared to a CNN?
Flattening a 28×28 image gives 784 numbers with no preserved geometry. The Dense layer treats them as 784 independent features. CNNs use small filters that look at local pixel neighbourhoods, learning edges, curves, and textures that depend on adjacency.
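The loss of geometry under flattening can be seen with a few lines of numpy (a minimal sketch; the 28×28 grid matches the MNIST input in the question):

```python
import numpy as np

# A 28x28 "image" where pixel (r, c) sits directly above pixel (r+1, c).
img = np.arange(28 * 28).reshape(28, 28)
flat = img.flatten()

# After flattening, vertically adjacent pixels land 28 positions apart in
# the 784-vector, so a Dense layer gets no structural hint of adjacency.
r, c = 10, 10
idx = r * 28 + c
idx_below = (r + 1) * 28 + c
print(idx_below - idx)  # 28
```

A 3×3 convolution, by contrast, always reads a pixel together with its eight immediate neighbours, so adjacency is built into the operation itself.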
2 of 5
If you randomly shuffle every pixel in an MNIST image, Dense accuracy barely changes. What does that prove?
If shuffling pixels doesn't break a model, that model never used spatial structure to begin with. CNNs would collapse on shuffled inputs because their 3×3 filters need adjacent pixels to detect edges. This is why CNNs win on images.
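Why a Dense layer is indifferent to a fixed pixel shuffle can be shown directly: permuting the inputs and permuting the weight rows the same way gives an identical output, so a Dense network can simply learn the permuted weights. A sketch (random data for illustration, single linear layer, no bias):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)            # a flattened image
W = rng.random((784, 10))      # Dense layer weights
perm = rng.permutation(784)    # one fixed pixel shuffle

# Shuffling the input and the weight rows identically leaves the
# matrix product unchanged: the Dense layer never cared about order.
y_original = x @ W
y_shuffled = x[perm] @ W[perm]
print(np.allclose(y_original, y_shuffled))  # True
```

No such re-ordering trick rescues a convolution: its 3×3 windows read physically adjacent entries, and a shuffle destroys exactly that adjacency.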
3 of 5
A Conv2D(32, (3,3)) layer on a 28×28×1 input has only ~320 parameters, while a Dense(676) layer on the same input has ~530,000. Why so few?
A CNN learns one small filter and slides it across the entire image. This weight sharing is why CNNs have far fewer parameters than Dense networks — and why they generalise so well to images: a 'curve detector' is useful no matter where in the image the curve appears.
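The two parameter counts from the question can be checked by hand. For the convolution, each of the 32 filters has a 3×3×1 kernel plus one bias; for the Dense layer, every one of the 784 flattened pixels connects to every one of the 676 units:

```python
# Conv2D(32, (3,3)) on a 28x28x1 input: 32 kernels of shape 3x3x1, plus biases.
conv_params = (3 * 3 * 1) * 32 + 32
print(conv_params)  # 320

# Dense(676) on the flattened 784-pixel input: full connectivity, plus biases.
dense_params = 784 * 676 + 676
print(dense_params)  # 530660
```

Note that the convolution's cost is independent of the image size: sliding the same 320 parameters over a 280×280 input would cost nothing extra, while the Dense count would grow a hundredfold.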
4 of 5
A malware analyst asks: 'Why not just use file hashes?' What's the fundamental advantage of CNN-based malware visualisation?
Malware authors evade hash detection by recompiling, packing, or flipping a single byte. A CNN trained on byte-as-image representations learns the visual signature of a malware family — code section layouts, data patterns — which survives minor mutations.
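The byte-as-image representation itself is simple: interpret the file's raw bytes as greyscale pixel intensities and fold them into a 2-D grid. A minimal sketch (the `width=64` and the helper name are illustrative choices, not a fixed standard):

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Fold a file's raw bytes into a 2-D greyscale array, one byte per
    pixel; trailing bytes that don't fill a full row are dropped."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = len(arr) // width
    return arr[: rows * width].reshape(rows, width)

# 4096 synthetic "file" bytes -> a 64x64 greyscale image.
img = bytes_to_image(bytes(range(256)) * 16, width=64)
print(img.shape)  # (64, 64)
```

Because recompiling or packing perturbs the exact bytes but not the broad texture of code and data regions, the resulting images from one family tend to look alike, which is what the CNN exploits.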
5 of 5
What does a MaxPooling layer do, and why is it useful?
MaxPooling shrinks the feature maps (e.g. 26×26 → 13×13 with 2×2 pooling), keeping only the strongest activation per window. This cuts computation, gives the network a small amount of translation invariance, and reduces the parameter count of subsequent layers.
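The 26×26 → 13×13 reduction can be reproduced in plain numpy (a sketch of 2×2 max pooling with stride 2 on a single-channel feature map):

```python
import numpy as np

def max_pool_2x2(fmap: np.ndarray) -> np.ndarray:
    """2x2 max pooling, stride 2: keep only the strongest activation
    in each non-overlapping 2x2 window."""
    h, w = fmap.shape
    trimmed = fmap[: h // 2 * 2, : w // 2 * 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(26 * 26, dtype=float).reshape(26, 26)
print(max_pool_2x2(fmap).shape)  # (13, 13)
```

Each output cell summarises a 2×2 region, so a one-pixel shift of a feature often leaves the pooled map unchanged, which is where the small translation invariance comes from.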
