End-of-lesson Quiz
5 questions · Deeper Networks & Dropout
1 of 5
A neural network has 134,000 parameters trained on 1,600 samples. Why is this dangerous for a security model?
When parameters >> samples, the network has so much capacity it memorises individual examples instead of learning the underlying patterns (e.g. 'high connection rate + many failed logins = brute force'). It looks great in training, fails on novel attacks. This is exactly when you need regularisation.
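The parameter-to-sample ratio is easy to check yourself. This sketch counts the weights and biases of a fully connected network; the layer sizes below are hypothetical, chosen only to land near the 134,000 figure in the question:

```python
def dense_param_count(layer_sizes):
    """Weights + biases for a fully connected network.

    Each layer contributes (fan_in * fan_out) weights plus fan_out biases.
    """
    return sum(
        fan_in * fan_out + fan_out
        for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:])
    )

# Hypothetical architecture for illustration (not from the lesson):
# 128 input features -> 256 -> 256 -> 128 -> 1
params = dense_param_count([128, 256, 256, 128, 1])
samples = 1_600
print(params, params / samples)  # ~132k parameters, ~80x more params than samples
```

Whenever this ratio is far above 1, memorisation is cheap for the network and regularisation becomes essential.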
2 of 5
How does Dropout actually regularise a network during training?
Dropout randomly silences (e.g.) 50% of neurons each training step. The network can't rely on any specific neuron because it might be dropped on the next batch, so it learns redundant, distributed representations. Result: better generalisation.
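One training step of this can be sketched in a few lines of numpy. This is the "inverted dropout" formulation (the one Keras uses): survivors are scaled up by 1/(1-rate) so the expected activation stays the same despite the masking:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train_step(activations, rate=0.5):
    """Inverted dropout: zero a random `rate` fraction of units and scale
    the survivors by 1/(1 - rate) so the expected activation is unchanged."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return np.where(mask, activations / keep_prob, 0.0), mask

acts = np.ones((4, 8))  # a batch of 4 samples, 8 neurons, all activations 1.0
dropped, mask = dropout_train_step(acts, rate=0.5)
# Surviving neurons are scaled to 2.0; dropped neurons are exactly 0.0.
```

Because a fresh mask is drawn every batch, no downstream weight can bet on any single neuron being present.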
3 of 5
You deploy a model with Dropout(0.5). During inference on live traffic, are neurons still being dropped?
Dropout is a training-only regulariser. At inference, all neurons are active and their outputs are scaled to compensate. Keras handles this via the training flag automatically. If neurons were still dropped during inference, predictions would be random and inconsistent — unacceptable for a SOC alerting pipeline.
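The train/inference asymmetry can be sketched as a single function with a training flag, mirroring what the Keras layer does internally (an illustrative sketch, not the actual Keras implementation). With inverted dropout, the scaling already happened at train time, so inference is a pure identity:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, rate=0.5, training=False):
    """Inverted-dropout sketch: random masking + rescale at train time,
    identity at inference time."""
    if not training:
        return x  # inference: every neuron active, output deterministic
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = rng.normal(size=(2, 5))
assert np.array_equal(dropout(x, training=False), x)  # inference changes nothing
```

Calling the same input twice at inference gives identical outputs, which is exactly the consistency an alerting pipeline needs.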
4 of 5
Without Batch Normalisation, layer 2's input distribution shifts during training (mean=0.5 at epoch 1, mean=2.1 at epoch 10). What's the practical cost?
When upstream layers change their output distribution, downstream layers chase a moving target. BatchNorm renormalises each layer's inputs to mean 0 / variance 1, fixing the distribution and allowing higher learning rates and faster, more stable training.
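The renormalisation step itself is small. This numpy sketch applies the core BatchNorm transform (per-feature standardisation plus the learnable gamma/beta) to a batch whose mean has drifted to 2.1, like layer 2 at epoch 10 in the question; it omits the running statistics a real layer keeps for inference:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature over the batch to mean 0 / variance 1,
    then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A drifted batch: per-feature mean ~2.1 instead of ~0.
rng = np.random.default_rng(1)
drifted = rng.normal(loc=2.1, scale=1.5, size=(256, 16))
normed = batch_norm(drifted)
# normed has per-feature mean ~0 and variance ~1, whatever the drift was.
```

Downstream layers always see the same input distribution, which is what permits the higher learning rates.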
5 of 5
You set patience=5 on early stopping. Best validation loss was at epoch 25; training stopped at epoch 30. With restore_best_weights=True, which epoch's weights does the deployed model use?
restore_best_weights=True rolls the model back to the epoch with the best validation metric — epoch 25 here. Without it, you'd ship the epoch 30 weights, which are worse than what you had 5 epochs ago. Always pair early stopping with restore_best_weights.
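The patience/rollback logic can be simulated in plain Python, a sketch of the behaviour the Keras EarlyStopping callback implements (not its actual code). The loss curve below is constructed to reproduce the question: best at epoch 25, stop at epoch 30:

```python
def early_stopping(val_losses, patience=5, restore_best_weights=True):
    """Stop when val loss hasn't improved for `patience` epochs; optionally
    roll back to the best epoch. Epochs are 1-indexed, as in the question."""
    best_loss, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    stopped_epoch = epoch
    deployed_epoch = best_epoch if restore_best_weights else stopped_epoch
    return deployed_epoch, stopped_epoch

# 25 improving epochs, then 5 epochs with no improvement.
losses = [1.0 - 0.02 * e for e in range(1, 26)] + [0.9] * 5
deployed, stopped = early_stopping(losses, patience=5)
# deployed == 25, stopped == 30
```

With restore_best_weights=False the same run would return epoch 30, i.e. the stale weights the answer warns about.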