
Dropout

by Yee Wei Law - Tuesday, 20 June 2023, 2:35 PM
 

Deep neural networks (DNNs) employ large numbers of parameters to learn complex dependencies between inputs and outputs, but this capacity makes them prone to overfitting.

Large DNNs are also slow to train, so the standard remedy of training many separate networks and averaging their predictions is often impractical [SHK+14].

The dropout method implements the intuitive idea of randomly dropping units (along with their connections) from a network during training [SHK+14]. Each unit is retained with some probability p (e.g., p = 0.5 for hidden units), so every training case is processed by a randomly "thinned" network. At test time, the full network is used with its weights scaled by p, which approximates averaging the predictions of the exponentially many thinned networks sampled during training [SHK+14].

Fig. 1: Effect of applying dropout to the neural network in (a): the thinned network in (b) has the crossed-out units and their connections removed [SHK+14, Figure 1].
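To make the mechanism concrete, below is a minimal NumPy sketch of the "inverted dropout" variant commonly used in practice, which rescales the surviving activations by 1/p during training so that the test-time pass needs no rescaling; note the original paper [SHK+14] instead leaves training activations unscaled and multiplies the weights by p at test time. The function name dropout_forward and the parameter p_drop are illustrative choices, not from the source.

import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, p_drop=0.5, training=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training and scale the survivors by 1/(1 - p_drop), so the layer
    is simply the identity at test time."""
    if not training or p_drop == 0.0:
        return x  # test time: use the full network unchanged
    keep = 1.0 - p_drop
    mask = rng.random(x.shape) < keep   # Bernoulli(keep) mask per unit
    return x * mask / keep              # rescale to preserve the expected activation

# Example: activations of a hidden layer with 4 units for 2 samples.
h = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0, 8.0]])
print(dropout_forward(h, p_drop=0.5, training=True))   # some units zeroed, rest scaled by 2
print(dropout_forward(h, p_drop=0.5, training=False))  # returned unchanged

Because the mask is resampled for every forward pass, each training case effectively trains a different thinned sub-network, which is what gives dropout its implicit model-averaging effect.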

References

[SHK+14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15 no. 56 (2014), 1929–1958. Available at http://jmlr.org/papers/v15/srivastava14a.html.
