Dropout (neural networks)
Dropout is a regularization technique, patented by Google,[1] for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is an efficient way of performing approximate model averaging with neural networks.[2] The term "dropout" refers to randomly dropping out units (both hidden and visible) from a neural network during training.[3][4]
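As an illustration, the following is a minimal sketch of the commonly used "inverted dropout" formulation in Python with NumPy. The function name, the `p_drop` parameter, and the `rng` argument are illustrative choices, not part of any specific library:

```python
import numpy as np

def dropout(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout (illustrative sketch): during training, zero each
    unit with probability p_drop and rescale the survivors by 1/(1 - p_drop)
    so the expected activation is unchanged; at inference, return x as-is."""
    if not training or p_drop == 0.0:
        return x
    rng = rng or np.random.default_rng()
    keep = 1.0 - p_drop
    mask = rng.random(x.shape) < keep   # Bernoulli keep/drop mask per unit
    return x * mask / keep              # boolean mask broadcasts to 0/1

# Example: apply dropout to a hidden-layer activation.
h = np.array([[0.5, 1.2, -0.3, 2.0]])
h_train = dropout(h, p_drop=0.5, training=True)   # some units zeroed, rest scaled by 2
h_test = dropout(h, p_drop=0.5, training=False)   # identity at test time
```

Because the surviving units are rescaled during training, no additional scaling is needed at test time, and each random mask can be viewed as sampling a thinned sub-network, which is what gives dropout its model-averaging interpretation.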
References
- ^ [1], "System and method for addressing overfitting in a neural network"
- ^ Hinton, Geoffrey E.; Srivastava, Nitish; Krizhevsky, Alex; Sutskever, Ilya; Salakhutdinov, Ruslan R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580 [cs.NE].
- ^ "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". Jmlr.org. Retrieved July 26, 2015.
- ^ Warde-Farley, David; Goodfellow, Ian J.; Courville, Aaron; Bengio, Yoshua (2013-12-20). "An empirical analysis of dropout in piecewise linear networks". arXiv:1312.6197 [stat.ML].
This artificial intelligence-related article is a stub. You can help Wikipedia by expanding it.