Dropout
Dropout is a regularization technique for neural networks that randomly drops a unit (along with its incoming and outgoing connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but the weights are scaled by the retention probability $1-p$ (i.e. $w$ becomes $(1-p)w$), so that each unit's expected output matches its training-time expectation. The idea is to prevent co-adaptation, where the network becomes overly reliant on particular connections, which can be symptomatic of overfitting. Intuitively, dropout can be thought of as training an implicit ensemble of neural networks, since each training step samples a different thinned sub-network that shares weights with all the others.
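A minimal NumPy sketch of this train/test asymmetry, following the classic formulation above (the function names are illustrative, and activations rather than weights are masked, which is the usual implementation choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(x, p=0.5):
    # Zero each unit independently with drop probability p;
    # surviving units pass through unchanged.
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask

def scale_weights_for_test(w, p=0.5):
    # At test time every unit is active, so weights are scaled by the
    # retention probability 1 - p to match the training-time expectation.
    return (1.0 - p) * w

x = np.ones(8)
print(dropout_train(x, p=0.5))           # roughly half the units zeroed
w = np.full((8, 4), 0.2)
print(scale_weights_for_test(w, p=0.5))  # every weight becomes 0.1
```

In practice, most libraries implement the equivalent "inverted" dropout, which divides the surviving activations by $1-p$ at training time so that no test-time scaling is needed.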
Related topics: Batch Normalization, Data Augmentation, ReLU, Softmax, Imputation, LSTM, DropConnect, Monte Carlo Dropout, Weight Decay, Image Classification
Key Scholars
Yoshua Bengio: 429,868 citations, 1,063 papers
Geoffrey E. Hinton: 345,738 citations, 408 papers
Albert-László Barabási: 214,997 citations, 510 papers
Yann LeCun: 175,383 citations, 366 papers
Ilya Sutskever: 165,856 citations, 113 papers
Tien Yin Wong: 164,688 citations, 2,182 papers
Nan M. Laird: 157,185 citations, 366 papers
Ross Girshick: 150,810 citations, 165 papers
Anil K. Jain: 148,144 citations, 1,055 papers
Christopher D. Manning: 123,173 citations, 515 papers