Knowledge Distillation

A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy, and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.

Source: Distilling the Knowledge in a Neural Network
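The core mechanism behind this approach is to train a small student model on the teacher's temperature-softened class probabilities in addition to the ground-truth labels. The snippet below is a minimal sketch of such a distillation loss, assuming PyTorch; the temperature T, the weighting alpha, and the function name are illustrative choices, not values taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Combine a soft-target loss (teacher probabilities softened by
    temperature T) with the usual hard-label cross-entropy loss."""
    # Soft targets: teacher logits divided by T, then softmax.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    # KL divergence between the softened student distribution and the
    # soft targets; multiplying by T**2 keeps the gradient magnitude of
    # this term comparable to the hard-label term as T changes.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (T ** 2)
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random logits (batch of 8, 10 classes):
if __name__ == "__main__":
    student = torch.randn(8, 10)
    teacher = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student, teacher, labels).item())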
Related topics: Model Compression, BERT, Network Pruning, Label Smoothing, NAS, Neural Network Compression, Class-incremental Learning, ResNet, Continual Learning, Federated Learning

Key Scholars

Yoshua Bengio: 429868 citations, 1063 papers
Geoffrey E. Hinton: 345738 citations, 408 papers
Christopher D. Manning: 123173 citations, 515 papers
Ruslan Salakhutdinov: 89393 citations, 413 papers
Richard Socher: 81897 citations, 249 papers
Thomas S. Huang: 80905 citations, 1385 papers
Philip S. Yu: 79752 citations, 1712 papers
Xiaoou Tang: 72012 citations, 489 papers
Jian-Guo Bian: 70968 citations, 1494 papers
Ming-Hsuan Yang: 61951 citations, 641 papers