Knowledge Distillation from A Stronger Teacher

Tao Huang, Shan You, Fei Wang, Chen Qian, Chang Xu
May 2022
Abstract
Unlike existing knowledge distillation methods, which focus on baseline settings where the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy between the predictions of the student and a stronger teacher tends to be fairly severe. As a result, exactly matching the predictions with KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of the teacher and the student suffices, and we propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes, and training strategies, and can achieve state-of-the-art performance consistently on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD
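For intuition, below is a minimal PyTorch sketch of the correlation-based relational matching described in the abstract: it replaces the exact KL match with a Pearson-correlation match applied across classes within each instance (inter-class) and across instances within each class (intra-class). The function names, the temperature tau, and the loss weights beta/gamma are illustrative assumptions for this sketch, not the authors' official implementation; see the linked repository for the actual code.

    import torch
    import torch.nn.functional as F

    def pearson_corr(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Pearson correlation along the last dimension.
        a = a - a.mean(dim=-1, keepdim=True)
        b = b - b.mean(dim=-1, keepdim=True)
        return (a * b).sum(dim=-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)

    def relation_loss(p_s: torch.Tensor, p_t: torch.Tensor) -> torch.Tensor:
        # 1 - correlation: minimized when student preserves the teacher's relations.
        return (1.0 - pearson_corr(p_s, p_t)).mean()

    def correlation_distill_loss(logits_s: torch.Tensor, logits_t: torch.Tensor,
                                 tau: float = 1.0, beta: float = 1.0, gamma: float = 1.0) -> torch.Tensor:
        # logits_s, logits_t: (batch, num_classes) student and teacher logits.
        p_s = F.softmax(logits_s / tau, dim=1)
        p_t = F.softmax(logits_t / tau, dim=1)
        inter = relation_loss(p_s, p_t)          # rows: relations among classes per instance
        intra = relation_loss(p_s.t(), p_t.t())  # columns: relations among instances per class
        return beta * inter + gamma * intra

Because Pearson correlation is invariant to shifting and scaling of the prediction vector, the student is only asked to preserve the teacher's relative preferences rather than reproduce its exact probability values, which is the relaxation the abstract argues for when the teacher is much stronger.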