Knowledge Distillation for Adaptive MRI Prostate Segmentation Based on Limit-Trained Multi-Teacher Models

Eddardaa Ben Loussaief, Hatem Rashwan, Mohammed Ayad, Mohammed Zakaria Hassan, Domenec Puig
Mar 2023
Abstract
Across numerous medical tasks, the performance of deep models has recently improved considerably. These models are often adept learners. Yet their intricate architectural design and high computational complexity make them challenging to deploy in clinical settings, particularly on devices with limited resources. To address this issue, Knowledge Distillation (KD) has been proposed as a compression and acceleration technique. KD is an efficient learning strategy that transfers knowledge from a burdensome model (the teacher) to a lightweight model (the student), yielding a compact model with few parameters while preserving the teacher's performance. In this work, we therefore develop a KD-based deep model for prostate MRI segmentation by combining feature-based distillation with Kullback-Leibler divergence, Lovász, and Dice losses. We further demonstrate its effectiveness through two compression procedures: 1) distilling knowledge to a student model from a single well-trained teacher, and 2) since most medical applications have small datasets, training multiple teachers, each on a small set of images, to learn an adaptive student model that matches the teachers as closely as possible given the desired accuracy and fast inference time. Extensive experiments on a public multi-site prostate tumor dataset show that the proposed adaptive KD strategy improves the Dice similarity score by 9%, outperforming all tested well-established baseline models.
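
The abstract names the loss components (KL divergence, Lovász, Dice, plus a feature-based term) but not their exact weighting or formulation. The following PyTorch sketch illustrates one plausible combination for binary prostate segmentation: temperature-scaled KL divergence on softened logits, a soft Dice term, the binary Lovász hinge of Berman et al., and an L2 feature-matching term. The weights `alpha` and `beta`, the temperature `T`, the foreground-margin simplification, and the `adapter` projection are assumptions for illustration, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension w.r.t. sorted errors (Berman et al., 2018).
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # Binary Lovasz hinge over flattened foreground logits; labels in {0, 1}.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(F.relu(errors_sorted), grad)

def dice_loss(probs, labels, eps=1e-6):
    # Soft Dice loss on foreground probabilities.
    inter = (probs * labels).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + labels.sum() + eps)

def feature_distillation(s_feat, t_feat, adapter):
    # L2 match between intermediate feature maps; `adapter` (e.g., a 1x1 conv)
    # projects the student's channels to the teacher's width (an assumed design).
    return F.mse_loss(adapter(s_feat), t_feat)

def distillation_loss(s_logits, t_logits, labels, T=2.0, alpha=0.5, beta=0.25):
    # s_logits, t_logits: (N, 2, H, W) student/teacher logits; labels: (N, H, W) in {0, 1}.
    # KL divergence between temperature-softened distributions, scaled by T^2.
    kl = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    fg_logit = s_logits[:, 1] - s_logits[:, 0]     # foreground margin (simplification)
    fg_prob = F.softmax(s_logits, dim=1)[:, 1]
    lv = lovasz_hinge(fg_logit.reshape(-1), labels.reshape(-1).float())
    dc = dice_loss(fg_prob, labels.float())
    return alpha * kl + beta * lv + (1.0 - alpha - beta) * dc
```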
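The abstract likewise does not specify how the outputs of the limit-trained teachers are aggregated for the second procedure. A common choice, shown here purely as an assumption, is to average the teachers' logits so the student distills from the ensemble in a single pass; the helper reuses the hypothetical `distillation_loss` above.

```python
def multi_teacher_loss(student, teachers, images, labels, T=2.0, alpha=0.5, beta=0.25):
    # Each teacher was trained on a small subset of the data. Averaging their
    # logits (an assumed aggregation scheme, not confirmed by the abstract)
    # gives a single soft target for the adaptive student.
    s_logits = student(images)
    with torch.no_grad():
        t_logits = torch.stack([t(images) for t in teachers]).mean(dim=0)
    return distillation_loss(s_logits, t_logits, labels, T=T, alpha=alpha, beta=beta)
```

In a training loop, this loss would simply replace the single-teacher term, with the teachers frozen in evaluation mode while only the student's parameters are updated.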