
Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning

Dan Liu, Xue Liu
Dec 2022
Abstract
Most existing pruning works are resource-intensive, requiring retraining or fine-tuning of the pruned models for accuracy. We propose a retraining-free pruning method based on hyperspherical learning and loss penalty terms. The proposed loss penalty term pushes some of the model weights far from zero, while the rest of the weight values are pushed near zero and can be safely pruned with no need for retraining and a negligible accuracy drop. In addition, our proposed method can instantly recover the accuracy of a pruned model by replacing the pruned values with their mean value. Our method obtains state-of-the-art results in retraining-free pruning and is evaluated on ResNet-18/50 and MobileNetV2 with the ImageNet dataset. One can easily get a 50% pruned ResNet-18 model with a 0.47% accuracy drop. With fine-tuning, the experiment results show that our method can significantly boost the accuracy of the pruned models compared with existing works. For example, the accuracy of a 70% pruned (except the first convolutional layer) MobileNetV2 model drops only 3.5%, much less than the 7% to 10% accuracy drop with conventional methods.
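The two operations the abstract describes, magnitude pruning of near-zero weights and instant recovery by substituting the pruned values with their mean, can be sketched in a few lines. This is a minimal numpy illustration of those two steps, not the paper's actual implementation; the function name, the exact thresholding rule, and the per-tensor (rather than per-layer or global) pruning scope are assumptions for the sake of the example.

```python
import numpy as np

def prune_with_mean_recovery(w, ratio=0.5):
    """Illustrative sketch: prune the `ratio` fraction of smallest-magnitude
    weights in `w`, and also build a "recovered" copy where the pruned
    positions hold the mean of the pruned values instead of zero.
    (Hypothetical helper; not the paper's implementation.)"""
    k = int(w.size * ratio)
    # Indices of the k smallest-magnitude weights (the pruning candidates).
    idx = np.argsort(np.abs(w).ravel())[:k]
    mask = np.zeros(w.size, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(w.shape)

    pruned = w.copy()
    pruned[mask] = 0.0                 # standard zeroing prune

    recovered = w.copy()
    recovered[mask] = w[mask].mean()   # replace pruned values with their mean

    return pruned, recovered, mask
```

If the penalty term has done its job, the masked values cluster tightly around zero, so both the zeroed and the mean-replaced copies stay close to the original weights, which is why neither step requires retraining.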
