Publishing Efficient On-device Models Increases Adversarial Vulnerability

Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
Dec 2022
Abstract
Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of a small, efficient version of a commercial-scale model with comparable accuracy, accelerating its deployment to resource-constrained devices. In this paper, we study the security implications of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, by exploiting the published on-device models as a transfer prior, the adversarial vulnerability of the original commercial-scale models increases by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient sibling increases. Based on these insights, we propose a defense, similarity-unpairing, that fine-tunes on-device models with the objective of reducing this similarity. We evaluated our defense on all 19 DNNs and found that it reduces transferability by up to 90% and increases the number of queries required by a factor of 10-100x. Our results suggest that further research is needed on the security (and even privacy) threats caused by publishing such efficient siblings.
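The abstract describes using a published on-device model as a transfer prior: the adversary crafts adversarial examples in a white-box fashion on the efficient surrogate and hopes they carry over to the full-scale model. As a minimal sketch of this idea (not the authors' exact attack pipeline; the handles on_device_model and target_model, and the PGD hyperparameters, are placeholders), an L-infinity PGD attack on the surrogate looks like this:

import torch
import torch.nn.functional as F

def pgd_on_surrogate(surrogate, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD crafted entirely on the public on-device model;
    # no access to the full-scale target is needed at this stage.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# Transfer evaluation against the (black-box) full-scale model:
# x_adv = pgd_on_surrogate(on_device_model, x, y)
# transfer_rate = (target_model(x_adv).argmax(1) != y).float().mean()

The transfer rate measured this way is the quantity the paper reports rising by up to 100x when the surrogate is the target's own on-device sibling rather than an unrelated model.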
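The abstract names the defense, similarity-unpairing, as fine-tuning the on-device model to reduce its similarity to the full-scale model, but does not spell out the training objective. The sketch below is one assumed instantiation, not the paper's exact loss: it adds to the ordinary task loss a penalty on the cosine similarity between the two models' input gradients (a common proxy for attack transferability); the weight lam and the gradient-based similarity measure are assumptions.

import torch
import torch.nn.functional as F

def unpairing_loss(efficient, full, x, y, lam=1.0):
    # Task loss keeps the on-device model accurate.
    task = F.cross_entropy(efficient(x), y)

    # Input gradient of the frozen full-scale model, treated as a constant.
    x_f = x.clone().detach().requires_grad_(True)
    g_full = torch.autograd.grad(F.cross_entropy(full(x_f), y), x_f)[0].detach()

    # Input gradient of the efficient model; create_graph=True lets the
    # similarity penalty backpropagate into the efficient model's weights.
    x_e = x.clone().detach().requires_grad_(True)
    g_eff = torch.autograd.grad(
        F.cross_entropy(efficient(x_e), y), x_e, create_graph=True)[0]

    # Penalize alignment of the two gradient fields so adversarial
    # directions found on one model stop working on the other.
    sim = F.cosine_similarity(g_eff.flatten(1), g_full.flatten(1)).mean()
    return task + lam * sim

# Fine-tuning step (optimizer covers the efficient model's parameters only):
# loss = unpairing_loss(on_device_model, full_model, x, y)
# loss.backward(); optimizer.step(); optimizer.zero_grad()

Note that this sketch assumes the efficient model is differentiable during fine-tuning (e.g., pruned, or quantization-aware with a straight-through estimator).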