
Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks

Xingxing Wei, Ying Guo, Jie Yu, Bo Zhang
Dec 2022
Abstract
Adversarial patches are an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that both the position and the perturbations are important to the adversarial attack. For that reason, in this paper we propose a novel method to simultaneously optimize the position and perturbation of an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize a reinforcement learning framework to simultaneously solve for the optimal solution based on rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
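The abstract describes treating the patch's pasting position and a small set of perturbation hyper-parameters as the variables of a query-based reinforcement learning loop, with the reward coming from the black-box target model. The following is a minimal sketch of that idea, not the authors' implementation: it assumes hypothetical helpers `apply_patch` and `query_blackbox_model`, a discrete grid of candidate positions and perturbation strengths, and a simple REINFORCE-style policy update driven by the drop in the target model's confidence.

```python
# Minimal sketch (not the authors' code): jointly sample a patch position and a
# perturbation hyper-parameter, query a black-box model for a reward, and update
# the sampling policy with a REINFORCE-style rule under a fixed query budget.
import numpy as np

rng = np.random.default_rng(0)

NUM_POSITIONS = 16      # candidate pasting positions (e.g., a 4x4 grid on the face)
NUM_EPS = 8             # candidate perturbation-strength hyper-parameters
LR = 0.5                # policy learning rate
QUERY_BUDGET = 200      # maximum number of queries to the target model

# Policy: independent softmax distributions over positions and strengths.
logits_pos = np.zeros(NUM_POSITIONS)
logits_eps = np.zeros(NUM_EPS)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def apply_patch(image, pos_idx, eps_idx):
    """Hypothetical: paste a patch of strength eps_idx at grid cell pos_idx."""
    patched = image.copy()
    # ... patch generation and pasting would go here ...
    return patched

def query_blackbox_model(image):
    """Hypothetical: target model's confidence for the true identity (lower is better for the attacker)."""
    return rng.random()  # placeholder so the sketch runs end-to-end

image = np.zeros((112, 112, 3))   # dummy face image
baseline = 0.0

for step in range(QUERY_BUDGET):
    p_pos, p_eps = softmax(logits_pos), softmax(logits_eps)
    pos = rng.choice(NUM_POSITIONS, p=p_pos)
    eps = rng.choice(NUM_EPS, p=p_eps)

    confidence = query_blackbox_model(apply_patch(image, pos, eps))
    reward = 1.0 - confidence                   # higher reward when the model is fooled
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward    # running baseline for variance reduction

    # REINFORCE update: raise the log-probability of the sampled (position, strength) pair.
    grad_pos = -p_pos; grad_pos[pos] += 1.0     # gradient of log p[pos] w.r.t. logits
    grad_eps = -p_eps; grad_eps[eps] += 1.0
    logits_pos += LR * advantage * grad_pos
    logits_eps += LR * advantage * grad_eps
```

In the paper's face-recognition setting, the reward would instead be derived from the target FR model's similarity score, and the sampled hyper-parameters would parameterize how the patch's pixel values are generated; both are reduced to placeholders in this sketch.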