Adversarial Attack
An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be small enough to be imperceptible to the human eye. Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks, https://arxiv.org/abs/2002.05388
Related topics: Adversarial Defense, Adversarial Robustness, Node Classification, Image Classification, Malware Detection, Adversarial Attack Detection, Adversarial Text, Faster R-CNN, Self-Driving Cars, Inception-v3
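For illustration, below is a minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM). It assumes a PyTorch classifier `model` and inputs normalized to the range [0, 1]; the function name and the epsilon value are illustrative, not taken from the cited paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each input value by +/- epsilon
    in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # One gradient-sign step, then clamp back to the valid input range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Comparing the model's predictions on `x` and on `fgsm_attack(model, x, label)` shows how a perturbation bounded by a small epsilon can already flip the predicted class.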
Key Researchers
Geoffrey E. Hinton: 345738 citations, 408 papers
Yi Chen: 267689 citations, 4684 papers
Michael I. Jordan: 150356 citations, 1056 papers
Anil K. Jain: 148144 citations, 1055 papers
Xiang Zhang: 138753 citations, 2111 papers
Harlan M. Krumholz: 137826 citations, 2347 papers
Trevor Darrell: 121211 citations, 688 papers
Bernhard Schölkopf: 117502 citations, 1231 papers
Ian Goodfellow: 92697 citations, 135 papers
Stanley Osher: 84093 citations, 532 papers