
Frequency Regularization for Improving Adversarial Robustness

Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong
Dec 2022
Abstract
Deep neural networks are incredibly vulnerable to crafted, human-imperceptible adversarial perturbations. Although adversarial training (AT) has proven to be an effective defense approach, we find that AT-trained models heavily rely on the input low-frequency content for judgment, accounting for the low standard accuracy. To close the large gap between the standard and robust accuracies during AT, we investigate the frequency difference between clean and adversarial inputs, and propose a frequency regularization (FR) to align the output difference in the spectral domain. Besides, we find Stochastic Weight Averaging (SWA), by smoothing the kernels over epochs, further improves the robustness. Among various defense schemes, our method achieves the strongest robustness against attacks by PGD-20, C&W and AutoAttack, on a WideResNet trained on CIFAR-10 without any extra data.
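The sketch below illustrates the core idea from the abstract: the standard adversarial-training cross-entropy term is augmented with a regularizer that aligns clean and adversarial model outputs in the frequency domain. The function name `frequency_regularization_loss`, the weighting factor `lam`, and the choice of a 1-D FFT over the logit vectors with an L1 distance are assumptions for illustration only; the paper's exact formulation (e.g., which outputs or features are transformed, and which norm is used) may differ.

```python
import torch
import torch.nn.functional as F

def frequency_regularization_loss(model, x_clean, x_adv, y, lam=1.0):
    """Adversarial-training loss with a spectral-alignment regularizer.

    Minimal sketch: cross-entropy on adversarial examples plus a term
    that penalizes the difference between the spectra of clean and
    adversarial outputs. Applying the FFT to logits and using an L1
    distance are illustrative assumptions, not the authors' exact method.
    """
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)

    # Standard AT objective: cross-entropy on adversarial inputs.
    ce_adv = F.cross_entropy(logits_adv, y)

    # Frequency regularization: align the two outputs in the spectral domain.
    spec_clean = torch.fft.fft(logits_clean, dim=-1)
    spec_adv = torch.fft.fft(logits_adv, dim=-1)
    fr = (spec_clean - spec_adv).abs().mean()

    return ce_adv + lam * fr
```

The SWA component mentioned in the abstract can be layered on top of such a training loop with PyTorch's built-in utilities (`torch.optim.swa_utils.AveragedModel` and `update_parameters`), which maintain a running average of the weights across epochs.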