Evaluating Adversarial Robustness with Expected Viable Performance
Ryan McCoppin, Colin Dawson, Sean M. Kennedy, Leslie M. Blaha
Sep 2023
Abstract
We introduce a metric for evaluating the robustness of a classifier, with particular attention to adversarial perturbations, in terms of the classifier's expected functionality over possible perturbations. A classifier is assumed to be non-functional (that is, to have a functionality of zero) with respect to a perturbation bound if a conventional measure of performance, such as classification accuracy, falls below a minimally viable threshold when the classifier is tested on examples from within that perturbation bound. Defining robustness as an expected value is motivated by a domain-general approach to robustness quantification.
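
As a rough illustration of this definition (a sketch, not the paper's exact formulation), the metric can be written as the expectation of a thresholded performance indicator over perturbation bounds. Here the symbols are assumptions introduced for illustration: $\varepsilon$ denotes a perturbation bound, $p(\varepsilon)$ a distribution over bounds, $A(f, \varepsilon)$ the accuracy of classifier $f$ on examples perturbed within bound $\varepsilon$, and $\tau$ the minimally viable performance threshold.

% Sketch of an expected-viable-performance style metric (assumed notation):
% functionality is 1 when accuracy under bound eps meets the threshold tau, else 0,
% and robustness is the expected functionality over the distribution of bounds.
\[
R(f) \;=\; \mathbb{E}_{\varepsilon \sim p(\varepsilon)}\!\left[\, \mathbb{1}\{ A(f, \varepsilon) \ge \tau \} \,\right]
\;=\; \int \mathbb{1}\{ A(f, \varepsilon) \ge \tau \}\, p(\varepsilon)\, d\varepsilon .
\]

Under this reading, $R(f)$ is the probability, over the assumed distribution of perturbation bounds, that the classifier remains viable, which matches the abstract's framing of robustness as an expected value of functionality.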