Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training

Shunsuke Kitada, Hitoshi Iyatomi
Apr 2021
Abstract
Adversarial training (AT) for attention mechanisms has successfully reduced the drawbacks of attention mechanisms by considering adversarial perturbations. However, this technique requires label information, and thus its use is limited to supervised settings. In this study, we explore the concept of incorporating virtual AT (VAT) into attention mechanisms, by which adversarial perturbations can be computed even from unlabeled data. To realize this approach, we propose two general training techniques, namely VAT for attention mechanisms (Attention VAT) and "interpretable" VAT for attention mechanisms (Attention iVAT), which extend AT for attention mechanisms to a semi-supervised setting. In particular, Attention iVAT focuses on the differences in attention; thus, it can efficiently learn clearer attention and improve model interpretability, even with unlabeled data. Empirical experiments based on six public datasets revealed that our techniques provide better prediction performance than conventional AT-based as well as VAT-based techniques, and stronger agreement with evidence provided by humans in detecting important words in sentences. Moreover, our approaches offer these advantages without requiring careful selection of the unlabeled data. That is, even if a model using our VAT-based technique is trained on unlabeled data from a source other than the target task, both prediction performance and model interpretability can be improved.
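The core idea of the abstract, applying virtual adversarial training to attention, can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy model, all function names, and the brute-force random search (an illustrative stand-in for the single power-iteration step that VAT normally uses to find the adversarial direction) are assumptions for demonstration only. What it does show faithfully is why no label is needed: the regularizer compares the model's own prediction with its prediction under a worst-case perturbation of the attention scores.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def predict(attn_scores, values):
    """Toy model: attention-weighted sum of values passed through a sigmoid."""
    attn = softmax(attn_scores)
    z = sum(a * v for a, v in zip(attn, values))
    return 1.0 / (1.0 + math.exp(-z))

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between two Bernoulli distributions."""
    return (p * math.log((p + eps) / (q + eps))
            + (1.0 - p) * math.log((1.0 - p + eps) / (1.0 - q + eps)))

def attention_vat_loss(attn_scores, values, epsilon=0.5, n_samples=64, seed=0):
    """Virtual adversarial loss on the attention scores (hypothetical sketch).

    No label appears anywhere: the current prediction p acts as a
    "virtual label", so this term can be computed on unlabeled data.
    The random search is a stand-in for VAT's power-iteration step.
    """
    rng = random.Random(seed)
    p = predict(attn_scores, values)  # virtual label: the model's own output
    worst = 0.0
    for _ in range(n_samples):
        d = [rng.gauss(0.0, 1.0) for _ in attn_scores]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        r = [epsilon * x / norm for x in d]  # perturbation with L2 norm epsilon
        q = predict([s + ri for s, ri in zip(attn_scores, r)], values)
        worst = max(worst, kl_bernoulli(p, q))
    return worst
```

In a semi-supervised setup, a scalar like this would be added to the supervised loss and can be evaluated on unlabeled sentences as well, which is the property that lets the paper's techniques extend AT for attention mechanisms beyond labeled data.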