Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals

Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney
Mar 2023
Abstract
Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation. Although counterfactual explanations are normally used to explain individual predictive-instances, we explore a novel use case in which groups of similar instances are explained in a collective fashion using "group counterfactuals" (e.g., to highlight a repeating pattern of illness in a group of patients). These group counterfactuals meet a human preference for coherent, broad explanations covering multiple events/instances. A novel, group-counterfactual algorithm is proposed to generate high-coverage explanations that are faithful to the to-be-explained model. This explanation strategy is also evaluated in a large, controlled user study (N=207), using objective (i.e., accuracy) and subjective (i.e., confidence, explanation satisfaction, and trust) psychological measures. The results show that group counterfactuals elicit modest but definite improvements in people's understanding of an AI system. The implications of these findings for counterfactual methods and for XAI are discussed.
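The core idea can be illustrated with a toy sketch. This is not the paper's algorithm: the model, feature names, and search strategy below are all hypothetical. It only shows what distinguishes a group counterfactual from per-instance ones: a single shared feature change that flips the model's prediction for every instance in the group at once.

```python
# Toy sketch (hypothetical model and search, NOT the paper's method): a
# "group counterfactual" applies ONE shared feature change to several
# similar instances so that all of them flip the model's prediction,
# instead of computing a separate counterfactual per instance.

def predict(x):
    # Hypothetical loan model: approve (1) if income + 2*credit_score > 10.
    return 1 if x[0] + 2 * x[1] > 10 else 0

# Three similar rejected applicants: (income, credit_score).
group = [(3.0, 2.5), (2.5, 3.0), (3.5, 2.0)]

def group_counterfactual(instances, feature, step=0.5, max_steps=20):
    """Grid-search a single delta on one feature that flips every instance."""
    for k in range(1, max_steps + 1):
        delta = k * step
        changed = [tuple(v + delta if i == feature else v
                         for i, v in enumerate(x))
                   for x in instances]
        if all(predict(c) == 1 for c in changed):
            return delta  # one shared change covers the whole group
    return None  # no single change of this size flips everyone

delta = group_counterfactual(group, feature=1)
print(delta)  # → 1.5: "if each applicant's credit_score were 1.5 higher,
              #         all three would have been approved"
```

The shared delta is what gives the explanation its coverage: one coherent statement ("raise credit_score by 1.5") explains all three rejections, which is the human preference for broad explanations the abstract describes. The paper's actual algorithm additionally optimises for faithfulness to the underlying model, which this brute-force sketch ignores.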