Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser
Dec 2022
Abstract
Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
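As an illustration of the kind of explainer comparison the abstract describes, the sketch below is not the authors' pipeline: the classifier, the synthetic features, and the Spearman rank-correlation comparison are placeholder assumptions chosen only to show how LIME and SHAP attributions for one prediction can be put side by side and measured for agreement.

# Illustrative sketch only -- not the paper's code. The model, the synthetic
# features, and the rank-correlation comparison are assumptions for demonstration.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for course interaction features (e.g., clicks, quiz scores).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]

# LIME: local surrogate weights for the positive ("pass") class.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(instance, model.predict_proba,
                                           num_features=X.shape[1])
lime_weights = np.zeros(X.shape[1])
for idx, weight in lime_exp.as_map()[1]:
    lime_weights[idx] = weight

# SHAP: kernel SHAP values for the same instance and the same class.
predict_pos = lambda data: model.predict_proba(data)[:, 1]
background = shap.sample(X, 50)
shap_explainer = shap.KernelExplainer(predict_pos, background)
shap_weights = shap_explainer.shap_values(instance.reshape(1, -1))[0]

# One possible agreement measure between the two attribution vectors.
rho, _ = spearmanr(lime_weights, shap_weights)
print(f"Spearman rank correlation between LIME and SHAP: {rho:.2f}")

A low correlation on many instances would indicate the kind of explainer disagreement the paper reports, though the actual distance measures used in the study are documented in the linked repository.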