Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Explanations
Yiheng Su, Juni Jessy Li, Matthew Lease
Nov 2023
Abstract
Can we preserve the accuracy of neural models while also providing faithful explanations? We present wrapper boxes, a general approach to generate faithful, example-based explanations for model predictions while maintaining predictive performance. After training a neural model as usual, its learned feature representation is fed to a classic, interpretable model, which performs the actual prediction. This simple strategy is surprisingly effective, with results largely comparable to those of the original neural model, as shown across three large pre-trained language models, two datasets of varying scale, four classic models, and four evaluation metrics. Moreover, because these classic models are interpretable by design, the subset of training examples that determines each classic model prediction can be shown directly to users.
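To make the pipeline concrete, below is a minimal sketch of the wrapper-box idea, assuming a BERT-style encoder as the neural model and k-nearest neighbors as the classic, interpretable model (one of several the paper could pair with the encoder). The dataset variables (`texts_train`, `labels_train`) and the query text are hypothetical placeholders, not the paper's benchmarks, and in practice the encoder would first be fine-tuned on the task as usual.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.neighbors import KNeighborsClassifier

# Pre-trained encoder whose learned feature representation we reuse.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(texts):
    """Map texts to the encoder's feature space via the [CLS] vector."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, dim)
    return hidden[:, 0, :].numpy()  # one [CLS] embedding per text

# Hypothetical toy training data standing in for a real labeled dataset.
texts_train = ["great movie", "terrible plot", "loved it", "boring and slow"]
labels_train = [1, 0, 1, 0]

# The classic model is fit on neural features and performs the actual prediction.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(embed(texts_train), labels_train)

# Predict, then surface the training examples that determined the prediction:
# these neighbors are the faithful, example-based explanation shown to users.
query = embed(["an enjoyable film"])
prediction = knn.predict(query)[0]
_, neighbor_idx = knn.kneighbors(query)
supporting_examples = [texts_train[i] for i in neighbor_idx[0]]
print(f"prediction={prediction}, supporting examples={supporting_examples}")
```

Because kNN predicts by majority vote over the retrieved neighbors, the explanation here is faithful by construction: the examples shown are exactly the ones that decided the label, not a post-hoc rationalization.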