Causal Explanations of Structural Causal Models

Matej Ze\v{c}evi\'c, Devendra Singh Dhami, Constantin A. Rothkopf, Kristian Kersting
Oct 2021
In explanatory interactive learning (XIL) the user queries the learner, then the learner explains its answer to the user, and finally the loop repeats. XIL is attractive for two reasons: (1) the learner becomes better and (2) the user's trust increases. For both reasons to hold, the learner's explanations must be useful to the user and the user must be allowed to ask useful questions. Ideally, both questions and explanations should be grounded in a causal model since they avoid spurious fallacies. Ultimately, we seem to seek a causal variant of XIL. The question part on the user's end we believe to be solved, since the user's mental model can provide the causal model. But how would the learner provide causal explanations? In this work we show that existing explanation methods are not guaranteed to be causal even when provided with a Structural Causal Model (SCM). Specifically, we use the popular, proclaimed causal explanation method CXPlain to illustrate how the generated explanations leave open the question of truly causal explanations. Thus, as a step towards causal XIL, we propose a solution to the lack of causal explanations. We solve this problem by deriving from first principles an explanation method that makes full use of a given SCM, which we refer to as SC$\textbf{E}$ ($\textbf{E}$ standing for explanation). Since SCEs make use of structural information, any causal graph learner can now provide human-readable explanations. We conduct several experiments, including a user study with 22 participants, to investigate the virtue of SCE as causal explanations of SCMs.
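To make the SCM formalism the abstract relies on concrete, here is a minimal sketch of a two-variable SCM and an intervention-based comparison. The chain $X \to Y$, the structural equations, and the coefficient are illustrative assumptions; this is not the paper's SCE method, only the standard SCM machinery it builds on.

```python
# Sketch of a Structural Causal Model (SCM), assuming the toy chain X -> Y
# with structural equations X := U_x and Y := 2*X + U_y (hypothetical example).
import random

def sample_scm(do_x=None, seed=0):
    """Sample (x, y) from the SCM; if do_x is given, apply do(X = do_x)."""
    rng = random.Random(seed)
    u_x = rng.gauss(0, 1)               # exogenous noise for X
    u_y = rng.gauss(0, 1)               # exogenous noise for Y
    x = u_x if do_x is None else do_x   # intervention replaces X's mechanism
    y = 2 * x + u_y                     # structural equation for Y
    return x, y

# A causal comparison contrasts outcomes under different interventions while
# holding the exogenous noise fixed (same seed, hence the same U_y).
_, y0 = sample_scm(do_x=0.0)
_, y1 = sample_scm(do_x=1.0)
effect = y1 - y0  # recovers the structural coefficient, 2.0, exactly
```

Because the noise terms are shared across both runs, the difference isolates the causal effect of the intervention rather than any observational correlation.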