Interpretable Machine Learning

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models. Source: [Assessing the Local Interpretability of Machine Learning Models](https://arxiv.org/abs/1902.03501)
Related topics: SHAP, Feature Importance, Interpretability, Explainable Artificial Intelligence, LIME, Counterfactual Explanation, Counterfactual Reasoning, Knowledge Graph Embeddings, Data Poisoning, Additive Models
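As a concrete illustration of one of the related topics listed above (feature importance), the sketch below estimates permutation feature importance with scikit-learn: each feature in a held-out set is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on that feature. The dataset and model here are illustrative choices, not drawn from the cited paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator with a score() method works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time in the held-out set and
# record the drop in accuracy; larger drops mean the model depends more on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features with mean and standard deviation
# of the importance estimates across repeats.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<30} "
          f"{result.importances_mean[idx]:.4f} +/- {result.importances_std[idx]:.4f}")
```

Unlike model-specific attributions such as SHAP or LIME, permutation importance is model-agnostic and gives a global, rather than per-prediction, view of feature influence.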

Key Researchers

| Researcher | Citations | Papers |
| --- | --- | --- |
| Geoffrey E. Hinton | 345,738 | 408 |
| Chris Sander | 204,271 | 778 |
| Klaus-Robert Müller | 83,524 | 798 |
| Alessandro Vespignani | 67,484 | 490 |
| David B. Matchar | 54,326 | 600 |
| Stefano de Gironcoli | 46,703 | 172 |
| Carlos Guestrin | 40,752 | 225 |
| Janis M. Taube | 40,088 | 240 |
| Mani Srivastava | 37,034 | 623 |
| Charu C. Aggarwal | 34,679 | 577 |