Local Interpretable Model-Agnostic Explanations (LIME)
LIME, or Local Interpretable Model-Agnostic Explanations, is an algorithm that can explain the predictions of any classifier or regressor in a faithful way by approximating it locally with an interpretable model. It modifies a single data sample by tweaking its feature values and observes the resulting impact on the output, acting as an 'explainer' for the prediction on that individual sample. The output of LIME is a set of explanations representing the contribution of each feature to the prediction for a single sample, which is a form of local interpretability. The interpretable models in LIME can be, for instance, linear regressions or decision trees, which are trained on small perturbations of the original input (e.g. adding noise, removing words, hiding parts of the image) to provide a good local approximation of the black-box model.
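To make the procedure concrete, below is a minimal from-scratch sketch of the LIME idea for tabular data. It is not the official `lime` package; the function name `lime_explain`, the Gaussian perturbation scale, and the kernel width are illustrative assumptions, chosen only to show the perturb-query-weight-fit loop described above.

```python
# Minimal LIME-style local explanation sketch (illustrative, not the official API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

def lime_explain(predict_proba, x, num_samples=1000, kernel_width=0.75):
    """Explain predict_proba at point x with a locally weighted linear surrogate."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance: sample points around x with Gaussian noise.
    X_pert = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y_pert = predict_proba(X_pert)[:, 1]  # probability of class 1
    # 3. Weight each sample by its proximity to x (exponential kernel on distance).
    dist = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)
    # The coefficients are the local contribution of each feature.
    return surrogate.coef_

# Black-box model to be explained (any model exposing predict_proba works).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
contributions = lime_explain(model.predict_proba, X[0])
for i, c in enumerate(contributions):
    print(f"feature {i}: {c:+.3f}")
```

The exponential proximity kernel and the linear surrogate mirror the choices in the original LIME paper; a decision tree could be substituted as the surrogate, and for text or images the perturbation step would instead remove words or hide superpixels.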
Related topics: SHAP, Explainable Artificial Intelligence, Feature Importance, Interpretability, Interpretable Machine Learning, Low-Light Image Enhancement, Counterfactual Explanation, Superpixels, Link Discovery, Text Classification
Important scholars
Ben Zhong Tang: 173765 citations, 2291 papers
Hyun-Chul Kim: 172231 citations, 4513 papers
Christian Szegedy: 102646 citations, 57 papers
Jimmy Ba: 83691 citations, 93 papers
Yu Huang: 83395 citations, 1548 papers
Philip S. Yu: 79752 citations, 1712 papers
Michael Randolph Garey: 72916 citations, 105 papers
Ted Belytschko: 72264 citations, 608 papers
David S. Johnson: 68809 citations, 193 papers
Fei Wang: 50622 citations, 1973 papers