It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems

D. Petkovic
Dec 2022
Abstract
We are witnessing the emergence of an AI economy and society in which AI technologies increasingly impact health care, business, transportation, and many aspects of everyday life. Many successes have been reported in which AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented in the scientific as well as the general press: accidents with self-driving cars; bias against people of color in healthcare, hiring, and face recognition systems; seemingly correct medical decisions later found to have been made for the wrong reasons; and so on. This has led to the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency, and safety. The challenges in delivering trustworthy AI systems have motivated intense research on explainable AI (XAI). The aim of XAI is to provide human-understandable information about how AI systems make their decisions. In this paper we first briefly summarize current XAI work, then challenge the recent argument that accuracy and explainability are mutually exclusive and that XAI is relevant only to deep learning. We then present our recommendations for the use of XAI in the full lifecycle of high-stakes trustworthy AI systems delivery, e.g. development, validation and certification, and trustworthy production and maintenance.
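The abstract's point that accuracy and explainability need not be traded off is easiest to see with a model-agnostic explanation method layered on top of an accurate black-box model. Below is a minimal illustrative sketch (not from the paper, and not the authors' method) using scikit-learn's permutation importance to attach human-readable feature attributions to a random forest classifier; the dataset and model choices are assumptions made purely for demonstration.

```python
# Minimal XAI sketch: permutation feature importance attributes a fitted
# model's held-out accuracy to individual input features, giving a
# human-readable summary of what the model's decisions depend on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator would work here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# large drops mark features the model genuinely relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Because the explanation is computed from the model's behavior rather than its internals, the same technique applies to deep networks, ensembles, or any other predictor, which is consistent with the paper's objection to framing XAI as a deep-learning-only concern.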