It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems

D. Petkovic
Dec 2022
We are witnessing the emergence of an AI economy and society in which AI technologies increasingly impact health care, business, transportation and many aspects of everyday life. Many successes have been reported in which AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented not only in the scientific literature but also in the general press: accidents with self-driving cars, biases in healthcare, hiring and face recognition systems affecting people of color, seemingly correct medical decisions later found to have been made for the wrong reasons, and so on. This has led to the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI, which must provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency and safety. The challenges of delivering trustworthy AI systems have motivated intense research on explainable AI (XAI), whose aim is to provide human-understandable information about how AI systems make their decisions. In this paper we first briefly summarize current XAI work, then challenge the recent arguments that accuracy and explainability are mutually exclusive and that the debate concerns only deep learning. We then present our recommendations for the use of XAI across the full lifecycle of high-stakes trustworthy AI systems, e.g. development, validation and certification, and trustworthy production and maintenance.