
Explainable AI
XAI in Autonomous Driving
- The goal of explainable AI (XAI) is to make the perception and decisions of the autonomous vehicle visible and understandable to the passenger.
- Otherwise, passengers would entrust their lives to a “black box”, not knowing what is going on or whether the autonomous vehicle is, for example, perceiving and reacting to driving situations correctly.
- Different feedback modalities can be used to design for AI transparency in autonomous driving (see Feedback Modality Cards).
XAI Definition
- XAI is about finding ways to explain why an AI algorithm predicts a particular outcome [1, 2].
- For AI developers, XAI supports the evaluation and improvement of AI models; for end users, it clarifies and improves the interaction with an AI system.
- Individual users have specific requirements and preferences for explanations under various circumstances, which a system should take into consideration [3].
- Furthermore, numerous design guidelines for AI systems strongly recommend transparency [5, 6]. For instance, the European Commission's guidelines highlight that explanations from AI systems should address stakeholder concerns [5]. XAI supports this transparency in human-AI interaction in part [4].
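One simple, model-agnostic way to probe why a model predicts a particular outcome, as described above, is permutation importance: shuffle one input feature and measure how much the model's error grows. The sketch below is purely illustrative; the toy "braking urgency" model, its weights, and the synthetic data are all invented assumptions, not part of any real autonomous-driving system.

```python
import random

# Hypothetical stand-in for a black-box driving model (for illustration only):
# braking urgency as a weighted sum of ego speed and obstacle distance.
def model(distance, speed):
    return 2.0 * speed - 1.0 * distance

# Synthetic dataset of (distance_to_obstacle, ego_speed) samples.
random.seed(0)
data = [(random.uniform(0, 30), random.uniform(0, 30)) for _ in range(200)]
labels = [model(d, s) for d, s in data]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature_index):
    """Model-agnostic importance: error increase after shuffling one feature."""
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    preds = []
    for row, value in zip(data, shuffled):
        inputs = list(row)
        inputs[feature_index] = value
        preds.append(model(*inputs))
    # The unperturbed model reproduces the labels exactly here, so the
    # resulting MSE itself serves as the importance score.
    return mse(preds, labels)

# Speed carries twice the weight of distance in this toy model, so
# shuffling it degrades the predictions more.
print(permutation_importance(1), permutation_importance(0))
```

An explanation derived this way ("speed mattered more than distance for this decision") is the kind of information a passenger-facing interface could then surface through one of the feedback modalities mentioned above.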
Sources
[1] - Adadi, Amina, and Mohammed Berrada. "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)." IEEE Access 6 (2018): 52138-52160.
[2] - Došilović, Filip Karlo, Mario Brčić, and Nikica Hlupić. "Explainable artificial intelligence: A survey." 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 2018.
[3] - Miller, Tim. "Explanation in artificial intelligence: Insights from the social sciences." Artificial intelligence 267 (2019): 1-38.
[4] - Hois, Joana, Dimitra Theofanou-Fuelbier, and Alischa Janine Junk. "How to achieve explainability and transparency in human ai interaction." International Conference on Human-Computer Interaction. Springer, Cham, 2019.
[5] - Ala-Pietilä, P., et al. Ethics guidelines for trustworthy AI. Technical report, European Commission–AI HLEG, B-1049 Brussels, 2019.
[6] - Kazim, Emre, and Adriano Koshiyama. "Explaining decisions made with AI: a review of the co-badged guidance by the ICO and the Turing Institute." Available at SSRN 3656269 (2020).