6 December 2023 | 14h30 - 16h00 (Lisbon Time Zone) | Online
Seminar 7: The relationship between a clinician and a patient is built on trust, which ultimately rests on the transparency of medical methods. However, as neural networks (NNs) are employed in medicine, they prove problematic because NNs are opaque black-box systems. An explanatory gap is thus generated, undermining the established trust between clinicians and patients. In this paper, I argue that to cross this gap, what is necessary is not only explainable AI (XAI) but also explanatory AI. A problem with current XAI methods is that they can, at best, provide causal How-explanations, which only generate observations and do not get at the underlying cause of an event. What is required instead is to understand the process of Why-explanations, which often involve counterfactual reasoning based on more basic observations. To do that, it is necessary to engage with clinical practice itself in order to formulate a standard for explanations. I will argue that, when diagnosing, doctors often engage in selective abductive reasoning, hypothesising what might be the case based on the available evidence. A hypothesis-driven model of explanatory AI is therefore required, which could be used to design an AI interpreter meant to provide more illuminating explanations of AI behaviour. I further argue that we must exploit the currently rising paradigm of transformer architectures, since transformer NNs have proven capable of the necessary abductive reasoning, alongside the multi-modality and context-sensitivity useful for human-interpretable explanations.
Short Bio: Jaroslav Malík is a PhD candidate at the University of Hradec Králové. His philosophical work explores the intersection of philosophy of mind, philosophy of technology, and philosophical anthropology. He is currently concentrating on topics related to the ethics of AI, ranging from XAI to the alignment problem.
rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars
https://trustaimedicine.weebly.com/rtaim-seminars.html
Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – FIL/00502
Fundação para a Ciência e a Tecnologia (FCT)