13 September 2023 | 10h00 - 11h30 (Lisbon Time Zone) | Online
Seminar 4
Abstract: Research on healthcare applications of machine learning (ML), a type of artificial intelligence (AI), has proliferated across clinical processes such as the diagnosis and screening of diseases, the allocation of healthcare resources, and the development of personalised treatments. Given the increasingly complex processes behind ML systems, the lack of explainability has been considered a major barrier to their adoption in healthcare. This presentation reports the preliminary findings of a qualitative investigation into the perspectives of professional stakeholders (e.g. clinicians, data scientists, entrepreneurs and regulators) working on ML algorithms for diagnosis and screening. All participants agreed on the qualities that diagnosis should have: it should proceed in a way that enables human oversight, promotes critical thinking among clinicians, and ensures patient safety. However, participants were divided on whether explanation was an important means to achieve these ends. Broadly, some participants proposed ‘Outcome-assured’ diagnostic practices, while others proposed ‘Explanation-assured’ diagnostic practices, a distinction that applied with or without the use of AI. The two approaches differed in the significance they attributed to explanation partly because they conceptualised explanation differently: not just what an explanation is, but also the level of explanation required and who might be owed one.
Short bio: Dr Yves Saint James Aquino is a postdoctoral research fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health and Society, University of Wollongong (Australia). His research interests include the philosophy of medicine, bioethics and the ethics of artificial intelligence. Twitter: @yvessj_aquino
rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars
https://trustaimedicine.weebly.com/rtaim-seminars.html
Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – FIL/00502
Fundação para a Ciência e a Tecnologia (FCT)