11 October 2023 | 14h00 - 15h30 (Lisbon Time Zone) | Online
Seminar 5

In 1995, Ian Hacking published his influential paper on the looping effects of human kinds. Hacking argued that, unlike animals, humans can engage with any characterization of themselves as a particular “kind” and thereby change the nature of that kind, making human kinds a dynamic phenomenon. In previous work, I have argued that in a medical context this looping effect can be a primarily epistemic phenomenon, since diagnostic profiles change as more is learned about a particular medical condition. In this talk, I will explore the implications of this idea if A.I. is implemented as a diagnostic tool.

If A.I. is used to assist diagnosis, it could generate epistemic and diagnostic looping effects. These may benefit patients, but they also carry risks, which I will identify during the talk. Using A.I. for diagnosis and research may allow clinicians to build a broader and more dynamic diagnostic profile, potentially resulting in more frequent or earlier diagnosis of certain medical conditions. If machine learning could be deployed as a diagnostic and research tool at the same time, this might yield a broader understanding of the full symptom profile at the point of diagnosis.

However, diagnostic A.I. also carries potential risks. Medical A.I. might increase, rather than decrease, diagnostic barriers. Looping effects in medical A.I. could exacerbate a pre-existing effect whereby atypical presentation of some medical conditions is a barrier to accurate and timely diagnosis. This might create a different kind of looping effect, in which diagnosis reinforces typical presentation as the only diagnostic profile for a medical condition. There are also concerns that confounding factors may become more problematic if clinician judgement is circumvented. Factors such as co-morbid conditions and alternative explanations already make it difficult for doctors to attribute a symptom causally to a disease, and medical A.I. might either help with this problem or exacerbate it.

I answer these concerns by arguing that the goal of artificial intelligence in the medical context ought to be learning rather than automation. With an epistemic goal at the heart of our deployment of A.I., we may be able to build a diagnostic tool that allows for both research into disease presentation and faster, more accurate diagnosis.
Short bio: Georgina H. Mills is a PhD researcher at Tilburg University working in Philosophy of Science. She has also written about the philosophy of emotions, the philosophy of medicine, and popular culture. Her PhD research focuses on personality as a social and scientific concept, but her background is primarily in bioethics and philosophy of medicine, and she continues to work on those subjects outside of the PhD.
rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars
https://trustaimedicine.weebly.com/rtaim-seminars.html
Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – FIL/00502
Fundação para a Ciência e a Tecnologia (FCT)