AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians’ and midwives’ perspectives on integrating AI-driven CTG into clinical decision making

BMC Medical Ethics 25 (1):1-11 (2024)

Abstract

Background

Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as a case study the use of AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on the perspectives of obstetricians and midwives regarding the ethical and trust-related issues of incorporating AI-driven tools into their practice, this paper explores the conditions that AI-driven CTG must fulfill for clinicians to feel justified in incorporating this assistive technology into their decision-making processes regarding interventions in labor.

Methods

This study is based on semi-structured interviews conducted online with eight obstetricians and five midwives based in England. Participants were asked about their current processes for deciding when to intervene in labor, how AI-driven CTG might enhance or disrupt those processes, and what it would take for them to trust this kind of technology. Interviews were transcribed verbatim and analyzed using thematic analysis. NVivo software was used to organize recurring thematic codes and identify the issues that mattered most to participants. Topics and themes repeated across interviews form the basis of the analysis and conclusions of this paper.

Results

Four major themes emerged from our interviews with obstetricians and midwives regarding the conditions that AI-driven CTG must fulfill: (1) the importance of accurate and efficient risk assessments; (2) the capacity for personalization and individualized medicine; (3) the insignificance of the type of institution that develops the technology; and (4) the need for transparency in the development process.

Conclusions

Accuracy, efficiency, the capacity for personalization, transparency, and clear evidence that the technology can improve outcomes are the conditions that clinicians deem necessary for AI-DSS to meet in order to be considered reliable and therefore worthy of incorporation into the decision-making process. Importantly, healthcare professionals considered themselves the epistemic authorities in the clinical context and the bearers of responsibility for delivering appropriate care. What mattered to them, therefore, was being able to evaluate the reliability of AI-DSS on their own terms and to have confidence in implementing these systems in their practice.


Similar books and articles

Towards a Design Science of Ethical Decision Support.Kieran Mathieson - 2007 - Journal of Business Ethics 76 (3):269-292.
Shared decision-making and patient autonomy.Lars Sandman & Christian Munthe - 2009 - Theoretical Medicine and Bioethics 30 (4):289-310.
Usage Degree of the Capabilities of DSS in Al-Aqsa University of Gaza.Mazen J. Al-Shobaki & Samy S. Abu-Naser - 2017 - International Journal of Engineering and Information Systems (IJEAIS) 1 (2):33-47.
Clinical judgment and bioethics: The decision making link.Richard A. Wright - 1991 - Journal of Medicine and Philosophy 16 (1):71-91.

