Editorial

Artificial intelligence in healthcare: Opportunities come with landmines

In the ever-evolving landscape of healthcare, the application of artificial intelligence (AI) to breast cancer screening and the transformative potential of natural language processing (NLP) in ensuring patient safety stand as a testament to ground-breaking progress.1 2 The integration of AI technologies in radiology is reshaping diagnostic precision, while NLP’s capacity to decipher and enhance safety protocols heralds a new era in healthcare innovation.3 4

Contrary to common assumptions, the presence of AI does not necessarily guarantee improved efficiency or accuracy in interpreting medical images. It is concerning that AI’s flagging of potential errors can paradoxically lead some radiologists to make more mistakes and spend more time analysing images, highlighting the dangers of developing AI systems in isolation.5 This underscores the crucial need to design collaborative human-AI systems rather than standalone AI solutions, as the full extent of AI’s influence on human behaviour remains unpredictable. There is also a critical concern that patient safety is a matter of health equity: disparities in medical errors and treatment injuries are exacerbated by social determinants of health. This calls for a holistic approach to healthcare delivery that prioritises equity and inclusivity, ensuring that all patients receive the highest standard of care irrespective of their social circumstances.

The two ‘editor’s choice’ articles highlight how crucial it is to embrace AI in breast cancer screening and NLP in enhancing patient safety within healthcare’s dynamic landscape. Högberg et al6 offer an insightful exploration of the potential and the challenges associated with AI in breast radiology. Their survey of Swedish breast radiologists’ perspectives on AI in mammography screening revealed an overwhelmingly positive attitude towards its incorporation, highlighting its potential to enhance the efficiency of diagnostic processes. Alongside this optimism, however, the study uncovered a labyrinth of uncertainties and diverse viewpoints. Concerns loomed over potential risks ranging from medical outcomes to the reshaping of working conditions, as well as crucial uncertainties regarding the assignment of responsibility in AI-mediated medical decision-making.7 The complexity of delineating accountability between AI systems, radiologists and healthcare providers emerged as a pivotal issue demanding resolution.

Addressing these intricacies is paramount for harnessing AI’s potential while upholding the integrity of patient care and professional practice in the evolving landscape of breast radiology.8–10 Most professionals favoured AI as a supportive tool, but opinions diverged on its optimal integration into the screening workflow. The authors delineated varied views on AI’s impact within the profession, stressing the absence of consensus on the extent of change and the consequent transformation of breast radiologists’ roles.6 Collaboration between human radiologists and AI assistance, widely expected to reshape the field, is still under investigation. While AI tools show promise, biases in how humans use AI limit the potential gains: in one experiment, radiologists performed better when they either relied solely on the AI or worked entirely independently than when they attempted to collaborate with it.5 Optimal delegation policies have accordingly been proposed that weigh time costs against the suboptimal use of AI information, as illustrated in the sketch below. Future research should explore AI-specific training for radiologists and the organisational factors influencing human–AI collaboration. A pressing need exists to address multifaceted challenges, particularly in establishing clear ethical, legal and social frameworks governing AI integration in radiology.
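
To make the delegation idea concrete, the following is a minimal, hypothetical sketch in Python: each screening case is routed to whichever reading arrangement (AI alone, radiologist alone, or radiologist with AI) offers the highest expected accuracy once reading time is priced in. The arrangements, accuracy figures and cost weight are illustrative assumptions, not values estimated by Agarwal et al.5

```python
# Minimal sketch of a case-level delegation rule (hypothetical values, not the
# policy estimated in the cited study): each case is routed to whichever
# reading arrangement maximises expected accuracy net of a time cost.

from dataclasses import dataclass

@dataclass
class Arrangement:
    name: str
    expected_accuracy: float  # probability of a correct call for this case
    minutes: float            # expected reading time for this case

def delegate(arrangements, cost_per_minute=0.005):
    """Pick the arrangement with the highest accuracy net of time cost."""
    def net_value(a):
        return a.expected_accuracy - cost_per_minute * a.minutes
    return max(arrangements, key=net_value)

# Illustrative case: AI-alone wins once reading time is priced in, even though
# the collaborative arrangement has a slightly higher raw accuracy.
options = [
    Arrangement("ai_alone", expected_accuracy=0.92, minutes=0.1),
    Arrangement("radiologist_alone", expected_accuracy=0.89, minutes=3.0),
    Arrangement("radiologist_with_ai", expected_accuracy=0.93, minutes=4.5),
]
print(delegate(options).name)  # -> "ai_alone"
```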

In the second study, Tabaie et al11 uncovered crucial contributing factors from patient safety event reports, showcasing the transformative potential of NLP algorithms for healthcare insights. The study identified and categorised contributing factors within a decade’s worth of self-reported patient safety events from a multihospital healthcare system. These contributing factors, pivotal in precipitating or permitting patient safety incidents, often remain concealed within the intricate narratives of the reports. The authors introduced a method that leverages the unstructured text of patient safety event reports to extract ‘information-rich sentences’, unveiling hidden contributing factors and refining their categorisation with NLP.11 Automating the identification and categorisation of contributing factors empowers healthcare systems to address safety concerns proactively, fostering quicker responses and continuous improvement. However, the study’s reliance on data from a single health system prompts questions about its generalisability. As healthcare increasingly embraces data-driven decision-making, harnessing NLP emerges as a pivotal strategy for safeguarding patient well-being.12–14 The findings call for further exploration and adoption of NLP-driven approaches to enhance patient safety initiatives globally.
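
To illustrate the general approach (not the authors’ actual pipeline), the sketch below splits free-text safety reports into sentences, keeps those containing candidate contributing-factor cues, and classifies the retained sentences into broad categories with a simple bag-of-words model. The cue words, category labels and training sentences are all hypothetical.

```python
# Minimal sketch of the general idea, not the pipeline of Tabaie et al:
# extract candidate "information-rich" sentences from free-text safety
# reports, then classify them into broad contributing-factor categories.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical cue phrases that often signal a contributing factor.
CUES = {"because", "due to", "short-staffed", "unfamiliar", "interruption", "mislabelled"}

def informative_sentences(report: str):
    """Keep sentences that contain at least one contributing-factor cue."""
    sentences = re.split(r"(?<=[.!?])\s+", report.strip())
    return [s for s in sentences if any(cue in s.lower() for cue in CUES)]

# Tiny hypothetical training set of labelled sentences.
train_sentences = [
    "The dose was drawn up incorrectly because the unit was short-staffed.",
    "The nurse was unfamiliar with the new infusion pump.",
    "The specimen was mislabelled due to an interruption at the bedside.",
    "The order was delayed because the pharmacy was short-staffed.",
]
train_labels = ["staffing", "training", "workflow", "staffing"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_sentences, train_labels)

report = ("Patient received the wrong syringe. The error occurred because the "
          "unit was short-staffed overnight. Family was notified.")
for sentence in informative_sentences(report):
    print(sentence, "->", classifier.predict([sentence])[0])
```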

While both studies mark significant strides in healthcare, certain considerations arise.6 11 The study on AI integration in breast radiology highlights uncertainties and the need for collaborative efforts to establish clear governance frameworks. The retrospective nature of the NLP study calls for real-time validation and raises concerns about generalisability beyond a single healthcare system.

Nonetheless, these studies underscore the transformative potential of technology in reshaping healthcare paradigms. Embracing AI in breast cancer screening and leveraging NLP for patient safety initiatives open avenues for proactive, data-driven decision-making. Further evaluation, exploration and widespread adoption of these technologies throughout their life cycle are pivotal to promoting patient safety and elevating healthcare quality, with the integration of fairness and equity remaining a central focus for healthcare globally.15 16

  • X: @MITCriticalData

  • Contributors: UI drafted the initial manuscript. Supervision was provided by Y-HEH, LAC and Y-CL. All authors approved the final manuscript as submitted and agreed to be accountable for all aspects of the work.

  • Funding: LAC is funded by the National Institutes of Health through R01 EB017205, DS-I Africa U54 TW012043-01 and Bridge2AI OT2OD032701, and the National Science Foundation through ITEST #2148451.

  • Competing interests: None declared.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

Ethics statements

Patient consent for publication: Not applicable.

  1. Dembrower K, Crippa A, Colón E, et al. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study. Lancet Digit Health 2023; 5:e703–11.
  2. Boxley C, Fujimoto M, Ratwani RM, et al. A text mining approach to categorize patient safety event reports by medication error type. Sci Rep 2023; 13.
  3. Zheng C, Duffy J, Liu I-LA, et al. Identifying cases of shoulder injury related to vaccine administration (SIRVA) in the United States: development and validation of a natural language processing method. JMIR Public Health Surveill 2022; 8.
  4. Bitencourt A, Daimiel Naranjo I, Lo Gullo R, et al. AI-enhanced breast imaging: where are we and where are we heading? Eur J Radiol 2021; 142:109882.
  5. Agarwal N, Moehring A, Rajpurkar P, et al. Combining human expertise with artificial intelligence: experimental evidence from radiology. National Bureau of Economic Research 2023.
  6. Högberg C, Larsson S, Lång K, et al. Anticipating artificial intelligence in mammography screening: views of Swedish breast radiologists. BMJ Health Care Inform 2023; 30.
  7. Lokaj B, Pugliese M-T, Kinkel K, et al. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096–109.
  8. Hickman SE, Baxter GC, Gilbert FJ, et al. Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations. Br J Cancer 2021; 125:15–22.
  9. Eriksson M, Román M, Gräwingholt A, et al. European validation of an image-derived AI-based short-term risk model for individualized breast cancer screening—a nested case-control study. The Lancet Regional Health – Europe.
  10. Gichoya JW, Thomas K, Celi LA, et al. AI pitfalls and what not to do: mitigating bias in AI. Br J Radiol 2023; 96.
  11. Tabaie A, Sengupta S, Pruitt ZM, et al. A natural language processing approach to categorise contributing factors from patient safety event reports. BMJ Health Care Inform 2023; 30.
  12. Fong A. Realizing the power of text mining and natural language processing for analyzing patient safety event narratives: the challenges and path forward. J Patient Saf 2021; 17:e834–6.
  13. Melton GB, Hripcsak G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005; 12:448–57.
  14. Ozonoff A, Milliren CE, Fournier K, et al. Electronic surveillance of patient safety events using natural language processing. Health Informatics J 2022; 28.
  15. Wawira Gichoya J, McCoy LG, Celi LA, et al. Equity in essence: a call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inform 2021; 28.
  16. Shortliffe EH. Role of evaluation throughout the life cycle of biomedical and health AI applications. BMJ Health Care Inform 2023; 30.

  • Received: 3 April 2024
  • Accepted: 2 May 2024
  • First published: 5 June 2024