
Artificial Intelligence in Emergency Medicine: Viewpoint of Current Applications and Foreseeable Opportunities and Challenges

5/23/2023 • Journal of Medical Internet Research • License: CC BY 4.0
Gabrielle Chenais (Bordeaux Population Health Center, INSERM U1219, Bordeaux, France) ; Emmanuel Lagarde (Bordeaux Population Health Center, INSERM U1219, Bordeaux, France) ; Cédric Gil-Jardiné (Bordeaux Population Health Center, INSERM U1219, Bordeaux, France)

Abstract

Emergency medicine and its services have reached a breaking point during the COVID-19 pandemic. The pandemic has exposed the failings of a system that must be rethought, and novel approaches need to be considered. Artificial intelligence (AI) has matured to the point where it is poised to fundamentally transform health care, and applications within the emergency field are particularly promising. In this viewpoint, we first attempt to depict the landscape of AI-based applications currently in use in the daily emergency field. We review the existing AI systems; their algorithms; and their derivation, validation, and impact studies. We also propose future directions and perspectives. Second, we examine the specific ethical and risk considerations raised by the use of AI in the emergency field. Emergency departments (EDs) and related services such as intensive care units and emergency medical dispatch (EMD) have recently been in the spotlight owing to the COVID-19 pandemic. The fragility of the emergency system has been exposed by overcrowded services, extensive waiting times, and exhausted staff struggling to respond to exceptional situations. AI techniques have already been shown to be promising for improving diagnosis, imaging interpretation, triage, and medical decision-making within an ED setting. This viewpoint reviews current AI applications in emergency medicine, discusses the transformer architecture and advances in natural language processing, examines AI applications for documentation and public health surveillance, and addresses ethical considerations for trustworthy AI deployment in emergency settings.

Clinical implications

Comprehensive viewpoint article examining AI applications in emergency medicine settings, including emergency departments (EDs), intensive care units, and emergency medical dispatch (EMD).

- The COVID-19 pandemic exposed the fragility of emergency systems through overcrowding, long waiting times, and staff exhaustion, highlighting the need for innovative approaches.
- AI demonstrates promise for improving diagnosis, imaging interpretation, triage, and medical decision-making in ED settings.
- Emergency medical dispatch: the Corti framework combines automatic speech recognition (ASR) with out-of-hospital cardiac arrest (OHCA) detection models; Corti's end-to-end model recognized OHCA faster than human dispatchers.
- Clinical documentation burden is significant: healthcare professionals spend up to 50% of their time documenting in EHRs, and documentation tasks induce poor, inconsistent data that may impact care quality. AI offers improvement levers such as autocompletion combined with automatic annotation using clinical concept labels.
- Public health surveillance: automatic signal extraction from EHRs would enable real-time monitoring and ensure responsive surveillance systems. Transformer-based NLP models can bypass the difficulties of extracting fine-grained, standardized data from free-text entries in the ED and EMD.
- Ethical framework: trustworthy AI must be safe and fair with managed biases; transparent and accountable; explainable and interpretable; protective of human autonomy; and privacy-enhanced.
- Implementation challenges require a holistic sociotechnical approach with participatory design and multistakeholder engagement, including end users from diverse disciplines, expertise, backgrounds, and cultures.
- Critical appraisal is needed: independent evaluation by objective entities during the development and use phases should assess whether AI clinical solutions improve patient outcomes.
AI systems used by healthcare professionals should not cause physical or psychological harm, or endanger human life, health, property, or the environment, under their defined conditions of use.
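As a minimal illustration of the autocompletion-by-annotation idea described above, the sketch below tags a free-text ED note with clinical concept labels. A simple dictionary lookup stands in for the transformer-based NLP model the article envisions; the lexicon, label scheme, and function names are hypothetical, not from the article or any real system.

```python
# Illustrative sketch only: a dictionary lookup stands in for a transformer
# NER model; the concept labels below are hypothetical examples.

# Hypothetical mapping from surface phrases to clinical concept labels.
CONCEPT_LEXICON = {
    "chest pain": "SYMPTOM/CHEST_PAIN",
    "shortness of breath": "SYMPTOM/DYSPNEA",
    "cardiac arrest": "EVENT/OHCA",
}

def annotate(note: str) -> list[tuple[str, str]]:
    """Return (matched phrase, concept label) pairs found in a free-text note."""
    note_lower = note.lower()
    hits = []
    for phrase, label in CONCEPT_LEXICON.items():
        if phrase in note_lower:
            hits.append((phrase, label))
    return hits

print(annotate("Pt presents with chest pain and shortness of breath."))
# → [('chest pain', 'SYMPTOM/CHEST_PAIN'), ('shortness of breath', 'SYMPTOM/DYSPNEA')]
```

In a production system, the lookup would be replaced by a fine-tuned transformer tagger, and the extracted labels could both pre-fill structured EHR fields (autocompletion) and feed real-time surveillance pipelines.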
