
A Method to Automate the Discharge Summary Hospital Course for Neurology Patients

11/17/2023 • Journal of the American Medical Informatics Association • License: Oxford University Press Standard Journals Publication Model
Vince C. Hartman (Cornell Tech, New York, NY); Sanika S. Bapat (Cornell Tech, New York, NY); Mark G. Weiner (Department of Medicine, Weill Cornell Medicine, New York, NY); Babak B. Navi (Department of Neurology and Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY); Evan T. Sholle (Department of Population Health, Weill Cornell Medicine, New York, NY); Thomas R. Campion Jr (Department of Population Health, Weill Cornell Medicine, New York, NY)

Abstract

Objective: Generation of automated clinical notes has been posited as a strategy to mitigate physician burnout. In particular, an automated narrative summary of a patient's hospital stay could supplement the hospital course section of the discharge summary that inpatient physicians document in electronic health record (EHR) systems. In the current study, we developed and evaluated an automated method for summarizing the hospital course section using encoder-decoder sequence-to-sequence transformer models. Materials and Methods: We fine-tuned BERT and BART models, optimized them for factuality through constrained beam search, and trained and tested them using EHR data from patients admitted to the neurology unit of an academic medical center. Results: The approach demonstrated good ROUGE scores, with a ROUGE-2 (R-2) of 13.76. In a blind evaluation, 2 board-certified physicians rated 62% of the automated summaries as meeting the standard of care, which suggests the method may be useful clinically. Discussion and Conclusion: To our knowledge, this study is among the first to demonstrate an automated method for generating a discharge summary hospital course that approaches the quality of what a physician would write.
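
The summary above does not include the study's implementation code; as an illustration only, the following Python sketch shows how an encoder-decoder model such as BART can be decoded with constrained beam search so that selected clinical phrases are forced into the generated hospital course. The checkpoint name, the facts_to_keep argument, and all generation parameters are assumptions for the sketch (using the Hugging Face transformers API), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): abstractive summarization with a
# BART checkpoint, using constrained beam search so that selected clinical
# phrases are guaranteed to appear in the generated hospital course.
from transformers import BartForConditionalGeneration, BartTokenizer

# Hypothetical checkpoint; in practice this would be a model fine-tuned on
# (progress notes -> hospital course) pairs from the EHR.
MODEL_NAME = "facebook/bart-large-cnn"

tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

def summarize(notes_text: str, facts_to_keep: list[str]) -> str:
    """Generate a hospital-course summary, forcing key clinical phrases
    (e.g., the admitting diagnosis) into the output via constrained decoding."""
    inputs = tokenizer(notes_text, truncation=True, max_length=1024,
                       return_tensors="pt")

    # Constrained beam search: each phrase in force_words_ids must appear
    # somewhere in the generated sequence, one way to push the decoder
    # toward factual, source-grounded output.
    force_words_ids = [
        tokenizer(fact, add_special_tokens=False).input_ids
        for fact in facts_to_keep
    ]

    output_ids = model.generate(
        **inputs,
        num_beams=4,
        max_length=256,
        force_words_ids=force_words_ids,
        no_repeat_ngram_size=3,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example call with illustrative inputs.
print(summarize(
    "Patient admitted with acute ischemic stroke. ... daily progress notes ...",
    facts_to_keep=["acute ischemic stroke"],
))
```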

Clinical implications

Development and evaluation study conducted at NewYork-Presbyterian/Weill Cornell Medical Center, a 2,600-bed quaternary-care teaching hospital, using institutional repository data from the inpatient neurology unit. The dataset comprised 6,600 hospital admissions from 5,000 unique patients (2010-2020). The work builds on a prior AMIA 2022 feasibility study that used MIMIC-III ICU data; the current study expanded to full hospitalizations of neurology patients and added a comprehensive evaluation, including physician quality assessment.

Key findings: a ROUGE-2 score of 13.76, indicating good textual overlap with reference summaries. In a blind evaluation by 2 board-certified physicians, 62% of automated summaries were rated as meeting the standard of care. The average quality rating was 6.52/10 for AI-generated summaries versus 8.16/10 for physician-written summaries, and automated summaries were shorter than physician-written ones by an average of 170 words. The study used a "day-to-day approach," segmenting notes by clinical day, to overcome transformer limitations with long-form documents, and demonstrated clinical validity in neurology, a specialty with higher clinical complexity than general medicine or ICU-only populations. The study is among the first to demonstrate automated discharge summary generation approaching physician-level quality.

Context: physicians spend approximately 2 hours in the EHR for every 1 hour of patient care, and discharge summaries, while crucial for transitions of care, can be delayed, increasing the risks of rehospitalization and medication errors. Previous automated summarization efforts were limited in scope or lacked comprehensive evaluation; the current study addresses these gaps with real-world neurology EHR data, state-of-the-art benchmarks, and physician expert evaluation. The results suggest potential for a semi-automated workflow in which physicians review and correct AI-generated summaries.
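
The "day-to-day approach" is not spelled out in this summary; the sketch below illustrates one plausible reading, assuming notes are grouped by clinical day, each day's notes are summarized independently to stay within the model's input limit, and the per-day summaries are concatenated. The data layout, the summarize_fn argument (for example, the summarizer sketched under the Abstract), and the use of the rouge-score package for ROUGE-2 are assumptions, not the study's published implementation.

```python
# Illustrative sketch only: a day-by-day summarization pipeline plus
# ROUGE-2 scoring; names and data layout are assumptions, not the authors' code.
from collections import defaultdict
from typing import Callable

from rouge_score import rouge_scorer


def day_to_day_summary(notes: list[dict],
                       summarize_fn: Callable[[str], str]) -> str:
    """notes: [{"date": "2020-01-01", "text": "..."}, ...]
    Summarize each clinical day separately so every model input stays
    within the transformer's token limit, then join the day summaries."""
    notes_by_day = defaultdict(list)
    for note in notes:
        notes_by_day[note["date"]].append(note["text"])

    day_summaries = [
        summarize_fn("\n".join(notes_by_day[date]))
        for date in sorted(notes_by_day)
    ]
    return " ".join(day_summaries)


def rouge2_f1(reference: str, generated: str) -> float:
    """ROUGE-2 F1 (bigram overlap) of the generated hospital course
    against the physician-written reference."""
    scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
    return scorer.score(reference, generated)["rouge2"].fmeasure
```

ROUGE scores are commonly reported as percentages, so the paper's ROUGE-2 of 13.76 corresponds to roughly 0.14 on the 0-1 scale returned here.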
