An artificial intelligence (AI) program reads physicians’ notes to estimate patients’ risk of death, length of hospital stay, and other important measures of patient health. The tool, described Wednesday in Nature, is being used in hospitals to predict the chances that a discharged patient will soon be readmitted.
Computers typically work best with structured, standardized data, whereas physicians write in free-form, individualized language that reflects their thought process. A new type of AI, called a large language model (LLM), can learn directly from such raw text without requiring specially formatted data. LLMs use algorithms to predict the most likely word to fill a gap in a sentence, based on how real people actually use language. The more text the model is trained on, the more accurate its predictions become.
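The fill-in-the-word idea can be illustrated with a toy frequency model (a deliberate simplification, not NYUTron’s actual neural architecture): count how often each word follows a given context in training text, then predict the most frequent one.

```python
from collections import Counter, defaultdict

# Toy illustration of word prediction: learn which word most often
# follows a two-word context in a tiny made-up "clinical notes" corpus.
# (Real LLMs use neural networks over billions of words, not counts.)
corpus = (
    "patient was discharged home in stable condition . "
    "patient was admitted for chest pain . "
    "patient was discharged home with instructions . "
    "patient was discharged home in stable condition ."
).split()

# Count next-word frequencies for every two-word context.
next_word = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(a, b)][c] += 1

def predict(context):
    """Return the most frequent word seen after `context`, or None."""
    counts = next_word[tuple(context)]
    return counts.most_common(1)[0][0] if counts else None

print(predict(["was", "discharged"]))  # -> "home"
```

With more training text, such predictions sharpen, which is the intuition behind training NYUTron on millions of real clinical notes.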
The researchers designed an LLM called NYUTron to use unaltered text from electronic health records to make assessments about patient health status. They trained NYUTron on millions of clinical notes drawn from the electronic health records of 336,000 people who had received care within the NYU Langone hospital system between January 2011 and May 2020. The resulting 4.1-billion-word corpus included radiology reports, patient progress notes, and discharge instructions written by doctors. Language was not standardized among physicians, and the program even interpreted abbreviations unique to particular physicians.
The program correctly flagged 80% of patients who were later readmitted, a 5% improvement over a standard, non-LLM computer model that required medical data to be reformatted. NYUTron also identified 85% of those who died in the hospital, a 7% improvement over standard methods, and accurately estimated length of stay for 79% of patients, a 12% improvement. The tool also successfully assessed the likelihood of additional conditions accompanying a primary disease (the comorbidity index), as well as the chances of insurance denial.
The researchers cautioned that NYUTron is a support tool, not intended to replace provider judgment tailored to individual patients. They said future studies may explore the model’s ability to extract billing codes, predict risk of infection, and identify the right medication to order, potentially freeing up physicians to spend more time with patients.
“These results demonstrate that large language models make the development of ‘smart hospitals’ not only a possibility, but a reality,” senior author Dr. Eric Oermann, assistant professor of neurosurgery and radiology at NYU Langone Health, said in a statement.
“Our findings highlight the potential for using large language models to guide physicians about patient care,” lead author Lavender Jiang, doctoral student at NYU’s Center for Data Science, added. “Programs like NYUTron can alert healthcare providers in real time about factors that might lead to readmission and other concerns so they can be swiftly addressed or even averted.”