AI concurs with humans in heart failure test, suggesting scalable approach to clinical trials


Artificial intelligence (AI) may make clinical trials more efficient by accurately identifying clinical events, such as heart failure hospitalizations, at scale, a retrospective analysis has found.

The analysis, published in JAMA Cardiology, applied a natural language processing (NLP) model for adjudicating heart failure hospitalizations to records from a clinical trial that ran from 2016 to 2019. The model was originally developed and validated in one healthcare system and was then applied retrospectively to a study of the effects of a flu vaccine on heart failure patients.

Randomized clinical trials in heart failure typically task a committee of physicians with manually reviewing records to determine whether each hospitalization meets the trial's criteria for a heart failure event. That adjudication process is time-consuming, creating a need for an alternative that works efficiently at scale.

The retrospective analysis suggests that NLP models may meet that need. Initially, the researchers saw an 87% rate of agreement between the NLP model and the group of physicians. Fine-tuning raised the agreement rate to 93%. The researchers discussed how the model could be applied prospectively to clinical trials.

“Combining NLP and human adjudication may be a practical approach. One easily implemented strategy is to manually adjudicate a subset of cases (for example, 20%) with equivocal NLP scores and to trust the NLP for hospitalizations with very high or low scores. This approach yielded 94% accuracy in our study while reducing manual adjudications by 80%,” the researchers wrote.
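To make that triage strategy concrete, the sketch below shows one way the hybrid approach could work in code: hospitalizations with very high or very low NLP scores are accepted automatically, while the equivocal middle band is routed to human adjudicators. The function name, score cutoffs (0.2 and 0.8), and sample data are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a hybrid NLP-plus-human adjudication triage.
# Thresholds and names are hypothetical, for illustration only.

def triage_hospitalizations(scored_cases, low=0.2, high=0.8):
    """Split cases into auto-adjudicated and manual-review buckets.

    scored_cases: list of (case_id, nlp_score) tuples, where nlp_score
    is the model's probability that the hospitalization is a heart
    failure event.
    """
    auto_events, auto_non_events, needs_review = [], [], []
    for case_id, score in scored_cases:
        if score >= high:        # confidently a heart failure event
            auto_events.append(case_id)
        elif score <= low:       # confidently not an event
            auto_non_events.append(case_id)
        else:                    # equivocal band -> human adjudication
            needs_review.append(case_id)
    return auto_events, auto_non_events, needs_review


if __name__ == "__main__":
    cases = [("A1", 0.95), ("B2", 0.05), ("C3", 0.55), ("D4", 0.30)]
    events, non_events, review = triage_hospitalizations(cases)
    print(f"auto events: {events}")
    print(f"auto non-events: {non_events}")
    print(f"manual review ({len(review) / len(cases):.0%} of cases): {review}")
```

Under this scheme, widening or narrowing the equivocal band trades manual workload against accuracy; the researchers report that sending roughly 20% of cases to manual review yielded 94% accuracy in their study.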

An editorial published alongside the NLP study provides an overview of its findings and puts the results in the context of a broader push to apply AI to clinical trials. While the editorial's authors see opportunities for AI to improve study processes such as recruitment, consent, and data analysis, they also see risks in greater reliance on the technology.

“Because these new technologies also carry significant risks, including the risk of exacerbating inequities, promoting open science and improving understanding of the ‘black box’ of AI is a must. As we learn, we must find ways to share weaknesses, flaws, or failures without undermining the integrity of the process,” the editorial states.
