AI foundation models set new benchmarks in computational pathology


Mass General Brigham researchers have published a pair of papers on artificial intelligence (AI) systems that could improve tasks such as analyzing pathology images and identifying rare diseases.

The papers, both of which were published in Nature Medicine, describe two “foundation models.” The term refers to AI models that are trained on a large amount of data and then adapted to carry out a wide range of activities. For example, GPT-3.5 is the foundation model that underpins the conversational chat agent ChatGPT.

Faisal Mahmood, PhD, an author of both papers and a member of the Division of Computational Pathology at Mass General Brigham, outlined the medical significance of foundation models and the implications of his research in a statement.

“Foundation models represent a new paradigm in medical artificial intelligence,” Mahmood said. “These models are AI systems that can be adapted to many downstream, clinically relevant tasks. We hope that the proof-of-concept presented in these studies will set the stage for such self-supervised models to be trained on larger and more diverse datasets.”

One of the Nature Medicine papers describes a foundation model for understanding pathology images. Mahmood and his collaborators trained the model, which they call UNI, on more than 100 million images drawn from over 100,000 stained diagnostic whole-slide images. The 77-terabyte training dataset covered 20 major tissue types.
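Pretraining sets at this scale are typically built by tiling gigapixel whole-slide images into small patches. The sketch below illustrates that general idea only, not the authors' pipeline: the slide filename, patch size, and the crude brightness-based tissue filter are all illustrative assumptions.

```python
# A minimal sketch of tiling a whole-slide image (WSI) into fixed-size
# tissue patches, the usual first step in building a pathology
# pretraining set. NOT the authors' pipeline; the file, patch size,
# and background filter are assumptions for illustration.
import numpy as np
import openslide  # pip install openslide-python

PATCH = 256  # assumed patch edge length in pixels
slide = openslide.OpenSlide("example_slide.svs")  # hypothetical file
width, height = slide.dimensions

patches = []
for y in range(0, height - PATCH + 1, PATCH):
    for x in range(0, width - PATCH + 1, PATCH):
        # read_region returns an RGBA PIL image at the given pyramid level
        region = slide.read_region((x, y), 0, (PATCH, PATCH)).convert("RGB")
        arr = np.asarray(region)
        # keep patches that contain tissue (i.e., not mostly white background)
        if arr.mean() < 220:
            patches.append(arr)

print(f"kept {len(patches)} tissue patches")
```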

The researchers tested the performance of UNI on 34 computational pathology tasks of varying levels of diagnostic difficulty, showing that it outperformed “previous state-of-the-art models” and demonstrated new modeling capabilities such as resolution-agnostic tissue classification. Buoyed by the findings, the authors said the model can “generalize and transfer to a wide range of diagnostically challenging tasks.”
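Adapting a frozen foundation model to downstream tasks like these is often done with a linear probe: features are extracted once from the frozen encoder, and a simple classifier is fitted on top. A minimal sketch follows; the encoder, input size, and data here are stand-in assumptions, not the paper's evaluation code.

```python
# A minimal sketch of linear probing: a frozen pretrained encoder
# produces features, and a logistic-regression classifier is trained
# on them. Encoder and data are illustrative stand-ins.
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def embed(encoder: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Run the frozen encoder to get one feature vector per image."""
    encoder.eval()
    return encoder(images)

# hypothetical tensors standing in for preprocessed patches and labels
train_x = torch.randn(128, 3, 224, 224)   # stand-in for real patches
train_y = torch.randint(0, 2, (128,))     # stand-in for tumor/normal labels

encoder = torch.nn.Sequential(            # stand-in for a pretrained ViT
    torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 512)
)

features = embed(encoder, train_x).numpy()
probe = LogisticRegression(max_iter=1000).fit(features, train_y.numpy())
print("train accuracy:", probe.score(features, train_y.numpy()))
```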

The other paper outlines work on a visual-language foundation model for computational pathology. For that project, the Mass General Brigham researchers trained a foundation model on histopathology images, biomedical text, and more than 1.17 million image–caption pairs.

Tested against 14 benchmarks, the model, dubbed CONtrastive learning from Captions for Histopathology (CONCH), achieved “state-of-the-art performance on histology image classification, segmentation, captioning, and text-to-image and image-to-text retrieval,” the authors wrote. After seeing a “substantial leap” over other systems, the researchers predicted CONCH could “directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.”
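The “contrastive learning from captions” in CONCH's name refers to a CLIP-style objective: matched image–caption pairs are pulled together in a shared embedding space while mismatched pairs are pushed apart. Below is a minimal sketch of that general objective under assumed embedding sizes; it is not the CONCH training code.

```python
# A minimal sketch of a CLIP-style contrastive objective over paired
# image and caption embeddings. Batch and embedding sizes are assumed.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor,
                     txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # normalize so the dot product is cosine similarity
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    # (batch, batch) similarity matrix between every image and caption
    logits = img_emb @ txt_emb.t() / temperature
    # the i-th image matches the i-th caption, so targets are the diagonal
    targets = torch.arange(len(logits), device=logits.device)
    # symmetric cross-entropy: image-to-text and text-to-image directions
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# toy batch of 8 paired embeddings
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```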

The researchers are making the code publicly available for other academic groups to use.
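At the time of writing, the released UNI weights are hosted on the Hugging Face Hub, so loading them via the timm library might look like the sketch below. Treat the repository id and keyword arguments as assumptions to verify against the official README; access to the weights is gated and requires accepting the authors' license.

```python
# A hedged sketch of loading a released vision transformer through
# timm's Hugging Face Hub integration. The hub id and keyword arguments
# are assumptions; the download is gated, so consult the authors'
# README and log in with huggingface-cli first.
import timm
import torch

model = timm.create_model(
    "hf-hub:MahmoodLab/uni",   # assumed hub id; verify against the README
    pretrained=True,
    init_values=1e-5,          # assumed ViT layer-scale setting
    dynamic_img_size=True,
)
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))  # one dummy patch
print(features.shape)
```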
