How AI for cervical cancer can help achieve healthcare equity


Despite cervical cancer being among the most treatable forms of cancer, it continues to kill hundreds of thousands of women globally each year.1 Nearly 90% of those deaths occur in low- to middle-income countries, including India and China, which account for about 50% of the total. Mortality rates are on the rise in Eastern Europe, too.2

Michael Quick, vice president for R&D/Innovation for the Diagnostic Solutions Division.

Thanks to screening programs and improvements in technology, mortality rates have been falling in high-income countries, yet even within these regions outcomes still vary drastically across populations. For example, Black women in the U.S. are two times more likely to die from cervical cancer than white women.3

Reliable screening and prevention methods for cervical cancer have existed, and kept advancing, for decades; where they are implemented, preventive screening increases and death rates decline. The primary issue is inequitable access to technology and care.

Recent advances in artificial intelligence (AI) medical products could help close the gaps in women's healthcare equity. But to do so, several significant barriers must be toppled. Using cervical cancer as a case in point, this article explains some of the key obstacles standing in the way and offers perspective on how they could be overcome.

Obstacle: Remote and low-resource settings

Solution: Trust in AI for medical products

When the earliest AI medical products went into development and launched around the turn of the 21st century, many healthcare professionals envisioned a not-so-distant future in which they would be unemployed, having been replaced by technology. Some still do.

Likewise, many people feared a healthcare world in which a dispassionate "robot" would evaluate, diagnose, and treat them, overlooking their individually unique characteristics and needs. These fears remain, even though AI products built with good data representative of patient populations usually perform at least as well as humans. Studies show many people continue to mistrust AI technology and prefer to have a human professional involved in, and ideally leading, their diagnosis.4,5,6

However, some scientists saw the opportunity to leverage AI technology to evolve the role of healthcare professionals and improve the health of populations -- and that's what happened. In practical terms, the availability of AI to help screen for cervical cancer has elevated the role of practitioners, while improving result accuracy and ultimately saving lives. It also saves time, since the healthcare provider can focus on pinpointed areas of potential abnormality rather than trying to examine hundreds of thousands of individual cells. As a result, review of each case is more efficient, which means more women can be screened.

In fact, medical technology companies developing AI products generally are focused on providing tools to help clinicians do their jobs, and on providing access to such expertise where it does not currently exist. In regions with too few or too far-removed pathology experts, AI can help expand access and save people's lives. As long as qualified technicians, quality diagnostic imaging equipment, and sound processes are in place, pathologists can review results and collaborate remotely with in-the-field medical counterparts.

In general, higher incidence rates of cervical cancer are linked to regions lacking established screening programs, whether because of cost (the price of the medical technology or an individual's inability to pay for care) or because of infrastructure, meaning the region does not have the equipment or trained experts available to make a diagnosis. While this is often the case in developing nations, it also is true for high-income countries, like the U.S., where rural and low-income urban communities lack access to quality healthcare.

AI medical technology can help bridge these gaps by enabling sample collection and imaging to take place locally, with results then sent digitally to another region for expert review or confirmation. In areas with especially limited local and remote pathology resources, AI could be leveraged as the first line of preventive defense to sort results, so pathologists review only those flagged as potentially cancerous.

For such programs to be effective, however, practitioners and patients need to accept AI as a solution. To this end, educational programs need to be implemented to help people understand the very real benefits of being screened by a highly accurate and proven AI technology versus the potentially deadly consequences of not being screened at all.

Expanding screening and prevention

The introduction of the Pap test has contributed to a decline in cervical cancer rates of more than 60% since the 1950s.7 Since then, HPV has been identified as a cause of cervical cancer, and HPV testing and vaccinations have been developed.

For reference, all women are at risk for cervical cancer, which occurs most often in women over age 30.8 At least half of sexually active people will have HPV at some point in their lives, but few women will get cervical cancer. The huge disparity among populations in morbidity and mortality is primarily due to poor access to and poor quality of services for prevention and control.

To screen for HPV and cervical cancer, clinicians collect cells from a woman's cervix and examine the cells for abnormalities. If abnormalities are identified, the woman will undergo further testing, such as a colposcopy or biopsy, to determine if precancerous or cancerous cells are present. If they are, they usually can be removed through a very minor in-office procedure.

For reference, in the past, a cytotechnologist looked through a microscope at stained cells to identify potential abnormalities for further assessment. Today, for diagnostic and screening medical products, AI algorithms are programmed to filter through the collected cell sample and flag areas of potential abnormality for review. Clinicians then evaluate the screening results together with HPV test results and an individual's clinical history to gauge the likelihood of disease. Further advances in diagnostic systems enable test samples to be digitized with 3D scanning and analyzed using an AI deep-learning system.
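To make that flag-and-review workflow concrete, the short Python sketch below shows how a triage step might rank imaged cell regions by an abnormality score and combine them with an HPV result. The function names, data structures, scores, and thresholds are hypothetical illustrations, not any vendor's actual product logic.

```python
# A minimal sketch of an AI-assisted triage step (illustrative only; names,
# scores, and thresholds are hypothetical, not a real product's logic).

from dataclasses import dataclass
from typing import List

@dataclass
class CellRegion:
    region_id: str
    abnormality_score: float  # 0.0 (looks normal) to 1.0 (highly abnormal), from an imaging model

def triage_case(regions: List[CellRegion], hpv_positive: bool,
                review_count: int = 20, score_threshold: float = 0.5) -> dict:
    """Rank imaged cell regions and decide whether the case needs expert review."""
    # Sort so a reviewer sees the most suspicious regions first, instead of
    # scanning hundreds of thousands of individual cells.
    ranked = sorted(regions, key=lambda r: r.abnormality_score, reverse=True)
    flagged = [r for r in ranked[:review_count] if r.abnormality_score >= score_threshold]

    # Combine the imaging findings with the HPV test result to gauge overall risk.
    return {
        "flagged_regions": [r.region_id for r in flagged],
        "hpv_positive": hpv_positive,
        "route_to_pathologist": bool(flagged) or hpv_positive,
    }

# Example: two suspicious regions plus a positive HPV test route the case for expert review.
case = [CellRegion("r1", 0.91), CellRegion("r2", 0.62), CellRegion("r3", 0.08)]
print(triage_case(case, hpv_positive=True))
```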

Obstacle: How AI/DL works

Solution: Testing and setting precedent

As is commonly known, regulators review medical products for safety and efficacy. As part of the process, they need evidence to show the product works as intended, along with an explanation of how and why it works, what might cause it to fail, and what happens if it fails. In the case of AI medical products using machine learning (ML), this can be done relatively straightforwardly. For ML, scientists build predictive models for diagnostics by teaching a computer to identify specific features of a condition, e.g., size, shape, color, or object. For the machine's purposes, those features are tangible and can be defined numerically. So, for example, the machine can be taught to analyze an image and look for, say, a specific scale of darkness.
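To illustrate that feature-based approach, the sketch below (plain Python with NumPy and scikit-learn) hand-defines two numeric measurements, an apparent nucleus size and an overall stain darkness, and feeds them to a simple classifier. The features, pixel thresholds, and randomly generated training data are toy assumptions for illustration, not a validated diagnostic method.

```python
# Illustrative feature-based ML: hand-defined, numerically measurable features
# feed a simple classifier. Data and thresholds are toy examples only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(cell_image: np.ndarray) -> np.ndarray:
    """Reduce a grayscale cell image to tangible, human-defined measurements."""
    nucleus_mask = cell_image < 80             # darker pixels approximate the nucleus
    nucleus_size = nucleus_mask.sum()          # feature 1: apparent nucleus area (pixel count)
    mean_darkness = 255 - cell_image.mean()    # feature 2: overall stain darkness
    return np.array([nucleus_size, mean_darkness])

# Toy training set: 32x32 grayscale patches labeled 0 (normal) or 1 (abnormal).
rng = np.random.default_rng(0)
normal_patches = [rng.integers(120, 256, size=(32, 32)) for _ in range(20)]
abnormal_patches = [rng.integers(0, 150, size=(32, 32)) for _ in range(20)]
X = np.array([extract_features(p) for p in normal_patches + abnormal_patches])
y = np.array([0] * 20 + [1] * 20)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([extract_features(abnormal_patches[0])]))  # expect [1]
```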

Now, with the increasing availability of computational power and comprehensive data, scientists also are using deep learning (DL). DL learns through a different methodology: it automatically processes and sorts data to build its own internal features and create an algorithm. Explaining exactly how the machine does this can be more difficult; DL is often thought of as a "black box." This lack of insight into the inner workings of AI/DL concerns regulators. As a result, regulators are grappling with how to confidently assess the safety and efficacy of AI medical products.
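For contrast with the feature-based sketch above, the following example uses PyTorch to define a tiny convolutional network whose internal features are learned from raw pixels rather than specified by hand, which is part of why its reasoning is harder to explain. The architecture, layer sizes, and input shape are illustrative assumptions, not a real diagnostic model.

```python
# A tiny convolutional network: its filters are learned from raw pixels during
# training rather than hand-defined. Purely illustrative, not a diagnostic model.

import torch
import torch.nn as nn

class TinyCellNet(nn.Module):
    def __init__(self):
        super().__init__()
        # No one tells the model to "look for a specific scale of darkness";
        # the convolutional filters below are adjusted automatically during training.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # two outputs: normal vs. abnormal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = TinyCellNet()
dummy_patch = torch.rand(1, 1, 64, 64)    # one grayscale image patch
print(model(dummy_patch).softmax(dim=1))  # class probabilities from an untrained network
```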

So far, however, regulators have helped medical technology companies step over this obstacle by relying on demonstrative clinical studies and research to evaluate design integrity and clinical performance. Along with preliminary approval, regulators increasingly are emphasizing the need for medical technology companies to collect and monitor real-world data as further evidence, with recent guidances published in the U.S., European Union, and Japan, among others.

Obstacle: Potential bias of data inputs

Solution: Multiple data sources, real-world evidence and data

Collecting and monitoring real-world evidence should also help quell concerns about whether AI products might perform with inherent gender, race, or ethnic bias. To comply with the various data and patient privacy laws, the data generally used in AI does not contain any information that could potentially be used to identify a patient.

As a result, manufacturers can take measures, such as collecting data from multiple reliable sources, but cannot fully verify whether the data itself is representative of diverse demographics or whether some populations are underrepresented. (This is also why some regulatory bodies require medical technology companies to conduct clinical studies with local populations before granting approval.)

So, while validation can be repeated in different ways and with different populations, the training of the AI/DL algorithm cannot easily be redone. This fact is leading regulators to question whether an AI medical product will provide the same results for people of different backgrounds.

It is a good question -- and one that recently and infamously came to bear with the facial recognition technology used by law enforcement and security professionals, which worked well enough for white males but failed too often to accurately identify women and people of color.
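As a rough illustration of how such a question might be probed with real-world data, the sketch below compares detection rates across demographic subgroups. The group labels, records, and figures are hypothetical, and a real analysis would require far larger, properly governed datasets.

```python
# A minimal sketch of a per-subgroup performance check a real-world-evidence
# program might run; the group labels, records, and numbers are hypothetical.

from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, true_label, predicted_label), where 1 = disease present."""
    positives = defaultdict(int)   # confirmed-positive cases per group
    detected = defaultdict(int)    # of those, how many the model flagged
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            detected[group] += prediction
    return {group: detected[group] / positives[group] for group in positives}

validation_records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0),
]
print(sensitivity_by_group(validation_records))
# A subgroup whose detection rate falls well below the others would warrant investigation.
```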

As more AI medical device technologies launch and accumulate real-world evidence of safety and efficacy, it stands to reason that regulatory bodies will become more comfortable with the complex technology. This in turn will pave the way for manufacturers to incorporate AI more readily into innovative medical products.

Obstacle: Inequality of quality products

Solution: Proactive business development strategies

So far, science cannot invent empathy or engineer advocacy, but it has invented and engineered AI and the enormous computational power required to fuel it for diagnostic and therapeutic medical products. As medical technology companies develop innovative medical products, they can, and ethically should, take a leading role in democratizing access to quality medical products for people around the world.

The medical technology companies already doing this use creative and prudent business strategies that enable them to produce high-quality, innovative technologies and scale them to meet a spectrum of healthcare needs. In the case of women's health solutions, for example, the technologies developed to increase the efficiency and accuracy of cervical cancer screening in the U.S. and Europe can be leveraged to increase basic access in low-resource settings around the world, including countries like China and India. The digitization and utilization of AI can bring the highest quality of care to women around the world.

Beyond "doing the right thing," this approach offers many practical business advantages. For a company that may usually avoid emerging markets, this approach gives it a new potential revenue stream. For a company already launching products in emerging markets, it enables it to streamline its product development resources, while continuing to expand its market reach. For both, it provides more real-world data and evidence, including from diverse populations, with regard to safety and efficacy of the AI medical products.

Overall, medical products should ideally be held to globally accepted standards. From a corporate responsibility and mission perspective, this means committing to providing people with access to the best possible standard-of-care solution wherever they live. To do this, medical technology companies need to adopt forward-thinking development strategies, and healthcare professionals and regulators will need to get comfortable with what currently makes them uncomfortable, so medical technology can continue advancing and reaching the people who need it most.

References

  1. Cervical cancer. The World Health Organization. https://www.who.int/health-topics/cervical-cancer. Accessed October 7, 2021.
  2. Cervical cancer. The World Health Organization Regional Office for Europe. https://www.who.int/europe/news-room. Accessed October 7, 2021.
  3. Addressing the cervical cancer screening disparities gap. Contemporary OB/GYN. https://www.contemporaryobgyn.net/view/addressing-the-cervical-cancer-screening-disparities-gap. Accessed October 7, 2021.
  4. AI can outperform doctors. So why don't patients trust it? Harvard Business Review. https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-dont-patients-trust-it. Accessed October 7, 2021.
  5. Cadario, R, Longoni, C, Morewedge, CK. Understanding, explaining, and utilizing medical artificial intelligence. Nat Hum Behav (2021). https://doi.org/10.1038/s41562-021-01146-0.
  6. Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res. 2020;22(6):e15154 (2020). https://doi.org/10.2196/15154.
  7. SEER Stat Fact Sheets: Cervix Uteri Cancer. National Cancer Institute. https://seer.cancer.gov/statfacts/html/cervix.html. Accessed October 8, 2021.
  8. Basic Information about HPV and Cancer. Centers for Disease Control and Prevention. https://www.cdc.gov/cancer/hpv/basic_info/. Accessed October 7, 2021.

Since joining Hologic in 1996, Michael Quick has held diverse leadership roles in clinical applications, commercial sales and marketing, and international business development including emerging markets. Quick now serves as the vice president for R&D/Innovation for the Diagnostic Solutions Division. In this role, he is responsible for leading Hologic's new product development which leverages the success of the ThinPrep cytology portfolio to drive growth globally. Quick is a board-certified cytotechnologist.

The comments and observations expressed are those of the author and do not necessarily reflect the opinions of LabPulse.com.
