The ‘superhuman’ potential of AI in medicine

How UChicago experts are using machine learning to augment breakthroughs and improve patient care

Shortly after receiving her doctorate in medical physics, Maryellen Giger, PhD, watched the medical imaging field undergo a radical shift.

As part of a group in the Kurt Rossman Laboratories directed by Kunio Doi, PhD, now an emeritus professor in the Department of Radiology, Giger began to use digitized film — then a new phenomenon — to train computers to find the “signal” (a potential tumor or lesion) among the complexities of human tissue, aiding radiologists in their diagnoses.

The breakthrough concept was in its infancy. Datasets used for training were small, and the computers were slow with limited storage capacity.

Still, Giger and others were able to develop algorithms harnessing the power of early neural networks — artificial intelligence (AI) programs, inspired by the human brain, that can be trained to analyze data and find patterns — to perform that diagnostic task accurately.

This story appeared in the Fall 2024 issue of Medicine on the Midway magazine.

In 1994, she and her colleagues published the first papers on computer-aided mammography. Their work ultimately led to the first computer-aided system to detect breast cancer that received clearance from the Food and Drug Administration.

“The University of Chicago has been at the forefront of using AI in medical imaging ever since,” said Giger, the A.N. Pritzker Distinguished Service Professor of Radiology.

Flash-forward 30 years: Technological leaps and massive datasets continue to elevate AI and machine learning techniques, and generative AI models have pushed the field forward.

The University of Chicago Medicine recently piloted the use of a generative AI-powered clinical documentation tool among 200 providers. The tool, used only with patient consent, has already led to significant reductions in physician burnout, as well as higher patient satisfaction.

At UChicago, physicians and scientists are committed to ensuring the medical applications of AI — including disease detection, device development, workload management and designing new therapies — are safe, bias-free and readily available to the populations that need them.

“If we don’t inform ourselves about how our algorithms are working, about the potential risks, about demanding accountability, then we risk that algorithm implementation happens to us rather than by us,” said Alexander Pearson, MD, PhD, Associate Professor of Medicine.

“We need to judiciously select the cases where we think using a computational tool is additive to augment science and patient care.”

Complex analysis — with a human touch

It came as no surprise that Giger’s early diagnostic technology was embraced by radiologists and patients, eager for as much information as they could get.

But Giger soon found that the original intent — to provide a second opinion about a radiologist’s own work — was lost on some in the field. Some radiologists, she said, used these early AI systems as a primary diagnostic tool, or applied them to the wrong kinds of images as part of their medical decision making.

“In the beginning, some AI developers who were not in medicine started making comments like, ‘In five years, we won’t need radiologists,’” Giger said. “We know that is not true. We need the knowledge of radiologists, the domain experts, to develop AI.

“In addition, you need to know your intended population, and you need to know the clinical question you hope to answer.”

Giger has continued to develop computer-aided analytics and software for mammograms, ultrasounds and MRIs.

In 2010, she and her team developed QuantX, software that analyzes MRIs using AI and a large reference database collected over decades to help radiologists assess cancerous and noncancerous breast lesions. The software received FDA clearance in 2017 as the first computer-aided diagnosis system for breast cancer, and it was named a “Best Invention” by TIME magazine.

Pearson, a statistician and physician, arrived at UChicago the same year that computer-aided cancer detection efforts received a boost from the increasing use of convolutional neural networks (CNNs) — deep learning models built from stacked layers of artificial neurons that learn to identify features in images. Early layers detect simple features, like edges and colors, while deeper layers recognize more complex elements, such as faces.
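The layered feature detection that CNNs perform can be illustrated with the simplest building block: a single filter slid across an image. The sketch below — a hypothetical, minimal example in plain Python, not any system described in this story — shows a filter that responds wherever brightness jumps from dark to bright, the kind of simple feature an early CNN layer might learn; real networks learn thousands of such filters from data and stack them into layers.

```python
def convolve2d(image, kernel):
    """Slide a small filter over an image; record how strongly
    each patch matches the filter's pattern (as CNN layers do)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A tiny "image": dark left region (0), bright right region (1).
image = [[0, 0, 0, 1, 1]] * 4

# A vertical-edge filter: responds where brightness rises left-to-right.
edge_filter = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

response = convolve2d(image, edge_filter)
print(response)  # zero where the image is flat, positive at the boundary
```

A trained CNN differs only in scale: the filter values are learned rather than hand-picked, and the responses of many such filters feed into the next layer, which combines simple features into complex ones.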

“One of the first questions we asked was, ‘Could AI pick up on patterns accurately enough to accomplish superhuman tasks?’” Pearson said.

Armed with massive amounts of data from the Cancer Genome Atlas, Pearson and his collaborators trained hundreds of models to create an algorithm that can detect molecular alterations from routine pathology images of tumors — something the human eye is unable to do.

The result: Physicians could tailor treatment options for those specific genetic alterations without doing additional testing of the tumors.

“It was not a panacea; it did not replace next-generation sequencing [to diagnose disease],” Pearson said. “But it gave us the knowledge that AI could comprehensively provide more insights from existing data.”

AI provides diagnosis and treatment insights

Finding deep insights in existing images also drives the work of Madeleine Torcasso, PhD. A recipient of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, Torcasso is working to understand how immune cells — and where they exist in relation to a tumor — affect the outcome of cancer treatments.

Aided by advances in microscopy and chemistry, “You can get a much higher molecular content than ever before,” said Torcasso, who is based in Giger’s lab at UChicago. “As the technology gets better, there will be even more complexity in that image — and that’s where AI comes in, to describe what’s happening.”

Torcasso uses CNNs to analyze images and map out where immune cells are in relation to a tumor. She and other researchers hope that these details will help them better understand the pathology of the cancer and how it might respond to immunotherapy.

Consider the characteristics of triple-negative breast cancer, an aggressive disease that lacks the estrogen and progesterone receptors — and the HER2 protein — found in other breast cancers. Because those targets for hormonal and other targeted therapies are absent, immunotherapy is an attractive treatment option.

Torcasso is examining tissue images of these breast cancer patients to map the distributions of immune cells in the tumor area, comparing those who have received immunotherapy and those who haven’t.

“We want to find the commonalities that will predict the response to immunotherapy,” she said. “It will also help us understand the pathology as a whole.”

Isabelle Hu, PhD, is on a similar quest. At Tempus AI, a precision medicine company, Hu is applying what she learned at UChicago — medical image analysis using machine learning algorithms — to find molecular alterations within cancerous tumors.

For example, she has worked to analyze prostate cancer pathology images to predict microsatellite instability, a common biomarker that signals whether patients are more likely to respond to immunotherapy. Tempus AI has now launched an algorithm from this research.

“At the University of Chicago, I could see the tangible impact this kind of research can have on patients,” said Hu, who was advised by Giger during her graduate studies. “I was really motivated by that. And now these digital pathology algorithms are being deployed to make an impact on patients’ lives. It’s really exciting to see the advancements happening every day.”

Avoiding bias and misuse

Like any new technology, AI and machine learning face hard questions about potential biases and ethics violations. Heather Whitney, PhD, an Assistant Professor of Radiology, spent a year as a MacLean Center Clinical Ethics Fellow thinking about those questions.

Consider cancer diagnoses. An AI model won’t simply call a lesion cancerous or not. It offers a prediction: This lesion has a 90% likelihood of being cancer. That’s information a patient might want. But what if the number is 50%, or 30%? What if the cancer is rare?

“Studying how a physician interacts with this information is really important,” said Whitney, adding that medical frameworks for ethical decision-making need to be updated for working with AI. “How much weight do they put on those numbers? How does that affect how they move forward with treatment? We need to consider how information from AI is used.”

Whitney, along with Giger, has also studied the off-label use of AI in medicine. Just as physicians sometimes prescribe drugs for conditions other than their approved indications, they may feel compelled to use a medical imaging diagnostic tool outside its intended purpose.

“In my view, it’s really the same kind of principles that are used in medicine: beneficence, nonmaleficence, justice, fairness,” said Whitney, who sat in on UChicago Medicine’s ethics consultation service meetings as part of her fellowship. “It’s so easy to ‘plug and chug’ with AI, but now is the time to see how this maps to those standards we really know about.”

Part of that discussion involves understanding what biases are brought into algorithms, to better understand how the technology makes associations.

Pearson encountered that danger firsthand when he studied a dataset of tumors in an attempt to discern whether ancestry affected the way tumors grew.

The initial answer was yes. But Pearson and his team soon found that their AI model had instead identified which institution had submitted the images of the tumors via a technical watermark.

The algorithm discovered that some hospitals had a high number of patients from a single race. It then used that watermark as a stand-in for patient characteristics, including ancestry. Needless to say, the model got the ancestry — and the final predictions — wrong.
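The failure mode Pearson describes — a model keying on an artifact that merely correlates with the outcome — is often called shortcut learning, and a toy simulation makes the mechanism concrete. Everything here is hypothetical (the site names, the 90/10 ancestry split are invented for illustration): the point is only that a “model” reading nothing but a watermark pixel can look highly accurate without learning any biology.

```python
import random

random.seed(0)

def make_patient(site):
    # Watermark: a corner pixel value identifies the submitting hospital.
    watermark = 1.0 if site == "A" else 0.0
    # Hypothetical skew: hospital A's patients are 90% ancestry X,
    # hospital B's only 10%.
    ancestry = "X" if random.random() < (0.9 if site == "A" else 0.1) else "Y"
    return watermark, ancestry

patients = ([make_patient("A") for _ in range(500)] +
            [make_patient("B") for _ in range(500)])

def shortcut_model(watermark):
    # "Predicts" ancestry purely from the watermark pixel.
    return "X" if watermark == 1.0 else "Y"

acc = sum(shortcut_model(w) == a for w, a in patients) / len(patients)
print(f"shortcut accuracy: {acc:.0%}")  # high, despite using no biology at all
```

The vetting and validation Pearson calls for would expose this: tested on images from a new hospital, or with the watermark stripped, such a model's apparent accuracy collapses.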

“We showed all kinds of relationships that could be misconstrued from this same kind of digital pathology effect,” Pearson said. “It does not have to be that way. You can build a model that is less intrinsically biased — it needs to be vetted and validated to make sure it isn’t creating these relationships that we can’t anticipate.”

Medical AI and the greater good

AI and deep learning have the potential for wider use outside of institutions with huge databases and resources. Pearson and his team recently showed that a low-cost, 3D-printed microscope, paired with a cloud-based AI platform, could provide the same pathology accuracy as a much costlier microscope.

“You can imagine a scenario where this allowed a clinician in a low-income region to take a patient slide and get an instant diagnosis without any delays,” Pearson said. “That would allow this technology to benefit a global population — not just us in a first-world, high-income healthcare system.”

On the UChicago campus, Giger is leading an effort to level the playing field through the Medical Imaging and Data Resource Center (MIDRC), which curates a massive data commons of more than 500,000 medical imaging studies.

Originally funded by the National Institutes of Health to better understand COVID-19, the Center has expanded its image collection to include cancer, and it has developed AI tools and algorithms to help researchers develop trustworthy AI.

To date, the MIDRC has released images from more than 170,000 studies that can be used by physicians and researchers worldwide to develop AI. The work has the potential to improve disease detection and reduce the number of unneeded biopsies.

“This has been a dream for decades — to give people the tools to help them with their training and test sets, and to teach them about bias awareness,” Giger said. “It will give all investigators data so everyone can develop AI models.”

Decades after Giger’s early discoveries, the potential of AI remains vast.

Deep learning algorithms, already scanning radiology and pathology images, are also pairing that information with patient demographics and disease characteristics. Soon, AI could help physicians tailor detailed treatment plans.

Pearson, who is working to expand the data science and machine learning curriculum for students at the Pritzker School of Medicine, views the future of healthcare AI with optimism — and caution.

“As algorithms become part of standard practice, it is imperative that physicians understand when algorithms are broken or when they are unreliable,” Pearson said. “We want to make sure Pritzker trainees have the full portfolio of skills to interact with them to best serve their patients.”
