Like it or not, artificial intelligence (AI) is here to stay, and its impact on both the science and practice of endocrinology will soon be quite evident. The ENDO 2024 Plenary session, “Artificial Intelligence in Health and Biomedical Research: The Future Is Now,” will no doubt answer many questions on the minds of endocrinologists in the audience. However, it will likely raise even more questions about AI’s implementation, its influence, and, most importantly, its outcomes.
In May 2023, a column appeared in The New York Times titled, “We Are Opening the Lids on Two Giant Pandora’s Boxes.” The columnist, Thomas L. Friedman, argued that artificial intelligence and climate change are the two Pandora’s boxes, each giving humanity “godlike” powers: AI to exceed our own brainpower, and climate change to drive ourselves from “one climate epoch to another.”
Pandora’s box, of course, is the container in Greek mythology that, when opened, released all manner of strife and blight upon humanity. Climate change has been a concern among experts for years, but AI is now making headlines daily with the rise of easily accessible products like ChatGPT and Bard. Those headlines range from the questionable and unethical (a BestColleges survey of 1,000 college students found that 56% used AI on assignments or exams) to the terrifying (the U.S. Air Force denied a report that an AI drone “killed” its operator because it didn’t agree with the orders it was given).
But Pandora’s box also contained hope. Artificial intelligence has been shown to predict gestational diabetes, extend time in range and reduce hypoglycemia events in patients with type 1 diabetes, improve detection of fractures in patients with osteoporosis, reduce unnecessary thyroid surgeries by better detecting benign nodules, and predict how patients with acromegaly respond to first-generation somatostatin receptor ligands.
A plenary session at ENDO 2024 titled “Artificial Intelligence in Health and Biomedical Research: The Future Is Now” will present attendees with the benefits and risks of AI and its uses in clinical care, education, and research.
“I think we’re in an interesting time,” says Casey S. Greene, PhD, a professor of biomedical informatics at the University of Colorado School of Medicine and one of the presenters of this ENDO plenary. “Somehow, over the last 14 months, people have gotten incredibly engaged in discussions about AI. People were enthusiastic about AI before, but with the release of ChatGPT, the level of enthusiasm went from very enthusiastic to beyond what I could have possibly imagined. These technologies, artificial intelligence and machine learning, have enormous potential to improve care if we use them thoughtfully.”
Enthusiasm, Pessimism, and Realism
Artificial intelligence is here to stay – pros, cons, and everything in between. And it will only become more pervasive as the technology improves, meaning clinicians, researchers, and educators will eventually need to become at least somewhat comfortable using AI.
Su-In Lee, PhD, the Paul G. Allen Professor of Computer Science at the University of Washington in Seattle, also presenting at the ENDO session, tells Endocrine News that AI is revolutionizing numerous aspects of our lives, science, and society. “We’ve witnessed numerous journal publications showcasing high predictive accuracies, often rivaling or surpassing human experts, particularly in fields like medicine,” she says. “Generative AI, capable of producing realistic data based on learned patterns from training datasets, holds promise for transforming scientific discovery and clinical decision-making.”
Lee is presenting research from her lab on explainable AI, a subfield of AI focused on enhancing the interpretability of complex machine learning models in the biomedical sciences. These techniques serve as general frameworks, she says, applicable to a diverse range of problems in biomedicine, including endocrinology. “This discussion aims to shed light on the necessary enhancements for explainable AI to effectively tackle a wide array of real-world challenges in biomedicine,” she says.
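For readers who want a concrete picture of what explainable AI looks like in practice, one common approach is feature attribution: asking how much each input feature contributed to a given prediction. The sketch below uses the open-source shap package, which grew out of work in Lee’s group, on a fabricated toy dataset; the model, features, and data are hypothetical stand-ins for real clinical inputs, not anything from her studies.

```python
# A minimal sketch of feature attribution with the open-source "shap" package.
# The dataset and model here are fabricated illustrations, not clinical data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular data: rows are patients, columns are four lab measures.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by two features

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, so a user can see
# *why* the model flagged a given patient, not just the score it produced.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:5])
print(shap_values[0])  # per-feature contributions for the first patient
```

In this synthetic setup, the two features that actually drive the outcome dominate the attributions, which is the kind of sanity check interpretability methods are meant to enable.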
Greene says that he usually sees people put themselves into one of two camps when it comes to AI: one camp is enthusiastic and wants to deploy AI in the clinic or lab as soon as possible, and the other camp is pessimistic, claiming AI is harmful and shouldn’t be allowed anywhere near patients.
“What I hope we do in this session is end up with, ‘Okay. What do I need to know if I want to think about this technology? How worried should I be? What are the things that keep Casey up at night?’” Greene says. “Then, I hope we can use those as a launching point to discuss current issues. ‘Okay. Here’s what’s possible now. Here are the risks of doing that. Here’s how to do that thoughtfully.’ In short, how do we take this incredible level of enthusiasm or pessimism and end up in a place of realism?”
Representation in Technology
Reaching that place of realism requires careful thought when implementing artificial intelligence. Greene says these systems are extremely good at extracting patterns, even those too subtle for humans to detect. That power to go beyond human limitations, however, is also where the risks come into play. For instance, when building an AI system that works from images, “It becomes more important that the training data that go into building these systems are representative so that the benefit can accrue equitably,” he says.
“These technologies, artificial intelligence and machine learning, have enormous potential to improve care if we use them thoughtfully.” — Casey S. Greene, PhD, professor of biomedical informatics, University of Colorado School of Medicine, Aurora
Greene points to the book Invisible Women and its examples from urban planning to show how inequitable representation in data can drive inequitable outcomes. A commuter trip to the office was deemed essential; a trip to the grocery store was not. That essential/non-essential split maps cleanly onto old stereotypes of male-dominated versus female-dominated activities, so transportation systems designed around it were, despite being gender-neutral on their face, heavily biased.
The same problem has plagued biomedical research for over a century – the “average human” in research is a male. “When this underlies the data that we use to develop and test interventions, if they work for a man and a woman, great,” Greene says. “If they only work for a man, that still may get deployed; if they only work for a woman, we’re probably never going to learn about it. This isn’t specific to AI – this has occurred with human intelligence. But bringing AI in creates the possibility of bias laundering – we can end up with systems that we say are unbiased but where the bias is baked in. We must be much more thoughtful and careful about representation than we have been in the past to build technologies that provide equitable benefits.”
The Serendipity Business
Earlier AI image systems had difficulty distinguishing a picture of a blueberry muffin from a picture of a chihuahua’s face – so much so that it became a running joke in artificial intelligence and eventually a meme. However excited people were about the transformative potential of the technology, that excitement was tempered by a humorous but potentially dangerous kind of AI mistake. “You have to be extremely thoughtful about how you use them because you don’t want to end up in a blueberry muffin/chihuahua situation in medicine,” Greene says.
Lee notes that she and her colleagues have encountered a concerning phenomenon with AI called “shortcut” learning. “For instance, in our ‘AI auditing’ efforts within radiology and dermatology, some of which I’ll discuss in my presentation, we’ve uncovered instances where AI models rely on shortcuts rather than genuine pathologies. This highlights the critical need to understand the reasoning process of AI models.”
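To make shortcut learning concrete, here is a deliberately contrived sketch on synthetic data: a spurious “scanner tag” feature tracks the label perfectly during training, the model leans on it, and accuracy collapses once that correlation disappears at deployment. The data, feature names, and setup are fabricated for illustration and are not drawn from Lee’s auditing studies.

```python
# A toy illustration of "shortcut" learning on fabricated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Weak genuine signal ("pathology") plus a spurious tag that, in the
# training data only, perfectly mirrors the label.
pathology = rng.normal(size=n)
y_train = (pathology + rng.normal(scale=2.0, size=n) > 0).astype(int)
tag_train = y_train.astype(float)  # the shortcut
X_train = np.column_stack([pathology, tag_train])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At "deployment" the tag is uninformative: the shortcut is gone.
pathology_test = rng.normal(size=n)
y_test = (pathology_test + rng.normal(scale=2.0, size=n) > 0).astype(int)
tag_test = rng.integers(0, 2, size=n).astype(float)
X_test = np.column_stack([pathology_test, tag_test])

print("train accuracy:", model.score(X_train, y_train))  # near 1.0
print("test accuracy:", model.score(X_test, y_test))     # near chance
```

Explainable AI methods of the kind Lee describes would flag that the model’s predictions hinge on the tag rather than the pathology feature – exactly the audit that catches a shortcut before deployment.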
Greene says people are shifting their excitement from images to language, so his talk will cover large language models. But his central point is that no matter your opinion of artificial intelligence, you will encounter it not just in the clinic, classroom, or lab but in your everyday life. “I might not be popular for saying this, but I think the ship has sailed,” he says. “I’m not going to say for the better. I’m not going to say it’s for the worse. It just is.”
Greene likes to say he and his colleagues and peers are in the serendipity business, but serendipity in its original meaning, beyond just luck: being prepared, being thoughtful, being observant, and then having a moment of insight. “We want to build systems that produce serendipitous moments where exactly the right information surfaces at the right time to make the right decision,” he says.
Lee says her talk will highlight how explainable AI can revolutionize biomedical fields. “Rather than delving deeply into a few projects, I’ve aimed to make the presentation as approachable as possible for researchers across diverse fields—not just physicians or clinical scientists, but also biologists and computational scientists. As mentioned earlier, explainable AI techniques are versatile frameworks applicable to a wide range of problems in medicine and beyond,” she says.
Greene hopes those who attend his talk leave with different expectations than they arrived with. He wants those who are enthusiastic about artificial intelligence – slamming the gas pedal to move it forward as quickly as possible – to leave with questions. Likewise, he wants those slamming the brakes to leave with questions and an open mind. “Hopefully,” he says, “we move towards the more nuanced middle ground that we will need to develop and deploy systems that advance health equitably.”
Bagley is the senior editor of Endocrine News. In the March issue, he wrote about unlocking one of the confounding problems plaguing monogenic obesity in pediatric patients.