Most people captivated by artificial intelligence have had something of an “aha” moment that opened their minds to a world of opportunities. During the inaugural RAISE Health symposium on May 14, Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University, shared his.
Asked to summarize a discovery he’d made related to the inner ear, a curious Minor turned to generative AI. “I asked, ‘What is superior canal dehiscence syndrome?’” Minor told a group of nearly 4,000 symposium attendees. In seconds, a few paragraphs appeared.
“They were good — really good,” he said. “The information was brought together into a concise and, by and large, accurate and well-prioritized description of the disorder. It was quite remarkable.”
Minor’s excitement was shared by many at the half-day event, which was born of the RAISE Health initiative, a project launched by Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI in biomedical research, education and patient care. Speakers explored what it means to bring AI into the folds of medicine in a way that’s not just helpful for physicians and scientists, but transparent, fair and equitable for patients.
“We believe this is a technology to augment and enhance humanity,” said Fei-Fei Li, a professor of computer science at the Stanford School of Engineering who leads RAISE Health with Minor and is the co-director of HAI. From generating new molecular sequences that could give rise to new antibiotics, to mapping biodiversity, to uncovering hidden bits of basic biology, AI is accelerating scientific discovery, she said. But it’s not all beneficial. “All of these applications can have unintended consequences, and we need computer scientists to work with multiple stakeholders — from doctors and ethicists…to security experts and more — to develop and deploy [AI] responsibly,” she said. “Initiatives like RAISE Health show that we’re committed to this.”
The alignment of Stanford Medicine’s three entities — the School of Medicine, Stanford Health Care and Stanford Medicine Children’s Health — and its connection to the rest of Stanford University put it in a unique position as experts grapple with AI development, governance and integration in health and medicine, Minor said.
“We’re ideally suited to be a pioneer in advancing and deploying AI in responsible ways, covering the gamut from fundamental biological discovery, enhancing drug development, making clinical trial processes more efficient, all the way through the actual delivery of health care and the way we run our health care delivery system,” he said.
What ethical integration looks like
Some speakers underscored a simple concept: Focus on the user — in this case, the patient or the physician — and all else will follow. “It’s putting patients at the center of everything that we do,” said Lisa Lehmann, MD, PhD, director of bioethics at Brigham and Women’s Hospital. “We need to be thinking about their needs and priorities.”
Speakers on one panel — which included Lehmann, Stanford Medicine bioethicist Mildred Cho, PhD, and Michael Howell, MD, chief clinical officer at Google — pointed to the complex nature of a hospital system, highlighting the need to understand the purpose of any intervention before implementing it and to ensure that all systems developed are inclusive, with input from the populations they’re meant to help.
One key to that is transparency — being explicit about where the data used to train the algorithm came from, what the algorithm was originally intended for and whether future patient data will continue to help the algorithm learn, among other factors.
“Trying to predict ethical problems before they become consequential [means] finding a perfect sweet spot of knowing enough about the technology that you can make some ascertainment of it, but getting to it before [an issue] spreads further,” said Danton Char, MD, associate professor of pediatric anesthesiology, perioperative and pain medicine. One of the key steps, he said, is to identify all the stakeholders who could be impacted by a technology and take note of how they would want those questions answered for themselves.
Jesse Ehrenfeld, MD, president of the American Medical Association, discussed four drivers of adoption for any digital health tool, including those powered by AI: Does it work? Does it work in my institution? Who pays for it? Who is liable?
Michael Pfeffer, MD, chief information officer for Stanford Health Care, highlighted a recent example in which many of those questions were tested with care providers at Stanford Hospital. Clinicians were offered assistance from a large language model that drafts initial replies to patient inbox messages. While the drafts weren’t perfect, the clinicians who helped develop the technology reported that the model lightened their workload.
“There are three big things that we’ve been focusing on: safety, efficacy and inclusion. We’re physicians. We take this oath to ‘do no harm,’” said Nina Vasan, MD, clinical assistant professor of psychiatry and behavioral sciences, who joined a panel with Char and Pfeffer. “That needs to be the first way that we’re assessing any of these tools.”
2 + 2 > 4
Nigam Shah, MBBS, PhD, professor of medicine and of biomedical data sciences, kicked off a discussion with a jarring statistic, although he gave the audience fair warning. “I speak in bullet points and numbers, and sometimes they tend to be very direct,” he said.
To Shah, the success of AI hinges on our ability to scale it. “Doing the science right for one model takes about 10 years, and if every one of the 123 fellowship and residency programs wanted to test and deploy one model at that level of rigor, with our current ways of organizing work and [testing] it at every one of our sites to make sure it works properly, it would be $138 billion,” Shah said. “We can’t afford it. So, we have to find a way to scale, and we have to scale doing good science. The skills for rigor reside in one place, and the skills for scale reside in another, and hence, we’re going to need these kinds of partnerships.”
The way to get there, according to a number of speakers at the symposium, is public-private partnership, such as that being modeled through the recent White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the Coalition for Health AI, or CHAI.
“The public-private partnerships [with] the most potential are [between] academia, the private sector and the public sector,” said Laura Adams, a senior advisor at the National Academy of Medicine. The government can bring public credibility, academic medical centers can bring legitimacy, and the technical expertise and compute time can come from the private sector, she noted. “All of us are better than any one of us, and we’re recognizing…that we don’t have a prayer of reaching the potential of [AI] unless we understand how to interact with each other.”
Innovating in AI, filling gaps
AI is also making an impact in research, whether scientists are using it to probe the dogma of biology, predict new synthetic molecular sequences and structures to underpin emerging therapeutics, or even to help them summarize or write scientific papers, several speakers said.
“There’s an opportunity to see the unknown,” said Jessica Mega, MD, a cardiologist at Stanford Medicine and co-founder of Alphabet’s Verily. Mega pointed to hyperspectral imaging, which captures features of an image that are invisible to the human eye. The idea is to use AI to detect patterns, for example, in pathology slides, unseen by humans that are indicative of disease. “I encourage people to push for the unknown. I think everyone here knows someone who is suffering from a health condition that needs something beyond what we can offer today,” Mega said.
There was also a consensus among panelists that AI systems will provide new means of identifying and combating biased decision-making, whether by humans or AI, and opportunities to figure out where that bias is coming from.
“Health is more than health care,” was a statement echoed by multiple panelists. The speakers stressed that researchers often overlook social determinants of health — such as socioeconomic status, ZIP code, education level, and race and ethnicity — when collecting data and enrolling participants for studies. “AI is only going to be as good as the data that the models are trained on,” said Michelle Williams, ScD, a professor of epidemiology at Harvard University and a visiting professor of epidemiology and population health at Stanford Medicine. “If we are looking for improving health [and] decreasing disparities, we’re going to have to make sure that we are collecting high-quality data on human behaviors, as well as the social and physical environment.”
Natalie Pageler, MD, clinical professor of pediatrics and of medicine, shared that cancer data aggregates often exclude data from pregnant people, creating inherent biases in models and exacerbating an existing gap in health care.
As with any emerging technology, there are ways that AI can make things better and ways it can make things worse, said David Magnus, PhD, professor of pediatrics and of medicine. The risk, Magnus said, is that AI systems learn about inequitable health outcomes driven by social determinants of health and reinforce them through their outputs. “AI is a mirror that reflects the society that we’re in,” he said. “I’m hopeful that every time we get an opportunity to shine a light on a problem — hold up that mirror to ourselves — it will be a spur for things to get better.”
If you weren’t able to attend the RAISE Health symposium, recordings of the sessions are available online.