The promises and challenges posed by an increasingly digital world dominated the discourse at Stanford Medicine’s Center for Digital Health symposium Oct. 29: Speakers explored how human interaction with AI-powered chatbots changes social dynamics, the need to build trust in science and emerging technologies, and what it will take to ensure that artificial intelligence and digital health technologies flourish around the world — not just in high-income populations.
“Digital health, as many of you know, is not just about optimizing the health of patients here and now. It’s also about redefining how we approach health care in the future,” said Eleni Linos, MD, DrPH, director of the center.
“It’s difficult to remember a time when these technologies were not a part of the health care landscape,” noted Lloyd Minor, MD, dean of the School of Medicine and vice president for medical affairs at Stanford University, who joined via video screen. “Emerging technologies such as generative AI promise to transform patient care, medical education and biomedical research in profound ways. Our responsibility is to figure out how to do this equitably and safely.”
David Entwistle, president and CEO of Stanford Health Care, emphasized the importance of being digitally driven, one of the core pillars of Stanford Medicine’s integrated strategic plan, which binds the missions of Stanford Health Care and the School of Medicine. Entwistle spoke of the ample opportunity, given Stanford Medicine’s strong ties to Silicon Valley, to leverage cutting-edge technology and bring new health care innovations to fruition in the clinic.
“We have to do that thoughtfully, in a way that actually will have a meaningful purpose,” he said. That means applying new technologies so that “they will make a difference in our patients, in our clinicians and [result in] better outcomes.”
Building trust in science and AI
Panelists spoke about building and sustaining trust — in science, in new technologies and in the people who power them. “We’re fortunate in medicine that we are a regulated industry,” said Curt Langlotz, PhD, professor of radiology, of medicine and of biomedical data science. “We have the [Food and Drug Administration], which is looking out for the safety of the products that we’re building and implementing. The FDA has had a fairly good balance between safety…and innovation.” The thing that’s missing, he said, is transparency.
There isn’t enough information about the data used to train a given algorithm, or about whether that data resembles other patient populations and health care settings closely enough for the algorithm to perform well outside its original context, he said. “We have a lot of potential customers of AI out in the marketplace who are asking the question, ‘How do I know if this product would work for me in my practice?’ I don’t think we have enough information about that.”
Nigam Shah, MBBS, PhD, professor of medicine and of biomedical data science, cautioned that the field needs to approach AI with care. “The mindset has to be about improving medical care and health care,” Shah said. “What happens right now is that our field tends to get caught up in the hype, and we want a revolution every three minutes. But medicine doesn’t change that fast, and we have to accept the natural rhythm of our field and adapt this incoming technology so that it works in our service instead of dragging us into distraction.”
Sometimes, building trust comes down to how you connect with the people you’re trying to reach. “People naturally understand things through storytelling, and you can tell a story in a way that imparts insight, that gives you an aha moment,” said Dana Cho, vice president of design at Pinterest. She gave an example from her own life: her daughter told her about an all-plastic island the size of Texas floating in the middle of the Pacific Ocean. “After that vivid illustration, I can’t use Ziploc bags. I just can’t anymore.”
Other storytelling techniques can build empathy and help audiences or end users reach moments of insight or behavior change, she added. After the pandemic, Cho noted, the discourse about science devolved into a simplistic dichotomy: “‘There are people who believe in science, and there are people who don’t believe in science.’ I really think that shows a lack of imagination.” Instead, she suggested, “Maybe we haven’t illustrated it in a way that is really impactful and meaningful to people’s lives, their personal narratives and their identities.”
People first, then technology
During a panel on global health in the AI era, Maya Adam, MD, PhD, an associate director of the Center for Digital Health, raised a conundrum on the minds of technologists and ethicists alike. Large portions of the global population don’t have access to robust education and health care systems or advanced digital technologies. So, when new technologies like AI are brought into the mix, “Is there a risk that this kind of innovation could aggravate what we call the digital divide?” she asked.
It depends on how the technologies are designed, the panelists said. Take telehealth, said CK Cheruvettolil, former senior strategy officer of digital health and AI at the Bill & Melinda Gates Foundation. If a patient sees a doctor through a telehealth system and the doctor sends a prescription to the local pharmacy, the success of that visit depends on the patient having access to a pharmacy and on the pharmacy having the staff and resources to fill the order. That might work in the United States, Cheruvettolil said, but it’s not the norm in other parts of the world.
Michelle Williams, ScD, professor of epidemiology and of public policy at Harvard University, commented on the root of that problem. “Within the public health space, we have to deal with the legacy of innovation being designed for others and not for those populations that are most in need of innovation of health care [and] health surveillance,” she said. “I worry that we are not centering our innovation on the populations that need help, so we inadvertently open this opportunity for disparity to grow.” The solution, the panelists largely agreed, is not to pull back on innovation, but to redesign the technologies to meet the needs of specific populations in ways that fit their culture, available resources and environmental context.
Collaboration is key to innovation
“The rapid pace of technological change in health care is such that no single individual, no single organization or sector, can address these complexities alone. Our work thrives when we work together, drawing on the strengths of our varied expertise,” said Linos, who is a professor of dermatology and of medicine. That interdisciplinary collaboration is more important than ever, said Jessica Mega, MD, co-founder of Verily Life Sciences, who shared that this year, about 1 billion people will interact with some aspect of digital health, be it through a patient portal, a digital health monitoring tool, telehealth technology or something else.
Emerging technology development will always benefit from diverse perspectives. That might mean creating cross-industry, interdisciplinary collaborations or, in the case of Stanford University President Jonathan Levin and Fei-Fei Li, PhD, professor of computer science and co-director of the Stanford Institute for Human-Centered AI, a chat over a cup of tea.
When Li first met Levin, who at the time was dean of the Stanford Graduate School of Business, the two swapped ideas about the future of AI. “In 2017 or so, Fei-Fei came over one night, and she had this idea to create an interdisciplinary center at Stanford around AI, which would eventually become HAI,” recalled Levin, referring to the Stanford Institute for Human-Centered Artificial Intelligence. Part of her vision was to ensure that the institute wasn’t set up just for computer scientists, but for all the people working on the technology — those conducting medical research as well as clinicians, ethicists, environmentalists, economists and more. “She somehow saw all of that almost 10 years ago, which was really extraordinary,” Levin said.
People often ask why academia still needs to be a core part of AI development, Li said. “I was asked this question in the White House, in front of Congress.” Her answer, she said, will always be the same: “Because of public good. Universities and public sectors create public good that’s so important for the advancement of our civilization. For the decades and hundreds of years to come, it’s our responsibility to create [a] public good for curiosity, for knowledge [and] for the sake of truth.”
Media Contacts
Hanae Armitage
Tel 650-725-5376
harmitag@stanford.edu