How a driveway chat resulted in the Institute for Human-Centered Artificial Intelligence
Stanford’s new Institute for Human-Centered Artificial Intelligence (HAI) has lofty ambitions. It aims to fundamentally change the field of AI by integrating a wide range of disciplines and prioritizing true diversity of thought.
But it turns out that the origins of the institute are somewhat more humble. As institute leaders JOHN ETCHEMENDY and FEI-FEI LI explain in a recent article, it all started with a conversation in Li’s driveway. Etchemendy, ending his term as Stanford provost, bought a house adjoining Li’s back yard. After he and his family moved in, Etchemendy replaced the old fence between the two homes. In one spot, connecting the Li back yard to the Etchemendy side yard, he built a gate. Their subsequent collaboration has led to a “well-trodden path” between their two homes.
Here’s how they explain the institute’s genesis on the HAI website in a piece they call “Opening the Gate.”
It was the summer of 2016.
“John,” [Fei-Fei] said, “As Stanford’s provost, you’ve led an effort to draw an arrow from technology to the humanities, to help humanists innovate their methodology. It’s time to build another arrow coming back the other direction. It should become a complete feedback loop. We need to bring the humanities and social thinking into tech.”
She went on to explain an epiphany she had recently had — a problem she could no longer ignore. The people building the future all seemed to come from similar backgrounds: math, computer science and engineering. There were not enough philosophers, historians or behavioral scientists influencing new technology. There were very few women or people from underrepresented groups.
“The way we educate and promote technology is not inspiring to enough people. So much of the discussion about AI is focused narrowly around engineering and algorithms,” she said. “We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”
Standing in Fei-Fei’s driveway, John saw the vision clearly. As a mathematical logician, he had been actively following the progress of AI for decades; as a philosopher, he understood the importance of the humanities as a guide to what we create. It was obvious not only that AI would be foundational to the future, but that its development was suddenly, drastically accelerating.
If guided properly, AI could have a profound, positive impact on people’s lives. It could help mitigate the effects of climate change; aid in the prevention and early detection of disease; make it possible to deliver quality medical care to more people; help us find ways to provide better access to clean water and healthy food; contribute to the development of personalized education; help billions of people out of poverty; and help solve many other challenges we face as a society.
But AI could also exacerbate existing problems, such as income inequality and systemic bias. In the past couple of years, the tech industry has struggled through a dark time. Multiple companies violated the trust and privacy of their customers, communities and employees. Others released products into the world that were not properly tested for safety. Some applications of AI turned out to be biased against women and people of color. Still more led to other harmful unintended consequences. Some sought to use the technology to replace human workers, missing the opportunity to augment them instead.
That day began a conversation that continued over many months. We discovered that we both had been on a similar quest throughout our careers: to discover how the mind works — Fei-Fei from the perspective of cognitive science and AI, and John from the perspective of philosophy.
Read more of their story on the website of the Institute for Human-Centered Artificial Intelligence.