Recent advances in easy-to-access artificial intelligence tools have brought with them a quandary about AI’s place in education. Discussions about whether and how generative AI should be used in classrooms are going strong. In the meantime, the tools are widely available to students and teachers alike.
To keep pace with change – and to help chart productive paths forward for AI use in the classroom – educators at Stanford are testing different ways to apply generative AI in coursework and sharing their ideas with others.
The Office of the Vice Provost for Undergraduate Education (VPUE) and the Center for Teaching and Learning (CTL) have launched a new effort called AI Meets Education at Stanford (AIMES), which provides a window into how courses across disciplines are leveraging generative AI tools for learning, as well as how they are placing important constraints on students’ uses of AI to foster deep learning and critical thinking.
“As AI evolves, faculty are changing class policies and assignments,” said James T. Hamilton, the Freeman-Thornton Vice Provost for Undergraduate Education. “AIMES makes it easier for faculty to share ideas and approaches to encouraging or restricting AI use in their courses.”
VPUE also recently welcomed Michele Elam as one of two new senior associate vice provosts for undergraduate education. Elam, the William Robertson Coe Professor in the School of Humanities and Sciences (H&S) and a senior fellow at the Stanford Institute for Human-Centered AI, will co-lead the AIMES initiative along with CTL’s associate vice provost for education and director, Cassandra Volpe Horii.
“As our teaching and learning community considers whether, how, and when to use AI in courses, it is essential to do so within the broader context of Stanford’s enduring mission prioritizing open inquiry and ethical citizenship,” said Elam. “We also recognize that AIMES’s work occurs within the context of often wildly conflicting information about the possibilities, limitations, and harms of artificial intelligence. Therefore, one of our new initiatives includes helping the campus community navigate and evaluate all this incoming information by becoming a resource for the latest, most thoroughly vetted research and scholarship on AI and education – leveraging especially the expertise on campus – to help educators and students make the most critically informed, evidence-driven decisions and choices about AI.”
Below are four examples from AIMES that illustrate how instructors are bringing AI into the classroom. A library of these examples is available on the CTL website, along with synthesized insights and professional development resources on university teaching and AI.
- Program in Writing and Rhetoric (PWR) courses provide some of the earliest practice and feedback on thinking-through-writing that students encounter at Stanford. The PWR lecturers are taking a variety of approaches to AI.
For example, Shay Brawn, an advanced lecturer, teaches the PWR 1 course Rhetoric of Robots and AI, which grapples with AI topics. Brawn provides clear guidance about allowed and disallowed uses of AI. While she acknowledges that generative AI might have valid uses in other contexts, her policy is that students may not use any writing generated by an AI for class assignments and may not use AI summaries as a replacement for engaging directly with actual sources. Students may, however, use AI to locate sources, identify key concepts to aid in research, and correct grammar, though Brawn warns about AI hallucinations. She is also clear that AI use should be transparently cited. Brawn divides the AI uses that are not permitted in her class into two categories: those that run counter to the learning goals of the course and those that are unethical, prompting students to think more carefully about each. Brawn credits the work of Lisa Swan and Valerie Kinsey, who also teach in PWR, for earlier course policies that she adapted.
- Art practice instructor Morehshin Allahyari, assistant professor of art and art history in H&S, allows the use of AI as a tool for generating ideas or visual sketches for projects but not for replacing the critical thinking, communication, and research skills needed to complete assignments. She also co-creates with her students an agreement on the use of AI tools, within set boundaries. Allahyari emphasizes students’ effort and learning throughout the process, rather than just the final product of their work.
- Philosophy 20N, Philosophy of Artificial Intelligence, is a seminar that grapples with questions at the core of the field of AI through the perspectives of philosophy of mind, epistemology (the philosophy of knowledge), and ethics. In this course, John Etchemendy, professor of philosophy in H&S, has students write regular journal entries on these questions and a longer final paper. In an introductory note to students, Etchemendy urges students not to use LLMs to help write their work for the course: “By the end of the class, the three of us [faculty and teaching assistants] will come to know each of you well enough to recognize whether your paper reflects the way you think and express yourself. Believe me, ChatGPT isn’t going to capture your distinctive voice and way of thinking.”
- In the Computer Science course CS 121, Equity and Governance for Artificial Intelligence, senior lecturer Cynthia Bailey has students take on authentic policy-making tasks, such as making legislative recommendations about AI from the perspective of a staffer advising a U.S. Congress member or state legislator. The assignment aims to “evoke the reality of being a congressional staffer, to foster intrinsic motivation to do the assignment well,” explained Bailey. Congressional staffers must digest, understand, and recall large volumes of information at a blistering pace. To do that work, they use web searches, Wikipedia, and, in some cases, generative AI. Likewise, Bailey’s students are allowed limited use of generative AI for this project, such as to help them analyze detailed legislative and technical material in a short amount of time. In the course of this work, students are held to exceptionally high standards of integrity, excellence, and discretion – as though they were real-world congressional staff – and must disclose their use of AI.
For more information
Elam is also a professor of English in H&S, a race and technology affiliate at the Center for Comparative Studies in Race and Ethnicity, and an affiliate of the Clayman Institute for Gender Research and the Wu Tsai Neurosciences Institute. Etchemendy is also the Patrick Suppes Family Professor in H&S, the Denning Co-Director of Stanford HAI, and provost emeritus. Hamilton is also the Hearst Professor and professor of communication in H&S, and a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
Writer: Taylor Kubota
