As artificial intelligence (AI) continues to reshape education, research, and administration at Stanford, a new report from the university’s AI at Stanford Advisory Committee calls for balancing innovation with responsibility and alignment with the university’s key values.
Underscoring both the opportunities and risks of AI, the “AI at Stanford” report sets forth guiding principles to encourage experimentation and creativity while addressing challenges such as plagiarism, authorship, and ethical use.
“The growth of AI technologies has huge implications for higher education, from the classroom to the research lab,” said Provost Jenny Martinez. “The possibilities are incredibly exciting, and I’m confident Stanford will continue to be a leader in this area. As we support advances in this technology, it’s crucial to assess how AI is being used at Stanford today, consider how it may be used in the future, and identify any policy gaps that may exist. I’m grateful for the advisory committee’s thoughtful and thorough work, and the guidance it has provided in advancing the responsible use of AI at Stanford.”
In March, the provost charged the AI at Stanford Advisory Committee with evaluating the role of AI in administration, education, and research, and with identifying what is needed to use AI responsibly at the university.
Various university offices and committees will consider and act on the report's recommendations, and the committee chair will present the findings to the Faculty Senate in the winter quarter. The committee, made up of 10 faculty and staff members from various campus units, will continue to meet to address future issues related to AI use at Stanford.
Balancing experimentation with principles
While the committee acknowledged legitimate concerns about AI, it sought to avoid rigid policies that might stifle the technology's potential benefits. "We wanted to first encourage experimentation in safe spaces to learn what it can do and how it might help us pursue our mission," said Committee Chair Russ Altman. "But then we also wanted to establish clear 'hot button' areas where people should proceed with an abundance of caution."
The report highlights potential policy gaps for the university, noting that many situations and challenges can't be anticipated, and offers general principles to guide AI use on campus, such as requiring human oversight and ethical consideration whenever AI is used.
“There should always be professionalism and personal responsibility. Whenever somebody uses AI, even if the AI does some work, they need to take responsibility for the output, and if there’s mistakes, it’s on them,” said Altman, the Kenneth Fong Professor and professor of bioengineering, of genetics, of medicine, of biomedical data science, and senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
According to the report, people should resist lapsing into AI exceptionalism – the assumption that existing laws, regulations, and university policies don't apply to AI. "It seems seductively capable," Altman said, "and so people let their guard down and use it in ways that suggest they are forgetting what it is and how it works."
As a guiding principle, the report recommends an "AI golden rule": use AI with others as you would want them to use AI with you. For example, would you want AI to be used to review your proposal? This assessment would be based on individual judgment and evolving community norms, and combined with the other principles to inform AI use.
Education
One primary area the committee examined is how AI affects education. Students have already adopted AI technologies such as ChatGPT, meaning the Honor Code and individual classroom policies may need to be revisited, the committee noted.
At the same time, many faculty aren’t experienced with AI and are unaware of how they can use it, said Dan Schwartz, committee member and dean of the Graduate School of Education (GSE).
“Students have been exploring it much more than the faculty, and that’s why it’s important to find ways to educate the faculty, which I think is also true on the research side,” Schwartz said. “The question is slowly changing from what are our students going to cheat with, to a more sophisticated question of what counts as cheating, how they can use AI in their work, and how to make it part of the assignment.”
To help faculty navigate these questions, the committee recommends frameworks that can be tailored to different classroom needs. “It was something that students as much as faculty wanted so they can understand what’s permissible and productive,” Schwartz said.
The GSE has already created the AI Tinkery, part of the Stanford Accelerator for Learning, to provide a collaborative space for educators to explore the possibilities of AI.
Research
In research, AI raises complex challenges, including the question of whether AI can or should be credited as an author on publications. Other concerns include the use of AI in reviewing and writing proposals, training AI on student work, using data for AI research, addressing potential copyright violations in the outputs of large language models (LLMs), and detecting fraudulent behavior.
For example, the use of AI detectors is already leading to a higher volume of plagiarism allegations, including spurious ones, which creates huge burdens on the offices that handle research misconduct. The university’s misconduct policy and rules for investigating allegations may also need to be updated in accordance with new federal policy.
Many of these issues emerged from conversations with those who work in research administration, Altman said.
Researchers should also be reminded of, or made aware of, potential risks in AI use and kept updated as the legal risk profile changes, the report says. The committee also encourages the university to consider ways to expand computing resources to ensure Stanford remains a leader in the productive use of these technologies.
Administration
Regarding AI use in university administrative processes, the committee found several areas that may require more guidance and additional policies, such as hiring, performance reviews, admissions, communications, and surveillance. The report also included recommendations on education and training for the use of sensitive data, a streamlined procurement process for AI systems, and letters of recommendation.
The committee acknowledges that not all AI uses have been fully identified. “We were very aware that we couldn’t uncover all uses of AI on campus, and this is what led us to articulate the guiding principles that we hope are useful for folks evaluating new AI opportunities to see if there are any red flags,” Altman said. “In the end, these principles are probably more useful, and general purpose, than the specific issues we surfaced, which may or may not remain issues over the next few years.”
The committee is just one facet of the university's ongoing incorporation of AI into its everyday work, which also includes University IT resources like the Stanford AI Playground, where staff, students, and faculty can access a range of AI tools. At the same time, on the research side, faculty across the university are pushing the frontiers of knowledge in the development and use of AI across a wide variety of disciplines, supported by programs like the Stanford Institute for Human-Centered Artificial Intelligence and Stanford Data Science, as well as Marlowe, Stanford's new high-performance computing cluster.