Stanford policy lab explores government use of artificial intelligence

Students contribute to a report on how AI is being used in government agencies and where it might go in the future.

Federal administrative agencies across the United States employ machine learning and artificial intelligence to make decisions. But what happens when agencies can’t explain how those algorithms work? Students in a policy lab at Stanford, Administering by Algorithm: Artificial Intelligence in the Regulatory State, are exploring this question and what it means for the future when law and computers intersect.

An interdisciplinary policy lab harnesses Stanford’s unique mix of legal and technical expertise. (Image credit: Getty Images)

Stanford co-instructors David Engstrom, Daniel Ho and California Supreme Court Justice Mariano-Florentino Cuéllar – along with Professor Catherine Sharkey of New York University School of Law – have brought together 25 burgeoning lawyers, computer scientists and engineers to probe the technologies government agencies develop and deploy. The lab culminates in a report that will be submitted to the Administrative Conference of the United States (ACUS), which issues recommendations on how federal agencies should operate.

“We want to understand what is happening now and we also want to get inside agencies and really understand what might be coming down the pike in the next five or 10 years,” said Engstrom, a professor of law.

Combining law and technology

Engstrom sees the lab as a model for a new type of interdisciplinary work that harnesses Stanford’s unique mix of legal and technical expertise. As artificial intelligence and machine learning become more sophisticated, laws will need to adapt to accommodate the developing technology. But in some cases, federal agencies can’t understand the “black box systems” they implement to do government work, from allocating benefits to prosecuting violations. Even computer scientists may not fully comprehend why an AI system makes the decisions it does.

“We have a collision between a body of law that says we want agencies to explain why they’re doing what they’re doing and agencies using tools that, by their very structure, are not fully explainable,” Engstrom explained.

The course evolved as a way of addressing this clash.

“Some of the most interesting conversations have required both a technical grasp and a legal understanding of a problem,” said Ho, the William Benjamin Scott and Luna M. Scott Professor of Law. “Observing that conversation play out between the students is really rewarding.”

Working together

The lab is divided into teams – each a mix of law and computer science students – with two tasks. First, the teams fanned out to probe the 100 most important federal administrative agencies, including the Environmental Protection Agency, the Social Security Administration and the Securities and Exchange Commission. When they found examples of algorithms involved in decision-making, the students worked together to evaluate the technology and judge what category it fell into: Was it AI, machine learning or something far more basic?

Next, they engaged with the agencies themselves to examine specific applications and understand where the technology might be headed. Students brainstormed how new technological advances might intersect with the law – and how to navigate those collisions. Their findings, Engstrom said, will be incorporated into the ACUS report, which they hope will influence future policies governing agencies.

“It is the most collaborative and interdisciplinary class that I’ve been in at Stanford,” said Cristina Ceballos, a third-year law student and PhD student in philosophy.

She added that without the CS students on her team, she wouldn’t know what questions to ask when speaking with agency representatives. “I think that if you are going to regulate how agencies are going to use AI, you have to have some sense of what the AI is actually doing,” she said.

Urvashi Khandelwal, a fourth-year computer science PhD student, said it’s important for people in her field to explore how to deploy AI and machine learning in the real world. “I’ve heard a lot about what machine learning researchers are talking about,” she said, “but I did not have much perspective on the legal side or the policy side.”

Real-life impacts

Engstrom and Ho expect that their students will finish the course with a deeper appreciation of how interdisciplinary work leads to better solutions.

Derin McLeod, a second-year law student, said that he appreciates the value of having two disciplines together in the same room when thinking about complex issues. “Going back and forth tracks the challenges that we are trying to grapple with,” McLeod said. “It’s not just a technical problem of one kind or another, it’s explaining it to other audiences.”

For CS students, Engstrom hopes they will have a “greater sense of the promise and peril of the tools they develop.”

Indeed, Sandhini Agarwal, a senior majoring in symbolic systems and the only undergraduate in the class, recognizes that developing AI and machine learning could have significant consequences. “I’m learning how to ground some of the ideas we build in CS classes and seeing, when they are actually being used in the real world, what are some challenges that we face,” Agarwal said.

“The coolest thing about the class is the back and forth between the CS and law students,” Agarwal added. “I’m excited for more collaborations to take place.”