Stanford Report, March 17, 1999

Computer pioneer discusses atheism, artificial intelligence


After he was introduced, John McCarthy glanced up briefly at the stained glass windows in the side chapel of Memorial Church and then down at a single sheet of folded paper that he held in his hands.

"Considering where this is, I have to express my attitude toward [religion]," said the Charles M. Pigott Professor of Computer Science and one of the fathers of the field of artificial intelligence. "To count as an atheist, one needn't claim to have proof that there are no gods. One only needs to believe that the evidence on the god question is in a similar state to the evidence on the werewolf question. So I am an atheist."

The eminent computer scientist found himself in this clearly unfamiliar setting March 10 as the invited speaker at "What Matters to Me and Why," a biweekly series that provides a forum for Stanford educators to discuss the formative experiences, values, and religious or philosophical convictions that inform their work.

McCarthy described himself as a "tolerant atheist" who, upon hearing of a group in Alabama that was trying to prevent the Ten Commandments from being removed from a school wall, sent the organizers $100 because he thought they were being bullied.

Citing the well-known quote by J. B. S. Haldane, "Now, my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose," McCarthy said that he views the basic nature of things as a puzzle. He said that he subscribes to an extreme interpretation of quantum mechanics that describes the universe as made up of multiple possible worlds.

"The universe itself has no purpose. Purposes are constructed by human beings," he said.

When he considers what matters to him, McCarthy said that he cares about a lot of specific things that don't seem particularly related and don't follow any single general principle. Logical artificial intelligence means a great deal to him ("after all, I've been thinking and working on it for 40 years"), but it doesn't have much to do with another of his interests, the ultimate destiny of mankind.

Despite the progress that has been made in AI since he coined the term "artificial intelligence" in the 1950s, the research has not brought the field within development range of simulating human intelligence, which is its ultimate goal, he said. "We still need new basic ideas. So it may take five years, and it may take 500 years. The understanding of intelligence is a hard scientific problem."

Nevertheless, the computer scientist remains confident that creating artificial intelligence is an achievable goal. This confidence, he acknowledged, is rooted in his materialist worldview. "Human intelligence is carried out by the human brain. If one material system can exhibit intelligence, why can't another?" he said.

Several years ago, McCarthy got interested in the ultimate fate of humanity: how humanity will make out over the long term. "I would find it depressing if . . . we were going through a period of temporary prosperity, but, in the long term, people are doomed to be poor [due to the lack of some basic resource]," he said.

Fortunately for his frame of mind, McCarthy has concluded that human society has access to adequate supplies of energy, food and some 20 other basic necessities to support a total world population of about 15 billion, a figure the world's population is unlikely to exceed if current demographic trends continue. "I haven't done any original work on this. I've just collected information, which I have posted on a lot of web pages," he said.

When asked why the fate of humanity mattered to him, McCarthy responded, "Why do I need an added justification? I was raised to be concerned about humanity. Of course, I was also raised as a communist, but I rejected that."

In response to the question of how he reconciled free will and artificial intelligence, McCarthy answered, "I believe in free will, even for robots!" Suppose you build a robot, he said. What attitude would you program into the robot regarding free will? If you programmed it to say, "I have no free will!" then the program would turn out to be unsuccessful. SR