Stanford University News Service



CONTACT: Stanford University News Service (650) 723-2558

Neural network model to help computers understand language

STANFORD -- If understanding spoken language is often hard even for humans, how can a computer manage? An approach using neural networks may offer some initial answers to this notorious obstacle in the development of "intelligent" computers.

Stanford psychology professor David Rumelhart has devised a general model and small experimental cases for how a neural network can reach an interpretation from a spoken or written string of words.

One of many possible applications of an interpreting network is speech recognition.

"People dream about machines that are able to understand language, but the quality today is quite poor," he said. "Our networks behave interestingly, although the work is still conceptual."

Rumelhart will present his ideas at the annual meeting of the American Association for the Advancement of Science, to be held Feb. 11-16 in Boston. He will deliver a talk in Friday afternoon's session on "Mathematics: Concepts and Computations."

The model to account for computer comprehension of language is based on "constraint satisfaction," a mechanism in which the neural network attempts to find the most coherent interpretation of a given sentence within a given set of constraints.

That simulates the human strategy: Since sentences usually have many potential meanings, we automatically check possible interpretations against our knowledge of the world to find the one that is consistent with both what we know and what we hear.

"It turned out that some neural networks are good at very quickly satisfying a whole set of constraints simultaneously," Rumelhart said.

The constraints can come from various sources: the words the computer hears; linguistic rules, such as what type of word has to follow a certain other kind of word; and information about reasonable inferences that can be made from specific combinations of words.

Strong constraints must be met; the network will rule out an interpretation that violates certain grammatical relationships between words.

Weak constraints are associated with words that usually carry one meaning but take on another in a different context. They require the neural network to evaluate more possibilities. If a solution satisfies all other constraints equally well, the network will choose the most common meaning for that word; if not, it may have to assign the word an unusual meaning in order to stay coherent.
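The interplay of strong and weak constraints can be illustrated with a small sketch. This is a hypothetical toy, not Rumelhart's actual network: candidate interpretations, constraint functions, and weights below are all invented for illustration. A violated strong constraint rules a reading out entirely, while weak constraints merely add up to bias the choice.

```python
# Hypothetical illustration of strong vs. weak constraints in choosing
# an interpretation; all names and weights are invented for this sketch.

def best_interpretation(candidates, strong_constraints, weak_constraints):
    best, best_score = None, float("-inf")
    for cand in candidates:
        # Strong constraints are absolute: one violation rejects the reading.
        if any(not ok(cand) for ok in strong_constraints):
            continue
        # Weak constraints merely bias the score toward likelier readings.
        score = sum(weight for test, weight in weak_constraints if test(cand))
        if score > best_score:
            best, best_score = cand, score
    return best

# "The pitcher threw the ball": is "pitcher" a ballplayer or a jug?
candidates = [
    {"verb": "threw", "subject_sense": "baseball player"},
    {"verb": "threw", "subject_sense": "jug"},
]
strong = [lambda c: c["subject_sense"] is not None]  # subject must get some sense
weak = [
    # The everyday default sense gets a small bonus...
    (lambda c: c["subject_sense"] == "jug", 1.0),
    # ...but the context "threw" more strongly supports the ballplayer sense.
    (lambda c: c["verb"] == "threw" and c["subject_sense"] == "baseball player", 2.0),
]
choice = best_interpretation(candidates, strong, weak)
print(choice["subject_sense"])  # the context-supported reading wins
```

Here the weak constraint favoring the default sense is simply outweighed by the contextual one, matching the behavior described above.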

The neural network reaches the best - in network terms, the most harmonious - solution through a built-in mechanism called "hill climbing."

The network locally modifies the activity of a unit - the computer equivalent of a neuron - and that unit's immediate neighbors to increase the overall harmony of the system. That reduces the complexity of the computation process dramatically, because "all the computer has to worry about regarding any one unit are the units immediately connected to it," said Rumelhart.
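A minimal sketch of this idea, under the assumption of a small Hopfield-style network (the units, weights, and biases here are invented, not from Rumelhart's experiments): binary units are connected by symmetric weights that encode constraints, and hill climbing flips one unit at a time whenever the flip raises overall harmony, stopping when no single flip helps.

```python
# Hypothetical miniature harmony network: units take values +1/-1,
# positive weights mean two units support each other, negative weights
# mean they conflict. Hill climbing makes only local, greedy changes.

def harmony(state, weights, bias):
    # H = sum over pairs i<j of w[i][j]*s[i]*s[j], plus sum of b[i]*s[i].
    h = sum(b * s for b, s in zip(bias, state))
    n = len(state)
    for i in range(n):
        for j in range(i + 1, n):
            h += weights[i][j] * state[i] * state[j]
    return h

def hill_climb(state, weights, bias):
    state = list(state)
    improved = True
    while improved:
        improved = False
        for i in range(len(state)):
            # Whether flipping unit i helps depends only on the units
            # connected to it, which keeps each local decision cheap.
            flipped = state[:]
            flipped[i] = -flipped[i]
            if harmony(flipped, weights, bias) > harmony(state, weights, bias):
                state = flipped
                improved = True
    return state

# Three units: 0 and 1 support each other, both conflict with unit 2,
# and unit 2 is biased toward staying off.
W = [[0, 2, -1],
     [0, 0, -1],
     [0, 0, 0]]
b = [0.5, 0.5, -0.5]
final = hill_climb([-1, 1, 1], W, b)
print(final)  # units 0 and 1 end up on, unit 2 off
```

Each flip is evaluated locally, yet repeated local improvements settle the whole network into a jointly consistent state - the core appeal of the mechanism described above.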

The greater challenge is to figure out how to express what the units mean and therefore, how to implement the constraints. The strategies for building large structures in networks are exceedingly complex, according to Rumelhart.

"People know so much that building the structure is really hard, especially representing syntactic categories and all the meanings at the same time," he said.

In experimental cases, Rumelhart uses language models to help anticipate the possible words the neural network reads or hears. The model looks at many sentences and learns about their structure; it then predicts whether to expect a verb or a noun next, for example.

It chooses among clusters of possible next words and predicts by considering the context and the grammatical structure up to that point.
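The flavor of such prediction can be shown with a deliberately simple sketch - a bigram counter rather than a neural network, and an invented toy corpus, so this is an analogy to the idea above, not Rumelhart's method. It tallies which words follow which in training sentences, then predicts the most frequent successor of the current word.

```python
from collections import Counter, defaultdict

# Hypothetical next-word predictor: counts word successions in a tiny
# invented corpus, then guesses the likeliest next word from context.

def train_bigrams(sentences):
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    # Return the most frequent word seen after `word`, or None if unseen.
    if not follows[word]:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the food",
]
model = train_bigrams(corpus)
guess = predict_next(model, "the")  # a noun, as the grammar suggests
```

A real language model conditions on far richer context and grammatical structure, but the underlying move - choosing among clusters of possible next words by learned statistics - is the same.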

Language models may then be combined with a speech recognizer or a handwriting recognizer to attempt reading and interpreting at the same time.

To aid speech recognition, Rumelhart has produced network learning procedures for recognizing the often blurred borders between individual units of speech. He also has developed an algorithm to segment and recognize information simultaneously.

Moreover, he uses a dictionary for the network to look up words. In its current form, this dictionary operates as a discrete program, but "ultimately, it should be a function of a homogeneous network system," he said.


(This story was written by Gabrielle Strobel, a science writing intern at the Stanford News Service.)


This is an archived release.


© Stanford University. All Rights Reserved. Stanford, CA 94305. (650) 723-2300.