02/10/93

CONTACT: Stanford University News Service (650) 723-2558

Neural networks make the same mistakes as children in learning language.

STANFORD -- Using a neural network to model how young children learn the past tenses of verbs, Stanford psychology Professor David Rumelhart has challenged a standard theory of language acquisition built on linguistic rules.

The learning behavior of the neural network closely mimicked that observed in children, even though the network had never been exposed to any such rules.

"We were able to produce patterns of errors in our model that we had seen in children, just by training the network," Rumelhart said.

He will present his findings at the annual meeting of the American Association for the Advancement of Science in Boston Friday, Feb. 12.

Rumelhart trained the network solely by presenting it with examples of verbs and their past tense forms. The network then had to predict the past tenses of verbs it had never seen.
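In miniature, that training regime looks like the Python sketch below. It is an illustration only, not Rumelhart's model: where his simulations encoded each verb's sound as a vector of phonological features, this toy substitutes character trigrams and trains a one-layer pattern associator with the delta rule -- the stem's features go in, and the past tense's features should come out.

    import numpy as np

    def trigrams(word):
        # Crude stand-in for phonological features: character trigrams,
        # with "#" marking the word boundaries.
        w = "#" + word + "#"
        return {w[i:i+3] for i in range(len(w) - 2)}

    def train(pairs, epochs=300, lr=0.1):
        # One layer of weights from stem features to past-tense features.
        vocab = sorted(set().union(*(trigrams(a) | trigrams(b) for a, b in pairs)))
        def encode(word):
            feats = trigrams(word)
            return np.array([float(t in feats) for t in vocab])
        W = np.zeros((len(vocab), len(vocab)))
        for _ in range(epochs):
            for stem, past in pairs:
                x, target = encode(stem), encode(past)
                y = 1 / (1 + np.exp(-(W @ x)))     # sigmoid output units
                W += lr * np.outer(target - y, x)  # delta rule
        def predict(word):
            # Which past-tense features does the net turn on for this stem?
            y = 1 / (1 + np.exp(-(W @ encode(word))))
            return sorted(t for t, v in zip(vocab, y) if v > 0.5)
        return predict

    # Teach by example only -- no rule is ever stated.
    predict = train([("look", "looked"), ("walk", "walked"),
                     ("kick", "kicked"), ("go", "went"), ("put", "put")])
    # "talk" was never seen, but its sound overlaps the regular verbs,
    # so the "-ed" output features (e.g. "ked", "ed#") should switch on.
    print(predict("talk"))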

As the network learned, it said, for example, go-goed or put-putted -- past tense forms that follow the regular pattern but are wrong, because English is an inconsistent language.

This type of mistake is called "overregularization" and has long been observed in children, who apply the past tense suffix of regular verbs to irregular ones. Children later learn the correct forms, although some errors, such as digged instead of dug, can persist until high school, according to Rumelhart.

To emulate children's initial learning process in the neural network, the researchers first looked at children's vocabularies and found that children begin by learning only a few verbs, naturally the most frequent ones. These, however, turned out to be the most idiosyncratic as well -- for example, go-went, be-was, make-made.

The researchers showed that as long as a child was learning only the frequent words, it did not overregularize, because no pattern had emerged yet: look-looked was no more typical than go-went. Only as the child learned more verbs did the regular type become preponderant, and the child began sorting the verbs into clusters.

That is when the child starts making overregularization mistakes. The network behaves in exactly the same way.

The network learns by strengthening the connections among its units that are activated by frequently occurring verbs -- at this stage, mostly the regular ones. In effect, it concludes that the past tense is made by adding -ed, and when faced with a new verb it predicts a regular past tense form.
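That frequency effect can be reproduced with the toy associator sketched earlier. The two vocabularies below, EARLY and LATER, are made up for illustration: the first verbs are few and idiosyncratic, and the later additions are mostly regular.

    # Reusing train() from the earlier sketch. Hypothetical vocabularies:
    # the earliest verbs are frequent but idiosyncratic; regulars pile up later.
    EARLY = [("go", "went"), ("be", "was"), ("make", "made"), ("put", "put")]
    LATER = EARLY + [("look", "looked"), ("walk", "walked"), ("jump", "jumped"),
                     ("kick", "kicked"), ("pick", "picked"), ("talk", "talked")]

    # With only idiosyncratic verbs there is no shared pattern to strengthen,
    # so a novel verb elicits nothing.
    print(train(EARLY)("lick"))   # expect: []
    # Once regulars are preponderant, the "-ed" connections dominate and get
    # bolted onto anything that sounds similar.
    print(train(LATER)("lick"))   # expect "-ed" features such as "ked", "ed#"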

Since some verbs containing -ing, such as sing and ring, also occur frequently, the neural network captures the new word bring in the same cluster of words and generalizes: it says bring-brang, just as sing-sang.
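The same pull toward a cluster shows up in the toy sketch when the training set contains an -ing group:

    # Again reusing train() from the earlier sketch.
    predict = train([("sing", "sang"), ("ring", "rang"),
                     ("look", "looked"), ("walk", "walked")])
    # "bring" was never trained, but it shares its sound features with
    # sing and ring, so the net answers with their "-ang" features.
    print(predict("bring"))       # expect trigrams such as "ang", "ng#"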

As the network learns more, it gradually distinguishes between the exceptions and the regular words.

"Eventually, the network was able to get it right again," Rumelhart said, "and this pattern was very similar to that of young children."

The result challenged the standard theory that children learn by going through a series of stages. According to that theory, the child memorizes some examples in the first stage; in the second, it learns the rule and begins overapplying it; finally, the child realizes it should not use the rule everywhere and memorizes the exceptions.

"Some colleagues think our [neural network] model is wrong because it does not invoke the concept of rules," Rumelhart said.

He added that there is no need to mention rules at all: the learning process itself, without distinct stages, yields the sequence of behaviors automatically.

Through experience, the child gradually comes to do what the rules say it should do, without the rules playing any role in the system. Without ever knowing them, the child learns the right responses by generalizing from one example to the next.

In reality, Rumelhart suggests, a child learns mostly by listening rather than by being taught rules.

"In practical fact, parents rarely correct their child and introduce a rule. Up until adulthood, we often don't know linguistic rules but follow them anyway."

-jns-

This story was written by Gabrielle Strobel, a science writing intern at the Stanford News Service.
