05/25/94

CONTACT: Stanford University News Service (650) 723-2558

New model of how brain processes visual images verified

STANFORD -- A new and improved model for how the brain initially processes visual images has been tested successfully in monkeys.

In the May 27 issue of the journal Science, Stanford psychologist David J. Heeger and Matteo Carandini from New York University report that a relatively new theory, called the normalization model, accurately predicts how the brain cells in a monkey's primary visual cortex respond when the animal views a variety of light patterns.

The primary visual cortex is a critical part of the visual pathway. It is the portal through which all visual signals pass before spreading out to other parts of the brain for further processing. Determining the manner in which images are processed in the primary visual cortex is considered a necessary step in understanding the overall process of vision.

Not only does the normalization model provide a better explanation for the response properties of neurons in the primary visual cortex, but it may also explain the appearance of certain optical illusions, Heeger said. In addition, it may have a practical application in suggesting ways to compress video images with minimal degradation.

For the last 30 years, scientists have relied upon a theory, called the linear computation model, that assumes the neurons in this part of the brain act something like independent calculators, adding and subtracting the neural signals that they each receive.
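The core of the linear model can be sketched in a few lines of Python (an illustrative sketch only; the weights and input values below are assumed for demonstration and are not figures from the study):

    # A sketch of the linear model's central idea: each neuron computes a
    # weighted sum of the signals it receives, adding some inputs and
    # subtracting others. Weights and inputs here are illustrative.
    def linear_response(weights, inputs):
        return sum(w * x for w, x in zip(weights, inputs))

    # Example: a neuron that adds the signal from the center of its
    # receptive field and subtracts the signals from the surround.
    weights = [-1.0, 2.0, -1.0]   # surround, center, surround
    inputs = [0.5, 1.0, 0.5]      # local light intensities
    print(linear_response(weights, inputs))   # prints 1.0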

The linear model was first proposed in 1962 by David Hubel and Torsten Wiesel at Harvard Medical School. The model is simple and powerful, and it has been extremely successful. As a result of their work, the two scientists were awarded the Nobel Prize in 1981.

In the last 15 years, however, scientists have reported a number of experimental results that the linear model has difficulty explaining. As a result, over the last five years Heeger and his colleagues have developed a modified version of this model that maintains much of its simplicity and power, but also successfully explains many of the observations that conflict with the original theory.

"We were left with two alternatives: dump the whole thing or try to fix it. I've been trying to fix it," Heeger said.

The modification Heeger proposes is conceptually simple but has complex ramifications. He proposes that, in addition to adding and subtracting the neural signals it receives, each neuron has its activity level partially suppressed by the pooled activity of a large number of other cortical neurons. In the calculator analogy, each neuron's response is divided by a quantity proportional to the activity of surrounding neurons. The overall effect of this division is that the response of each neuron is rescaled, or normalized, with respect to the variation in light intensities in the retinal image.
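In computational terms, the normalization step can be sketched as follows (again an illustrative Python sketch; the squared responses and the constant sigma follow common published formulations of such models, and all of the numbers are assumptions):

    # A sketch of normalization: each neuron's (squared) linear response
    # is divided by a constant plus the pooled activity of many neurons.
    def normalized_responses(linear_drives, sigma=0.1):
        pool = sum(d * d for d in linear_drives)
        return [d * d / (sigma ** 2 + pool) for d in linear_drives]

    # Doubling every input, as when image contrast doubles, barely
    # changes the normalized responses once the pool dominates sigma:
    # the responses are rescaled to stay within a limited range.
    print(normalized_responses([0.2, 0.4, 0.8]))
    print(normalized_responses([0.4, 0.8, 1.6]))

In the sketch, doubling all of the inputs leaves the normalized responses nearly unchanged, the kind of rescaling that keeps neurons within their limited operating range.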

"Neurons have a limited dynamic range, and to operate properly they must stay within this range," Heeger said. "Normalization makes this much easier to do. Because dynamic range is a general problem in the brain, it would be logical for such a normalization process to be used in other brain systems."

The eye's ability to respond to a million-fold variation in light intensity, for example, might be due to an analogous mechanism, he suggests.

The new model also provides a possible explanation for the extensive lateral connections that exist between neurons in the primary visual cortex. In the linear model, these connections served no purpose. In the normalization model, however, they are the means by which neurons communicate information about their activity state to their neighbors.

The normalization process may also explain some of the subtle visual illusions that researchers have discovered. One such illusion is created by surrounding a pattern (for example, a plaid pattern) with a patterned background. When surrounded by a high-contrast background, the bright points of the central pattern appear dimmer, and simultaneously, its dark points appear lighter than when it is surrounded by a low-contrast background.

"According to the linear model, the contrast in the surrounding background area should not affect the appearance of the central pattern because the neurons act independently. In the normalization model, on the other hand, you expect such interactions," Heeger said.

In addition to improving our understanding of how vision works, the research has some possible practical applications, Heeger said. Currently, engineers are developing ways to compress video so that it can be transmitted efficiently over computer and cable TV networks. Much of this effort involves identifying information in the images that can be deleted imperceptibly. The normalization model might be used to predict which information can be removed with a minimal amount of perceptible degradation.

-dfs-
