Stanford neuroscientists find that noisy neurons are critical for learning
A computer model of brain function helps explain a 20-year-old finding that the way a single noisy neuron fires in the brain can predict an animal's decisions. It turns out neurons without noise can't learn. The type of learning the group modeled reflects the way we learn to categorize food, music or favorite cafes.
Almost 20 years ago, Professor William Newsome, director of the Stanford Neurosciences Institute, stumbled on a surprising finding: Neurons in the brain have fluctuating, “noisy” signals, sometimes firing one way when faced with a certain stimulus and sometimes firing another. What’s more, how that single, somewhat variable neuron fires is then reflected in an animal’s decisions.
Now Stanford scientists have employed computer models to reveal the reason behind these “noisy neurons”: neurons that are entirely consistent can’t learn.
Postdoctoral scholar Tatiana Engel was first author on the work, published last week in Nature Communications with colleagues from Yale, New York University and the University of Chicago.
Engel said that Newsome’s original discovery was a surprise in part because the area where these noisy neurons reside – roughly above and behind the ear – is not a part of the brain normally associated with thoughtful decision-making.
“These cells are in the sensory system, so it’s not in the cortex where we would like to think the decisions are made,” said Engel, who did the work in the lab of Xiao-Jing Wang at Yale University. “This was exciting to me to realize that we are used to thinking about ourselves as agents who are in charge of our decisions and in charge of our thoughts, but the brain might be playing tricks with us.”
The original experimental studies involved animals trained to categorize dots moving in any direction on a computer screen into two categories: left and right. If the animals chose accurately they got a reward. This is a very simple way of studying the more complex categorization-learning that we experience in everyday life, like choosing foods or favorite music.
What the scientists found is that a group of neurons that detect general motion started out a little noisy: some fired slightly more strongly in response to the moving dots when the animal chose one category, and others when it chose the other.
When the animals acted on those noisy neuronal signals and got a reward, the neurons became more finely tuned. Over time, the neurons that had a noisy decision bias became category detectors, firing strongly in response to dots from one category and much less to dots from the other. Neurons that began by simply detecting motion had learned to detect leftward and rightward categories of motion.
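The sharpening described above can be sketched with a toy reward-gated Hebbian rule. This is an illustrative simplification with assumed parameters, not the published model: a small population of model neurons sees a noisy motion signal, a left/right decision is read out from their summed responses, and connections strengthen only on rewarded (correct) trials.

```python
import random

random.seed(0)
n_neurons, n_trials, lr = 20, 2000, 0.02

# Each model neuron starts with a tiny random selectivity -- the "noisy bias".
w = [random.gauss(0.0, 0.05) for _ in range(n_neurons)]

def sign(v):
    return 1.0 if v > 0 else -1.0

for _ in range(n_trials):
    s = random.choice([-1.0, 1.0])                              # true category
    x = [s + random.gauss(0.0, 1.0) for _ in range(n_neurons)]  # noisy inputs
    choice = sign(sum(wi * xi for wi, xi in zip(w, x)))         # population readout
    reward = 1.0 if choice == s else 0.0                        # reward if correct
    # Reward-gated Hebbian update: weights change only on rewarded trials,
    # strengthening whatever input-choice pairing just paid off.
    for i in range(n_neurons):
        w[i] += lr * reward * choice * x[i] / n_neurons

# After training, selectivity has grown and the population reliably
# categorizes fresh noisy stimuli.
mean_selectivity = sum(abs(wi) for wi in w) / n_neurons
correct = 0
for _ in range(500):
    s = random.choice([-1.0, 1.0])
    x = [s + random.gauss(0.0, 1.0) for _ in range(n_neurons)]
    if sign(sum(wi * xi for wi, xi in zip(w, x))) == s:
        correct += 1
accuracy = correct / 500
```

Each neuron's selectivity is modest, but pooled across the population it is enough to call the category correctly on nearly every trial.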
Engel noted that people learn to detect categories in our everyday lives. Take coffee shops. If a person in a strange city walks into an unknown coffee shop with a blue sign, neurons begin firing off decisions about whether or not to go in. Eventually, after sipping the coffee and enjoying the décor, the neurons biased toward entering the coffee shop get a reward, strengthening the connection between the blue sign and the experience of having good coffee.
“‘How good the coffee is’ is your reward signal,” Engel said. “You see the sign and you think ‘This is where I get a good coffee.’”
Over time, having wandered into many different coffee shops with a wide variety of signs, the neurons categorize all coffee shops with blue signs as part of a group that consistently provides good coffee.
Which came first?
Since its discovery, this phenomenon, which goes by the name decision-related noise, has been the source of a chicken-and-egg controversy. Is the initial bias due solely to those noisy neurons? Are we really biasing our behaviors due to a slight neuronal noise?
Or maybe there’s another explanation: Do parts of the brain more normally associated with decision-making send signals back to these sensory neurons, creating that initial bias and eventually tuning those neurons to accurately detect categories? This is referred to as top-down feedback.
“It could be that our decisions are not just influenced by sensory noise, but once the decision is made in a higher cortical area it feeds back to the sensory system,” Engel said.
A computer model of learning
Engel created a computer model that mimics the various brain regions involved in sensing the world and learning to form categories. In her model she trained those neurons to categorize moving dots into two groups, similar to the animal studies. Then, she fiddled with how those neurons behaved.
In one case, she modeled sensory neurons that had no decision bias: faced with moving dots, their firing showed no tendency to favor one category over the other. Without a bias that could link behavior to a reward or punishment, the neurons never learned to categorize how the dots were moving.
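One way to see why the unbiased neurons fail is a node-perturbation-style sketch. Again, this is a simplified illustration under assumed parameters, not Engel's published network: the only teaching signal is the correlation between a neuron's trial-to-trial fluctuation and the reward, so a neuron whose response carries no noise has nothing to learn from.

```python
import random

random.seed(1)

def train(noise_sd, n_trials=3000, lr=0.05, decay=0.001):
    """Learn one model neuron's category selectivity w by correlating its
    trial-to-trial response noise with reward (node-perturbation learning)."""
    w = 0.0
    for _ in range(n_trials):
        s = random.choice([-1.0, 1.0])       # stimulus category
        noise = random.gauss(0.0, noise_sd)  # the neuron's response noise
        r = w * s + noise                    # noisy response
        choice = 1.0 if r > 0 else -1.0      # decision follows the neuron
        reward = 1.0 if choice == s else 0.0
        # The fluctuation is the credit signal; a small decay keeps w bounded.
        w += lr * reward * noise * s - decay * w
    return w

w_noisy = train(noise_sd=1.0)   # becomes category-selective: w grows positive
w_silent = train(noise_sd=0.0)  # update term is identically zero: no learning
```

With noise, the weight settles at a positive value and the neuron fires more strongly for its preferred category; with the noise switched off, the weight never moves, mirroring the model's result that entirely consistent neurons can't learn.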
“This work says that decision-related noise is important for learning, which is unexpected,” Engel said.
The work did not resolve the question of where the bias originates: Was it noise in the sensory system that biased behavior, or was that noise created through feedback from other parts of the brain? “I think the true answer is a combination of the two,” Engel said. Noise is important, but so is the relationship with other neurons that amplify the decision.
Engel is now working with Kwabena Boahen, a professor of bioengineering, and Tirin Moore, a professor of neurobiology, to continue unraveling the role of neuronal noise in how we learn. The two, who are both Stanford Bio-X affiliates, are neighbors in the Clark Center, where Boahen has developed a computer chip that more accurately mimics brain function and Moore does experimental work on brain function. “Coming to Stanford gave me an exceptional opportunity to collaborate closely with experimentalists and to work with very exciting datasets,” Engel said.
Additional authors on the paper include Warasinee Chaisangmongkon, a graduate student at Yale, and David J. Freedman, associate professor of neurobiology at the University of Chicago.
The work was funded by the National Institutes of Health, the Swartz Foundation, the Kavli Foundation, the McKnight Endowment Fund for Neuroscience and the Alfred P. Sloan Foundation.