Memory and perception feel like completely different experiences, and neuroscientists long assumed that the brain produces them differently, too. But in the 1990s, neuroimaging studies revealed that parts of the brain thought to be active only during sensory perception are also active during memory recall.
“It started to raise the question of whether memory representation is actually different from perceptual representation at all,” said Sam Ling, associate professor of neuroscience and director of the Visual Neuroscience Laboratory at Boston University. Could it be that our memory of a beautiful forest clearing, for example, is simply a recreation of the neural activity that previously allowed us to see it?
“The argument has changed from this debate about whether there is any involvement of the sensory cortices at all to saying, ‘Oh, wait a minute, is there any difference?’” said Christopher Baker, a researcher at the National Institute of Mental Health who heads a unit on learning and plasticity. “The pendulum is swinging from one side to the other, but it’s swinging too far.”
Even if there is a very strong neurological similarity between memories and experiences, we know that they cannot be exactly the same. “People don’t confuse them,” said Serra Favila, a postdoctoral fellow at Columbia University and lead author of a recent Nature Communications study. Her team’s work has identified at least one of the ways that memories and perceptions are assembled differently at the neurological level.
Blurred spots
When we look at the world, visual information about it flows through the photoreceptors of the retina and into the visual cortex, where it is processed sequentially in different groups of neurons. Each group adds new levels of complexity to the image: simple points of light become lines and edges, then contours, then shapes, then complete scenes that embody what we see.
In the new study, the researchers focused on a feature of vision processing that is very important in early sets of neurons: where things are in space. The pixels and contours that make up an image must be in the right places, otherwise the brain will create a jumbled, unrecognizable distortion of what we see.
The researchers trained participants to remember the positions of four different patterns on a target-like background. Each pattern was placed at a very specific location on the background and associated with a color at the target’s center. Each participant was tested to make sure they remembered this information correctly – that if they saw a green dot, for example, they knew the star shape was in the leftmost position. Then, as the participants perceived and remembered the locations of the patterns, the researchers recorded their brain activity.
Brain scans allowed the researchers to map how neurons record where something is, and how they later remember it. Each neuron attends to a single spot, or “receptive field,” in your field of vision, such as the lower-left corner. The neuron “will only fire when you put something in that little spot,” Favila said. Neurons that are tuned to a particular location in space tend to cluster together, making their activity easy to detect in a brain scan.
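The idea of a receptive field can be sketched as a toy model: a unit that responds only when a stimulus lands inside its small patch of the visual field. The coordinates, field size, and all-or-nothing firing rule below are illustrative simplifications, not details from the study.

```python
# Toy model of a visually tuned neuron: it responds only when a
# stimulus falls inside its small "receptive field".

def neuron_response(stimulus_xy, field_center, field_radius):
    """Return 1.0 (fires) if the stimulus lies in the receptive field, else 0.0."""
    dx = stimulus_xy[0] - field_center[0]
    dy = stimulus_xy[1] - field_center[1]
    return 1.0 if (dx * dx + dy * dy) ** 0.5 <= field_radius else 0.0

# A neuron tuned to the lower-left corner of a 10x10 visual field:
in_spot  = neuron_response((2, 2), field_center=(2, 2), field_radius=1.5)
elsewhere = neuron_response((8, 8), field_center=(2, 2), field_radius=1.5)
print(in_spot, elsewhere)  # 1.0 0.0
```

Because neighboring neurons with similar receptive fields cluster together, a whole population of such units lights up in a scanner when something appears in their shared patch of space.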
Previous studies of visual perception have found that neurons at early, lower levels of processing have small receptive fields, and neurons at later, higher levels have larger ones. This makes sense because higher-level neurons compile signals from many lower-level neurons, extracting information across a wider portion of the visual field. But a larger receptive field also means lower spatial precision, creating an effect like putting a big blob of ink over North America on a map to indicate New Jersey. In effect, visual processing during perception is a matter of small, clear dots turning into larger, blurrier, but more meaningful blobs.
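The trade-off described above can be illustrated with a minimal pooling sketch: each higher-level unit sums the outputs of several neighboring lower-level units, so it “sees” a wider stretch of the visual field but can no longer say exactly where within that stretch the stimulus was. The one-dimensional “retina” and the pooling widths here are made-up toy values.

```python
# Sketch of how pooling enlarges receptive fields across processing levels,
# trading spatial precision for coverage. Illustrative only.
import numpy as np

def pool(responses, width):
    """Each output unit sums `width` adjacent inputs (non-overlapping)."""
    n = len(responses) // width
    return responses[: n * width].reshape(n, width).sum(axis=1)

# A stimulus at one position on a 1-D "retina" of 16 fine-grained units:
retina = np.zeros(16)
retina[5] = 1.0

level1 = pool(retina, 2)   # 8 units, each covering 2 retinal positions
level2 = pool(level1, 2)   # 4 units, each covering 4 retinal positions

# The stimulus is detected at every level, but its location is known
# only to within the responding unit's receptive field:
print(np.argmax(level1), "of", len(level1))  # prints: 2 of 8
print(np.argmax(level2), "of", len(level2))  # prints: 1 of 4
```

At each successive level the stimulus still registers, but the “blob” of activity stands for a larger and larger patch of the visual field, which is the blurring effect the map-of-New-Jersey analogy describes.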