A Scanner Smartly

September 16, 2009
by Kara Platoni

Researchers are learning how to “read” your visual cortex

Imagine a machine that can tell, from your brain activity alone, what images you’ve just seen. It sounds like something straight out of a Philip K. Dick novel.

It actually comes straight out of Berkeley’s psychology department. In an experiment described this March in Nature, two subjects viewed 1,750 photographs while the research team used functional magnetic resonance imaging (fMRI) to measure the participants’ neural activity by tracking blood flow in the visual cortex. From the patterns that emerged, the researchers constructed a computational model to predict how the brain would react to any photograph. “We’re trying to build a quantitative relationship between the actual physical stimulus and the brain activity,” explains study co-author Kendrick Kay ’09, a psychology Ph.D. candidate who volunteered his own brain for the tests. (No small commitment, that—it involved spending hours immobilized inside a car-sized scanner.)
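The core idea of such an encoding model can be sketched in a few lines. This is an illustrative toy, not the study's actual pipeline: the real model summarized each photo with Gabor-wavelet features, whereas here the feature vectors, voxel counts, and weights are all made-up stand-ins. The shared principle is fitting one linear model per voxel that maps image features to measured activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each image is summarized by a feature vector
# (random here; the study derived features from the image itself),
# and each voxel's response is modeled as a weighted sum of features.
n_images, n_features, n_voxels = 200, 50, 30
features = rng.standard_normal((n_images, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))  # unknown in practice
responses = features @ true_weights + 0.1 * rng.standard_normal((n_images, n_voxels))

# Fit one linear "encoding model" per voxel by least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

def predict_response(image_features, weights):
    """Predict the voxel activity pattern evoked by a new image."""
    return image_features @ weights
```

Once fitted, `predict_response` gives the model's guess at the brain's reaction to any photograph, which is what makes the identification trick in the next step possible.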

Next came the step the researchers compare to a magician’s “pick a card, any card” trick. The subjects viewed another, totally different set of photos, and the program guessed which pictures they had seen. It was remarkably accurate: correct about 90 percent of the time when choosing from a set of 120 images, and 80 percent when choosing from 1,000. The researchers extrapolated that out of a billion images (Google currently has 880 million images indexed), the computational model would be correct 20 percent of the time.
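The "pick a card" step can also be sketched. Again this is a hedged toy, not the published method: the predicted patterns below are generated from random stand-in features and weights, and the matching rule (highest correlation between measured and predicted voxel patterns) is one simple choice of similarity measure. The logic, though, mirrors the trick described above: predict a brain response for every candidate image, then pick the candidate whose prediction best matches what the scanner actually recorded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoding model: predicted voxel pattern for each of 120 candidates.
n_candidates, n_features, n_voxels = 120, 50, 30
features = rng.standard_normal((n_candidates, n_features))
weights = rng.standard_normal((n_features, n_voxels))
predicted = features @ weights  # one predicted pattern per candidate image

# Simulate a measurement: the subject viewed candidate 42, plus scanner noise.
viewed = 42
measured = predicted[viewed] + 0.5 * rng.standard_normal(n_voxels)

def identify(measured, predicted):
    """Return the index of the candidate image whose predicted
    voxel pattern correlates best with the measured pattern."""
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

guess = identify(measured, predicted)
```

With modest noise the correct image wins easily; as the candidate set grows toward Google-scale, the chance that some other image's predicted pattern happens to match rises, which is why accuracy falls from 90 percent at 120 images toward the extrapolated 20 percent at a billion.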

The Berkeley study wasn’t the first of its kind, but other experiments generally used simple shapes such as grids for the visual stimuli. By contrast, the Berkeley team used grayscale photos of complex subjects: animals, people, buildings, and landscapes. Where other studies had merely categorized images the subject had seen—say, houses or tools—the Berkeley study was able to identify the precise photo that had been seen.

Although this work has been hailed as a step towards technology that could play back your visual memories, or perhaps even your dreams, Kay says that real mind-reading is probably still a century or two away. The current decoding program can’t re-draw the images people viewed; it can only select the viewed images from an assortment. More importantly, the program can only parse what is seen by the actual eye, not the mind’s eye. Thought engages more complex brain functions, such as memory, intention, and imagination, than vision does.

“Can we do this task of decoding if the subjects were just closing their eyes and imagining something?” asks Kay. It has been tried, he says, with no luck. Nevertheless, Kay believes it’s theoretically possible. “As long as we can measure the brain well enough and understand what the brain is doing, in principle we could decode anything. But those are two big ifs,” he says. “For example, the spatial resolution of fMRI is good, but it’s nowhere near capturing the activity of every single neuron in the brain.”

For now, study co-author Jack Gallant, an associate professor of psychology, says that the team merely wants to develop a more complete model of visual processing. Such a model could perhaps be used to help doctors diagnose impairments like those caused by stroke or dementia, or to evaluate how well treatments are working. Kay imagines that “brain-based drawing” might exist someday as well. “Suppose you’re a bad artist but you can visualize what you like, and it gets translated onto the page.”

The team frankly admits that a machine that can peer into the mind could be used in some very creepy ways. For now, you can’t read someone’s visual cortex unless they agree to spend serious time in a scanner, but what if future adaptations of the technology were subtle enough to be used without consent? What if one of the more plausible uses for vision recall—aiding eyewitness testimony in courtrooms or police work—simply backfires? After all, people often misinterpret what they see, and any “read-out” of their memory would be as flawed as their initial perception.

The team acknowledges that current methods of decoding brain activity are still “relatively primitive,” but these issues are only going to get more complex. “It’s OK to worry about them now even though it’s not a problem now,” says Kay. “Far in the future they will be, so it’s good to be on the lookout.”
