New discoveries suggest a networked brain that loves to make and break rules, and thrives on mistakes, in search of a deeper truth.
In a San Francisco operating room, a sterilized saw circles the skull. The bone casing is removed, revealing the brain, pulsating beneath a pitiable slurry of blood. The patient is awake, with no protection for his three pounds of poorly differentiated gray matter. It is both breathtaking and absurd that these constellations of folded nerves, these gooey lumps, guide the organs, process vision, and convey the self.
Before surgeons remove the patient’s tumor, Robert T. Knight’s students place a grid of electrodes directly on the brain tissue–to conduct a language experiment. Knight, a professor of psychology and neuroscience who heads Berkeley’s Helen Wills Neuroscience Institute, later reviews a film of this hybrid research and surgery at UCSF on his laptop computer. Even after years of study, he says, watching the brain is still “the real film noir.” He admires the brute realism of the exposed brain. Even more fascinating is the strange information discerned by the electrodes.
For decades, researchers have placed electrodes on the skull, but technology’s march toward miniaturization has now enabled Knight to set his delicate crosshatch of electrodes directly on the brain. During the language experiment, the patient reacts to a spoken noun (ball) by saying a verb (throw), and the computer represents a 12-millisecond segment of the interior cosmic symphony. “We measure the blood flow while the patient talks, looking to extract word meaning,” Knight explains. The flows register in different colors on the computer screen. Enunciation of the word ball elicits first a red rush of energy in a region devoted to auditory processing, which in turn fires the nearby language center for verbs. In tune, the verb area registers a cold blue–perhaps giving energy to the noun, perhaps somehow preparing its own effort. There are flashes of yellow and then red as the patient talks. A motor area awakens briefly as the patient’s mouth shapes the word, before the auditory region registers the speech in a brief, pale-red coda.
We are what our brains make us, and we have learned that we make our brains, too. Pianists grow an enlarged motor cortex, different from that of violinists. Bird watchers recruit the brain’s “fusiform face area”–the place where you recognize Mom–to identify avians. So do car buffs esteeming, say, the Corvette, and there are stories of amputees whose facial neurons hijack those devoted to their missing limb. No other organ demands understanding, continual theorizing, and experimentation the way the brain does–you would not ask the liver to explain itself, nor the foot to contemplate running–and in this, the golden age of brain research, the rules are still being discerned.
Our brains seem to thrive on both rule-making, which forms the basis of patterns and abstraction, and the rule-breaking work of metaphor and conceptual breakthrough. Metaphors also matter in making sense of new data: Researchers employ a metaphor in describing the “hierarchy” of nerves and networks in the brain–partly to attach the new information to what we already know of our world. Metaphors relate known things to the novel and the nearly unthinkable. In a way, they are mistakes of fact–flesh is not really grass, the prophet Isaiah to the contrary–that deliver deeper truths.
The smallest and most ordinary thing, a man imagining a game of catch amid his brain’s innumerable tasks, was unseen until a few years ago. Now, it’s part of an elaborate mystery. We have entered the second generation of contemporary neuroscience. In the first generation, we mapped parts of the brain by function. Now we must understand the meaning of interactions among the parts, the way they use each other and time to function as a whole. This network of interactions, many researchers believe, is in some way responsible for our consciousness.
Earlier work concluded that the brain’s electrochemical oscillations topped out at a frequency of about 60 hertz. In fact, weak signals between 70 and 200 Hz were simply unable to pierce the skull in detectable quantities, as were certain very-low-frequency signals of less than 10 Hz. Knight and others believe that oscillations between brain areas, influenced by these lower-frequency signals, are a critical part of what the brain does. Down the hall from his office, students examine the survivor of a gang shooting with a bullet still in his brain. They use a grid of 256 electrodes, four times the old standard of 64 electrodes, revealing new subtleties of brain functioning. When technology doubles that density again, the thinking is, we will be able to image oscillations in a single column of the cortex, demonstrating the brain’s own conversation with itself. With this understanding of information flow from one point to another, scientists may be able to posit a new model for an unsupervised, networked brain.
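To make the frequency talk concrete, a recording from a single electrode can be split into the bands at issue with a standard digital filter. The sketch below is illustrative only: it uses SciPy, an invented sampling rate, and made-up data, not anything from Knight's lab.

```python
# Illustrative only: separating one electrode's trace into the frequency
# bands discussed above. The sampling rate and the data are invented;
# this is not the lab's actual pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate, in samples per second

def band_pass(trace, low_hz, high_hz, fs=FS, order=4):
    """Keep only the part of the signal between low_hz and high_hz."""
    nyquist = fs / 2.0
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, trace)

# Stand-in for one channel of a 256-electrode grid: ten seconds of noise.
trace = np.random.randn(10 * FS)

slow_waves = band_pass(trace, 0.5, 10)   # very-low-frequency oscillations (< 10 Hz)
high_gamma = band_pass(trace, 70, 200)   # the 70-200 Hz activity too weak
                                         # to detect through the skull
```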
Proving this model may take another cocksure genius synthesizer, like Einstein in physics or Watson with DNA, and it could easily take a century of work to gather enough data for our genius to make the leap. One professor at Brandeis, Eve Marder, has for over two decades studied a cluster of about 30 neurons that govern the digestive tract of the lobster. Yet even that tiny circuit’s computational dynamics remain only dimly understood.
For all we can say today about our consciousness, with its wondering and metaphor making, it may be but a byproduct of an infinitely complex system. There is already evidence that conscious choice is really an afterthought, an affirmation delivered by the “higher” parts of the brain to justify tasks already underway. But just as the latest technology upsets old models, new tools may derail the “brain as networked event” metaphor, which will then have to mutate to accommodate fresh data. In the meantime, it is likely that we will see discoveries that could, say, end paralysis. Jose M. Carmena, a professor at both Helen Wills and the engineering department, has already created a system by which a primate can move a prosthetic limb simply by thinking about moving it.
This is thought made manifest, a miracle driven by a tragic necessity. It is another pas de deux in the landscape of the brain. Our brain and our experience shape each other—perhaps we become bird watchers because of generous fusiform areas, and then choose birds over cars because of our childhoods. Our tools and our understanding are likewise intertwined. Computers are of course a crucial tool in brain study. And the lessons we learn from our tools in turn shape development of our silicon-based “thinking machines.”
Given the levels of data we now pile up in our computer networks, there is an urgency to understanding how the brain makes order from its own far larger flood of raw data. Technologists press for brain information as never before, and plenty of researchers have startup companies on the side. In the other direction, Microsoft billionaire Paul Allen has sponsored an online “brain atlas” tapped by 10,000 researchers a month. Closer to home, tech millionaire Jeff Hawkins in 2002 founded the nonprofit Redwood Neuroscience Institute (RNI), a group of more than a dozen interdisciplinary researchers who develop computational models of the brain’s underlying mechanisms. Hawkins, an engineer who earlier developed the PalmPilot, is himself busy synthesizing and extending many existing theories in an effort to build entirely new kinds of computer software.
Hawkins draws from several decades of research and posits that the brain is organized as a hierarchy of memory across many levels. Each level abstracts what the preceding level has learned, seeking patterns and making predictions. The eye transforms light to signals that, as they move up the brain’s functional levels for processing vision, are assembled and processed as representing, say, a cat. Over time the higher levels realize a general picture of what any cat looks like, from any angle. There is interplay among the abstractions and the kinds of learning, so that we casually know that a cat is not a dog, but cats and dogs are pets, although big cats, which aren’t those kinds of cats, usually aren’t pets.
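As a rough illustration of that idea (and only that: this is a toy, not Numenta's software, and it leaves out the prediction half of Hawkins's theory), each level in the sketch below memorizes the patterns arriving from the level beneath it, gives each one a compact name, and passes the names upward, so that higher levels see slower-changing, more abstract descriptions of the same input. The level names and the "pixel" stream are invented for the example.

```python
# A toy sketch of hierarchical abstraction: each level names the patterns it
# sees and passes those names upward. Not Numenta's code; all labels and
# input data here are invented for illustration.
class Level:
    def __init__(self, name):
        self.name = name
        self.memory = {}  # pattern -> label learned so far

    def abstract(self, pattern):
        """Return a short label for a pattern, learning it if it is new."""
        if pattern not in self.memory:
            self.memory[pattern] = f"{self.name}-{len(self.memory)}"
        return self.memory[pattern]

def run_hierarchy(levels, stream, window=2):
    """Push a stream of observations up through the hierarchy of levels."""
    for level in levels:
        # Group consecutive items, then let this level name each group.
        groups = [tuple(stream[i:i + window]) for i in range(0, len(stream), window)]
        stream = [level.abstract(group) for group in groups]
    return stream

levels = [Level("edge"), Level("shape"), Level("object")]
raw = ["dark", "light", "dark", "light", "curve", "line", "curve", "line"]
print(run_hierarchy(levels, raw))  # the top level's single abstract label
```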
Last November Hawkins gave a well-received keynote speech at a worldwide meeting of 30,000 neuroscientists. But he is more interested in making things. In 2005, he turned RNI over to the Helen Wills Institute (where it was redubbed the Redwood Center for Theoretical Neuroscience) so he could focus on Numenta, a company that is building software based on this theory of memory-based learning and abstraction among a hierarchy of nodes. One California firm is implementing the software on behalf of a major oil company, to look for unusual patterns of power use and environmental change on an offshore platform.
The Numenta software seeks abstractions and develops generalized rules no matter what it is examining. This is a radical departure from traditional computing, which is most successful when it is designed for specific tasks, with memory stored and fetched as needed. If Numenta works, it will be revolutionary. But Hawkins has skeptics, both in academia and the private sector. Knight says, “The physiology is correct, but it would be a good idea to find things to back up the theory.” Peter Norvig, an author of texts on artificial intelligence who now heads research at Google, has for now given up on Hawkins-style aspirations of making software structured like the brain, to focus on improved statistical models and massive number crunching.
Hawkins is unfazed. “I wouldn’t want to compare myself,” he says, “but Einstein was confident about relativity before there was proof. Watson knew the double helix was right for DNA.” Besides, the engineer is not really concerned with proving his theories of the brain. “If it turns into a successful technology, that is how we’ll make the most progress finding out what the brain really is doing.”
Such error-based success would join a rich tradition of scientific and technological advancement through incorrect theory, a tradition with a notable pedigree in the study of our own brains. René Descartes drove apart the brain and the mind and posited that mental activity was sorted out by the soul from its position on the pineal gland (which he incorrectly thought was the only unitary element in the divided brain). The early 19th-century physician Franz-Joseph Gall studied crania and correctly deduced that the brain is differentiated into regions. But he was wrong about what they did, and thought the more developed regions pushed out bumps on the skull, giving us another quack science—phrenology.
Indeed, without mistakes we might not have computers. The binary math underlying their operations was reputedly developed in light of a misreading of the I Ching when it was first imported to Europe from China. Based on his own understanding of mental processes, Alan Turing laid the foundations of computer science with an orderly series of steps—the algorithm—that most researchers now doubt bears much relation to the way our brains actually work.
It is disputable whether Turing actually intended his “universal machine” to stand for all mental activity, but that hasn’t stopped generations of artificial intelligence fans. Of late members of this group, computer science professors and sharp Internet millionaires among them, have congratulated themselves by drawing a line that traces the increase in the number of transistors fitting on a computer chip, from a few to today’s millions. They continue this line to a time some four decades hence when a chip will hold transistors equaling the number of neurons in a brain. By implication, we will have a brain.
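The arithmetic behind that line is easy to reproduce, and its assumptions are worth making explicit. The figures below (the commonly cited hundred billion neurons, “millions” of transistors as the starting point, a two-year doubling period) are rough stand-ins rather than data, which is rather the point.

```python
# A back-of-the-envelope version of the transistor-counting argument, with
# its assumptions spelled out. The numbers are rough stand-ins, not data.
import math

transistors_per_chip = 1e6   # "today's millions," as the argument starts
neurons_in_brain = 1e11      # the commonly cited ~100 billion neurons
doubling_period_years = 2    # the classic Moore's-law cadence

doublings = math.log2(neurons_in_brain / transistors_per_chip)
years = doublings * doubling_period_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
# About 16.6 doublings, or some 33 years: in the neighborhood of the
# "four decades hence" the enthusiasts cite. The count says nothing about
# whether a transistor and a neuron are comparable units at all.
```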
The idea makes neuroscientists cringe. “These guys don’t know anything about how a brain works,” says Bruno Olshausen, director of the Redwood Center and a fan of Hawkins’s theories. “They don’t account for the levels in the brain, any of the complexity beyond neural signaling.” The oscillations Knight has found, for example, might be a completely different kind of communication: a network event without simple on/off signaling.
Still, today’s technology wunderkinder, in love with their own tools, yearn for a transhuman future. At a recent lunch with a young Internet mogul who made a fortune on PayPal, I dipped sardines in blue-tinted Jurassic salt while he enthused about our highly networked future. We will disappear into the machine, and vice versa, he told me, as networked chips implanted in our brains update our memories daily, providing us with the latest news. “Who knows what will be in our brains?” he asked and then added, “They will be hacked!” Still apparently eager for this future, he left lunch early to work at hastening its arrival.
The brain delights in making rules and breaking them in search of higher truth. Silvia A. Bunge at Helen Wills studies the supposed top of the rule-making hierarchy: the anterior prefrontal cortex. “Sometimes the brain chooses to break the rules it has formed,” she says. “Knowing how it knows things is exactly our problem.” The brain’s experience, her work has shown, lays down rules and habits that are seemingly firm, but the various layers develop at vastly different rates. So if certain tendencies are caught early, perhaps they could be diverted, even steering an individual away from a life of crime. Bunge believes, for example, that a higher level of vocabulary spoken in a household can grow a better brain.
Interest in how the mind’s rules are formed and reformed has already spawned hybrid disciplines such as neurolaw (the brain and crime), neuroeconomics (why your brain chooses one thing over another), even neuromarketing (you don’t want to know). The military is said to be interested, modeling a helmet that could provide stimulation to keep soldiers awake, and possibly even prevent post-traumatic stress by erasing some memory of battle.
“We have to proceed with caution and humility,” says Bunge. “There is so much that we don’t know, and what we do know will be superseded.”
Future research, like everything else in the future, will arrive consisting mostly of the past. One of the most difficult things about contemplating the model of the oscillating, interacting, rule-making and -breaking brain is to stop thinking of Descartes’s soul, resting on the pineal gland and pulling the levers. Or, to update the metaphor in more Turing-esque terms, the challenge is to think about this computer as running without any operating system. Today’s metaphor is closer to the Internet, a vast network of nodes operating both independently and in some kind of unmanaged, collective way.
The anterior prefrontal cortex sits atop the brain’s hierarchy and contains the greatest “arborization,” or branching, of nerve dendrites. That means it is likely integrating the most information, sorting one idea from another—but it is not necessarily in charge. “It is a network of regions, but that does not mean it is its own controller,” Bunge says.
The Cartesian model will likely give way to some kind of quantum understanding of the brain, where time becomes slippery and rules bend as much as they differ. One thing is certain: the search for understanding will have its rewards. The anterior prefrontal cortex, it seems, receives a dose of pleasure-inducing dopamine from one of the brain’s more “primitive” regions when a new rule is successfully found. This moment of happiness may be a kind of aesthetic bliss that we feel when the world is seen in a more orderly way, or when a poetic truth is revealed.