Is the Brain a Useful Model for Artificial Intelligence?

Thinking machines think just like us—but only up to a point.

In the summer of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They'd already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. “It's a bit like going and cataloging a piece of the rain forest,” Markram explained. “How many trees does it have? What shapes are the trees?” Now his team would create a virtual rain forest in silicon, from which they hoped artificial intelligence would organically emerge. If all went well, he quipped, perhaps the simulated brain would give a follow-up TED talk, beamed in by hologram.

Markram's idea—that we might grasp the nature of biological intelligence by mimicking its forms—was rooted in a long tradition, dating back to the work of the Spanish anatomist and Nobel laureate Santiago Ramón y Cajal. In the late 19th century, Cajal undertook a microscopic study of the brain, which he compared to a forest so dense that “the trunks, branches, and leaves touch everywhere.” By sketching thousands of neurons in exquisite detail, Cajal was able to infer an astonishing amount about how they worked. He saw that they were effectively one-way input-output devices: They received electrochemical messages in treelike structures called dendrites and passed them along through slender tubes called axons, much like “the junctions of electric conductors.”

Cajal's way of looking at neurons became the lens through which scientists studied brain function. It also inspired major technological advances. In 1943, the neurophysiologist Warren McCulloch and his protégé Walter Pitts, a homeless teenage math prodigy, proposed an elegant framework for how brain cells encode complex thoughts. Each neuron, they theorized, performs a basic logical operation, combining multiple inputs into a single binary output: true or false. These operations, as simple as letters in the alphabet, could be strung together into words, sentences, paragraphs of cognition. McCulloch and Pitts' model turned out not to describe the brain very well, but it became a key part of the architecture of the first modern computer. Eventually, it evolved into the artificial neural networks now commonly employed in deep learning.
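To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style unit in Python. The weights and thresholds are illustrative choices, not values from the 1943 paper; the point is only that a single thresholded sum over binary inputs can act as a logic gate.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return True) if the weighted sum of binary inputs
    reaches the threshold; otherwise stay silent (False)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# With these (illustrative) settings the unit behaves like a logical AND:
# it fires only when both inputs are active.
print(mcculloch_pitts([1, 1], weights=[1, 1], threshold=2))  # True
print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=2))  # False

# Lowering the threshold to 1 turns the same unit into an OR gate.
print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=1))  # True
```

Strung together, units like these can compute any logical function, which is what made the model so seductive as a theory of thought.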

These networks might better be called neural-ish. Like the McCulloch-Pitts neuron, they're impressionistic portraits of what goes on in the brain. Suppose you're approached by a yellow Labrador. In order to recognize the dog, your brain must funnel raw data from your retinas through layers of specialized neurons in your cerebral cortex, which pick out the dog's visual features and assemble the final scene. A deep neural network learns to break down the world similarly. The raw data flows from a large array of neurons through several smaller sets of neurons, each pooling inputs from the previous layer in a way that adds complexity to the overall picture: The first layer finds edges and bright spots, which the next combines into textures, which the next assembles into a snout, and so on, until out pops a Labrador.
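As a rough illustration (and only that), here is a toy feedforward pass in Python. Random weights stand in for what training would learn, and the layer sizes are arbitrary; it shows only the shape of the computation described above, with each layer pooling the previous layer's outputs into fewer, more abstract features.

```python
import random

def layer(inputs, n_outputs):
    """One layer: each output unit takes a weighted sum of all
    inputs, keeping only positive evidence (a ReLU nonlinearity)."""
    outputs = []
    for _ in range(n_outputs):
        weights = [random.uniform(-1, 1) for _ in inputs]
        total = sum(w * x for w, x in zip(weights, inputs))
        outputs.append(max(0.0, total))
    return outputs

pixels = [random.random() for _ in range(64)]  # stand-in for raw retinal data
edges = layer(pixels, 16)     # first layer: edges and bright spots
textures = layer(edges, 8)    # next layer: combinations of edges
verdict = layer(textures, 1)  # final layer: one "Labrador?" score
print(verdict)
```

Real deep networks differ in almost every detail (convolutions, millions of learned weights, training by backpropagation), but this layered funneling is the shared skeleton.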

Despite these similarities, most artificial neural networks are decidedly un-brainlike, in part because they learn using mathematical tricks that would be difficult, if not impossible, for biological systems to carry out. Yet brains and AI models do have something fundamental in common: Researchers still don't understand why they work as well as they do.

What computer scientists and neuroscientists are after is a universal theory of intelligence—a set of principles that holds true both in tissue and in silicon. What they have instead is a muddle of details. Eleven years and $1.3 billion after Markram proposed his simulated brain, it has contributed no fundamental insights to the study of intelligence.

Part of the problem is something the writer Lewis Carroll put his finger on more than a century ago. Carroll imagined a nation so obsessed with cartographic detail that it kept expanding the scale of its maps—6 yards to the mile, 100 yards to the mile, and finally a mile to the mile. A map the size of an entire country is impressive, certainly, but what does it teach you? Even if neuroscientists can re-create intelligence by faithfully simulating every molecule in the brain, they won't have found the underlying principles of cognition. As the physicist Richard Feynman famously asserted, “What I cannot create, I do not understand.” To which Markram and his fellow cartographers might add: “And what I can create, I do not necessarily understand.”

It's possible that AI models don't need to mimic the brain at all. Airplanes fly despite bearing little resemblance to birds. Yet it seems likely that the fastest way to understand intelligence is to learn principles from biology. This doesn't stop at the brain: Evolution's blind design has struck on brilliant solutions across the whole of nature. Our greatest minds are currently hard at work against the dim almost-intelligence of a virus, its genius borrowed from the reproductive machinery of our cells like the moon borrows light from the sun. Still, it's crucial to remember, as we catalog the details of how intelligence is implemented in the brain, that we're describing the emperor's clothes in the absence of the emperor. We promise ourselves, however, that we'll know him when we see him—no matter what he's wearing.


KELLY CLANCY (@kellybclancy) is a neuroscientist at University College London and DeepMind. She wrote about fatal familial insomnia, a rare disease, in issue 27.02.

This article appears in the June issue.


