Neuroscientists therefore use an approach called “dimensionality reduction” to make this visualization possible: they take data from thousands of neurons and, applying clever linear algebra techniques, describe their activities using just a few variables. This is exactly what psychologists did in the 1990s to define the five main domains of human personality: openness, agreeableness, conscientiousness, extraversion and neuroticism. They found that by simply knowing how an individual scored on these five traits, they could effectively predict how that person would answer hundreds of questions on a personality test.
But the variables extracted from the neural data cannot be expressed in a single word like “openness”. They are more like patterns: patterns of activity that span entire neuronal populations. A few of these patterns can define the axes of a plot in which each point represents a different combination of the patterns — that is, its own unique activity profile.
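The idea can be made concrete with a small simulation. The sketch below is not any particular lab’s pipeline; it is a minimal illustration, assuming simulated data, of the most common dimensionality-reduction technique, principal component analysis (PCA). A thousand “neurons” are driven by just three hidden variables, and PCA recovers a three-number description of each time point.

```python
# Minimal PCA sketch on simulated data (illustrative only): 1000 "neurons"
# whose activity is secretly driven by 3 latent variables.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples, n_latent = 1000, 500, 3

# Hidden low-dimensional "patterns" and each neuron's loading onto them.
latents = rng.normal(size=(n_samples, n_latent))
loadings = rng.normal(size=(n_latent, n_neurons))

# Observed activity: latent structure plus small per-neuron noise.
activity = latents @ loadings + 0.1 * rng.normal(size=(n_samples, n_neurons))

# PCA: center the data, then take the singular value decomposition.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

# The first 3 components capture nearly all the variance; each row of
# `scores` describes one time point with 3 numbers instead of 1000.
scores = centered @ Vt[:3].T
print(var_explained[:3].sum())
```

Because the simulated activity genuinely lives in three dimensions, the first three components account for almost all the variance; with real recordings the cutoff is a judgment call.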
There are downsides to reducing data from thousands of neurons to just a few variables. Just as taking a 2D image of a 3D cityscape renders some buildings totally invisible, cramming a complex set of neural data into just a few dimensions eliminates a lot of detail. But working in a few dimensions is much more manageable than looking at thousands of individual neurons at once. Scientists can plot evolving activity on these axes to observe how the behavior of a population changes over time. This approach has proven particularly successful in the motor cortex, a region where the confusing and unpredictable responses of single neurons have long baffled researchers. Seen collectively, however, the neurons trace regular, often circular trajectories. The characteristics of these trajectories correlate with particular aspects of movement — their location, for example, is related to movement speed.
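The rotational picture can be sketched the same way. In this hedged toy example (again assuming simulated data, not a published analysis), a population of neurons is driven by two hidden oscillators; projecting the high-dimensional activity onto its top two principal components recovers a roughly circular trajectory over time.

```python
# Toy illustration: a population driven by two latent oscillators traces a
# near-circular trajectory in the space of its top two principal components.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_steps = 800, 200
t = np.linspace(0, 2 * np.pi, n_steps)

# Two latent oscillators (a rotation) drive every neuron via random weights.
latents = np.column_stack([np.cos(t), np.sin(t)])        # (n_steps, 2)
weights = rng.normal(size=(2, n_neurons))
activity = latents @ weights + 0.05 * rng.normal(size=(n_steps, n_neurons))

# Project the centered activity onto the top two principal components.
centered = activity - activity.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ Vt[:2].T                         # (n_steps, 2)

# A circular path keeps a nearly constant distance from the origin,
# so the relative spread of the radii is small.
radii = np.linalg.norm(trajectory, axis=1)
print(radii.std() / radii.mean())
```

No single simulated neuron here looks circular on its own; the rotation only becomes visible once the whole population is projected into the low-dimensional space, which is the point the motor-cortex work makes.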
Olsen says he expects scientists to use dimensionality reduction to extract interpretable patterns from complex data. “We can’t do it neuron by neuron,” he says. “We need statistical tools, machine learning tools, that can help us find structure in big data.”
But this vein of research is still in its infancy, and scientists are struggling to agree on what the patterns and trajectories mean. “People fight all the time about whether these things are factual,” says John Krakauer, professor of neurology and neuroscience at Johns Hopkins University. “Are they real? Can they be interpreted as easily [as single-neuron responses]? They don’t feel as grounded and concrete.”
Bringing these trajectories back to earth will require the development of new analytical tools, Churchland says, a task that will surely be aided by the availability of large-scale datasets like those from the Allen Institute. And the institute, with its deep pockets and large research staff, is uniquely positioned to produce the ever-greater masses of data needed to test these tools. The institute, says Olsen, operates like an astronomical observatory: no single lab could pay for its technologies, but the entire scientific community benefits from and contributes to its experimental capabilities.
Currently, he says, the Allen Institute is working on a system through which scientists from across the research community can suggest what kinds of stimuli animals should be shown and what kinds of tasks they should perform while thousands of their neurons are recorded. As recording capacities continue to increase, researchers are striving to design richer, more realistic experimental paradigms to observe how neurons respond to the kinds of challenging real-world tasks that push their collective abilities. “If we really want to understand the brain, we can’t just show oriented bars to the cortex,” says Fusi. “We really have to move forward.”