Do we need complexity to understand the brain?

Throughout the book, we’ve been describing a systems view of brain function that encourages reasoning in terms of large-scale, distributed circuits. But when we adopt this stance, it becomes clear early on that “things are complicated”. Indeed, we might be asked whether we really need to entangle things that much. Shouldn’t we attempt simpler approaches and basic explanations first? After all, an important principle in science is parsimony, often discussed in terms of Occam’s razor, named after the Franciscan friar William of Ockham’s dictum that pluralitas non est ponenda sine necessitate: “plurality should not be posited without necessity.” In other words, keep it simple, or, as Einstein is often quoted, “everything should be made as simple as possible, but no simpler.”

Figure: Lorenz strange attractor (from https://i.ytimg.com/vi/aAJkLh76QnM/maxresdefault.jpg)

This idea makes sense, of course. Consider a theory T proposed to explain a set of phenomena. Now suppose that an exception to T is described, say, a new experimental observation that is inconsistent with it. While not good for proponents of T, the finding need not be the theory’s death knell. One possibility is to extend T so that it can handle the exception, thereby saving the theory from falsification. More generally, whenever T fails, one can find a way to extend it so that the data it previously could not handle now fall within its scope. As T breaks down further and further, it can be patched with a series of possibly ad hoc extensions to accommodate the additional observations. That’s clearly undesirable: at some point the theory becomes so bloated that simpler explanations are heavily favored in comparison.
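To make the trade-off concrete, here is a small illustration that the chapter itself does not propose: it is only a sketch, using a hypothetical curve-fitting exercise and the Akaike Information Criterion (AIC) as a stand-in for parsimony. The AIC charges a fixed price for every extra parameter, so a model that keeps accumulating “patches” eventually scores worse than a leaner rival, even though each patch reduces the residual error a little.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: a simple underlying law plus measurement noise.
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

def aic(degree):
    """Fit a polynomial of the given degree and return its AIC score:
    AIC = n*log(RSS/n) + 2k, where k is the number of fitted parameters.
    Lower is better; the 2k term is the price paid for extra complexity."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k

# Adding parameters keeps shrinking the residuals, but beyond the data's
# true complexity the penalty dominates and the score stops improving.
for d in range(1, 7):
    print(f"degree {d}: AIC = {aic(d):8.2f}")
```

The logic mirrors Occam’s razor: extra “plurality” is tolerated only when it buys enough explanatory power to pay for itself.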

Whereas parsimony is abundantly reasonable as a general approach, what counts as “parsimonious” isn’t exactly clear. In fact, that’s where the rubber hits the road. Take an example from physics. In 1978, the American astronomer Vera Rubin[1] noted that stars in the outskirts of galaxies were rotating too fast, contradicting what would be predicted by our theory of gravity. It was as if the mass observed in the universe was not enough to keep the galaxies in check, triggering a vigorous search for potential sources of mass not previously accounted for. Perhaps the mass of black holes had not been tallied properly? Or that of other heavy objects such as neutron stars? But when everything known was added up, the discrepancy was – and still is – huge. In fact, about five times more mass would be needed than what we have been able to put on the scale.

To explain the puzzle, physicists postulated the concept of dark matter, a type of matter previously unknown and possibly made of as-yet undiscovered subatomic particles. It would account for a whopping 85% of the matter in the universe. We see that, to solve the problem, physicists left the standard theory of gravitation (let’s call it G) unmodified but had to postulate an entirely new type of matter (call it m). That’s not a small change! In 1983, the Israeli physicist Mordechai Milgrom proposed a different solution, a relatively small change to G. In his “modified gravity” theory, gravity works as usual except when dealing with systems as vast as entire galaxies. Without getting into the details here, the change essentially involves adding a new constant, called a0, to the standard theory of gravity[2].
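For readers who like to see numbers, here is a hedged sketch of why a0 matters. In the so-called deep-MOND regime (accelerations far below a0), the predicted circular velocity around a galaxy becomes (G·M·a0)^(1/4), the same at every radius, which is just the kind of flat rotation curve Rubin observed; standard gravity alone predicts velocities that fall off with distance. The mass value below is purely illustrative.

```python
import numpy as np

# Constants in SI units
G  = 6.674e-11   # Newton's gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # Milgrom's constant a0, m s^-2 (commonly quoted value)
M  = 2e41        # enclosed baryonic mass, kg (~1e11 solar masses, illustrative)

def v_newton(r):
    """Circular velocity from standard gravity: v = sqrt(G*M/r), falls with radius."""
    return np.sqrt(G * M / r)

def v_deep_mond(r):
    """Circular velocity in the deep-MOND limit (a << a0): v = (G*M*A0)**0.25,
    identical at every radius -- a flat rotation curve."""
    return np.full_like(r, (G * M * A0) ** 0.25)

radii_kpc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # galactocentric radii
radii_m   = radii_kpc * 3.086e19                       # kiloparsecs to meters

for r, vn, vm in zip(radii_kpc, v_newton(radii_m) / 1e3, v_deep_mond(radii_m) / 1e3):
    print(f"r = {r:4.0f} kpc   Newtonian: {vn:6.1f} km/s   deep-MOND: {vm:6.1f} km/s")
```

The gap between the falling Newtonian curve and the observed flat one is precisely what the two camps fill differently: one with unseen mass, the other with a modified force law.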

So, our old friend G doesn’t work. We can either consider {G + m} or {G + a0} as potential solutions. Physicists have not been kind to the latter, considering it rather ad hoc. In contrast, they have embraced the former and devoted monumental efforts to finding new kinds of matter that can tip the scales in the right direction. At present the mystery remains unsolved, and larger and more precise instruments continue to be developed in the hope of cracking the problem. The point of this brief foray into physics was not to delve into the details of the dispute, but to illustrate that parsimony is easier said than done. What counts as frugal in theoretical terms depends very much on the intellectual mindset of a community of scientists. And, human as they are, scientists disagree.

Another example that speaks to parsimony relates to autonomic brain functions that keep the body alive – for example, regulating food and liquid intake, respiration, heart activity, and the like. The anatomical interconnectivity of this system has posed major challenges to deciphering how particular functions are implemented. One possibility, along the lines proposed in this book, is that multi-region interactions collectively determine how autonomic processes work. But consider an alternative position. In an influential review, the neuroscientist Clifford Saper[3] stated that “although network properties of a system are a convenient explanation for complex responses, they tell us little about how they actually work, and the concept tends to stifle exploration for more parsimonious explanations.” According to him, the “highly interconnected nature of the central autonomic control system has for many years served as an impediment to assigning responsibility for specific autonomic patterns” to particular groups of neurons.

It’s not a stretch to say that thinking in terms of complex systems is not entirely natural to most biologists. One could say that they view it with a non-trivial amount of suspicion, as if the approach overcomplicates things; their training emphasizes other skills, after all. Unfortunately, if left unchecked, the drive toward simple explanations can lead researchers to adopt distorted views of biological phenomena, as when proposing that a “schizophrenia gene” explains this devastating condition, or that a “social cognition brain area” allows humans, and possibly other primates, to have behavioral capabilities not seen in other animals. Fortunately, neuroscience is gradually shifting toward a more interactionist view of the brain. Whereas the earlier goal was to understand how individual regions work, current research takes to heart the challenge of deciphering how circuits work.

A key aspect of the scientific enterprise is conceptual. Scientists decide which questions are important by accepting or rejecting papers in the top journals, funding particular research projects, and so on. Many of these judgements are passed from “outside of science”, in the sense that they are not inherent to the data collected by scientists. How one studies natural phenomena is based on the accepted approaches and methods of practicing researchers. Accordingly, whether to embrace or shun complex systems is a collective viewpoint. To some, network-based explanations are too unwieldy and lacking in parsimony. In diametrical contrast, explanations heavily focused on localized circuits can be deemed oversimplistic reductionism, or simply naive.

In the end, science is driven by data, and the evidence available puts pressure on the approaches adopted. Whereas mapping the human genome initially took a decade and three billion dollars, it can now be done routinely for under a thousand dollars in less than two days, enabling unprecedented views of genetics. In the case of the brain, it’s now possible to record thousands of neurons simultaneously, opening a window into how large groups of neurons are involved in behaviors in ways that weren’t possible before. (Remember that most of what we know about neurophysiology relied on recordings of individual neurons, or very small sets of cells, at a time.) For example, in a trailblazing study, Engert and collaborators recorded single neurons across the entire brain of a small zebrafish[4]. Their goal was to record not most but all of the creature’s neurons! Perhaps one day in the not-so-distant future the same will be possible for larger animals.


[1] http://www.astronoo.com/en/articles/mond-theory.html

[2] http://www.scholarpedia.org/article/The_MOND_paradigm_of_modified_dynamics

[3] Saper, C. B. (2002, p. 460).

[4] Zebrafish during the larval stage (Ahrens et al., 2012).