Causation in the brain: Do complex systems play a role?

Nowhere else is the challenge of embracing complex systems greater than when confronting the problem of causation. “What causes what” is the central problem in science, at the very core of the scientific enterprise.

Diagrams of causal frameworks. (A) Simple billiard ball scheme of causation. (B) The two balls are connected by a spring, and the goal of explanation is not to clarify where ball 2 ends up. Instead, when the initial force is applied to ball 1, the goal is to understand how the ball 1-ball 2 system evolves temporally. (C) More generally, a series of springs with different coupling properties links the multiple elements of the system, which will exhibit considerably more complex dynamics.
From Pessoa, L. (2017). Cognitive-motivational interactions: Beyond boxes-and-arrows models of the mind-brain. Motivation Science, 3(3), 287.

One of the missions of neuroscience is to uncover the nature of signals in different parts of the brain, and ultimately what causes them. A prevalent type of reasoning is what I’ve called the billiard ball model of causation[1]. In this Newtonian scheme, force applied to a ball sets it moving across the table until it hits the target ball. The reason the target ball moves is obvious: the first ball hits it and imparts force to it. Translated into neural jargon, we can rephrase it as follows: a signal external to a brain region excites neurons, which excite or inhibit neurons in another brain region via the anatomical pathways connecting them. But this way of thinking, which has been very productive in the history of science, is too impoverished when complex systems – the brain for one – are considered.

We can highlight two properties of the brain that immediately pose problems for standard, Newtonian causation[2]. First, anatomical connections are frequently bidirectional, so physiological influences go both ways, from A to B and back. If one element causally influences another while the second simultaneously causally influences the first, the basic concept breaks down. Situations like this have prompted philosophers to invoke the idea of “mutual causality”[3]. For example, consider two boards arranged in a Λ shape with their tops leaning against each other, so that each board holds the other one up. Second, convergence of anatomical projections implies that multiple regions concurrently influence a single receiving node, making the attribution of unitary causal influences problematic.
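To make the mutual-causality point concrete, here is a minimal sketch (my own toy example with made-up parameters, not a model of any neural data): two masses coupled by a spring, where each continuously influences the other. Asking which mass causes the motion of the other has no clean answer; what is well defined is how the joint state evolves.

```python
# Toy "mutual causality": two masses coupled by a spring, so each one
# continuously influences the other. All parameters are illustrative.
import numpy as np

k, m, dt = 1.0, 1.0, 0.01          # spring constant, mass, time step
x = np.array([0.0, 1.5])           # positions; rest length of spring = 1.0
v = np.array([0.5, 0.0])           # ball 1 is given an initial push

for _ in range(1000):
    stretch = (x[1] - x[0]) - 1.0  # deviation from the rest length
    f = k * stretch                # spring force coupling the two masses
    a = np.array([f / m, -f / m])  # equal and opposite accelerations
    v += a * dt                    # simple Euler updates of the joint state
    x += v * dt

print(x, v)  # the object of explanation: the evolving (x1, x2, v1, v2) state
```

The push applied to ball 1 does not simply “cause” ball 2’s position; from the first time step onward, each ball’s motion depends on the other’s, and only the trajectory of the coupled system is meaningful.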

If the two properties above already present problems, what are we to make of the extensive cortical-subcortical anatomical connectional systems and, indeed, the massive combinatorial anatomical connectivity discussed in Chapter 9? If, as proposed, the brain basis of behavior involves distributed, large-scale cortical-subcortical networks, new ways of thinking about causation are called for. The upshot is that Newtonian causality is an extremely poor candidate for explanation in non-isolable systems like the brain.

What are other ways of thinking about systems? To move away from individual entities (like billiard balls), we can consider the temporal evolution of “multi-particle systems”, such as the motion of celestial bodies in a gravitational field. Physicists and mathematicians have studied this problem, central to Newtonian physics, for centuries. For example, what types of trajectories do two bodies, such as the earth and the sun, exhibit? This so-called two-body problem was solved by Johann Bernoulli in 1734. But what if we’re interested in three bodies, say, adding the moon to the mix? The answer will surprise readers who think the problem “should be easy”. On the contrary, it has vexed mathematicians for centuries and in fact cannot be solved, at least not in the sense that the two-body problem can: it does not admit a general closed-form solution.

So, what can be done? Instead of analytically solving the problem, one can employ the laws of motion based on gravity and use computer simulations to determine future paths[4]. For example, if we know the position of three planets at a given time, we can try to determine their positions in the near future by applying equations that explicitly calculate all intermediate positions. In the case of the brain, where we don’t have comparable equations, we can’t do the same. But we can extract a useful lesson, and think of the joint state of multiple parts of the brain at a given time. How does this state, which can be summarized by the activity level of brain regions, change with time?
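For readers curious about what “applying the equations to calculate intermediate positions” looks like in practice, here is a minimal sketch (toy masses, positions, and units of my own choosing, not real astronomical values): it steps a planar three-body system forward under Newtonian gravity with a standard leapfrog integrator, producing the trajectory numerically rather than from a closed-form solution.

```python
# Numerically integrating a planar three-body system under Newtonian gravity.
# No general closed-form solution exists, so we march forward in small steps.
import numpy as np

G = 1.0                                                  # toy units
mass = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])    # initial positions
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])    # initial velocities

def accelerations(pos):
    """Gravitational acceleration on each body from the other two."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001
for _ in range(10_000):                                  # leapfrog steps
    vel = vel + 0.5 * dt * accelerations(pos)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos)

print(pos)   # positions a short time later; the "answer" is the trajectory
```

Nothing in the code solves anything in the analytical sense; it simply evolves the joint state of the system, which is exactly the lesson carried over to thinking about the joint state of brain regions.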

Before describing some of these ideas further, I’ll propose one more reason thinking in terms of dynamics is useful. For that, we need to go back in time a little.


[1] Pessoa (2017, 2018). 2017: Motivation Science; 2018: CONB.

[2] Mannino and Bressler (2015).

[3] Hausman (1984); Frankel (1986).

[4] Computational investigations in the past years have revealed a large number of families of periodic orbits (Šuvakov and Dmitrašinović, 2013).

Do we need complexity to understand the brain?

Throughout the book, we’ve been describing a systems view of brain function that encourages reasoning in terms of large-scale, distributed circuits. But when we adopt this stance, pretty early on it becomes clear that “things are complicated”. Indeed, we might be asked whether we really need to complicate things that much. Shouldn’t we attempt simpler approaches and basic explanations first? After all, an important principle in science is parsimony, discussed in reference to what’s called Occam’s razor after the Franciscan friar William of Ockham’s dictum that pluralitas non est ponenda sine necessitate: “plurality should not be posited without necessity.” In other words, keep it simple, or as Einstein is often quoted, “everything should be made as simple as possible, but no simpler.”

Lorenz strange attractor (from https://i.ytimg.com/vi/aAJkLh76QnM/maxresdefault.jpg)

This idea makes sense, of course. Consider a theory T proposed to explain a set of phenomena. Now suppose that an exception to T is described, say, a new experimental observation that is inconsistent with it. While not good for proponents of T, the finding need not be the theory’s death knell. One possibility is to extend T so that it can handle the exception, thereby preventing the theory from being falsified. More generally, whenever T fails, one can find a way to extend it so that the data it previously failed to handle are now covered. As T breaks down further and further, it could be gradually patched to explain the additional observations with a series of possibly ad hoc extensions. That’s clearly undesirable. At some point, the theory in question is so bloated that simpler explanations would be heavily favored in comparison.

Whereas parsimony is abundantly reasonable as a general approach, what counts as “parsimonious” isn’t exactly clear. In fact, that’s where the rubber hits the road. Take an example from physics. In 1978, the American astronomer Vera Rubin[1] noted that stars in the outskirts of galaxies were rotating too fast, contradicting the predictions of our theory of gravity. It was as if the mass observed in the universe was not enough to keep the galaxies in check, triggering a vigorous search for potential sources of mass not previously accounted for. Perhaps the mass of black holes had not been tallied properly? Or that of other heavy objects such as neutron stars? But when everything known was added up, the discrepancy was – and still is – huge. In fact, five times more mass would be needed than what we have been able to put on the scale.

To explain the puzzle, physicists postulated the concept of dark matter, a type of matter previously unknown and possibly made of as-yet undiscovered subatomic particles. It would account for a whopping 85% of the mass of the universe. We see that, to solve the problem, physicists left the standard theory of gravitation (let’s call it G) unmodified but had to postulate an entirely new type of matter (call it m). That’s not a small change! In 1983, the Israeli physicist Mordehai Milgrom proposed a different solution, a relatively small change to G. In his “modified gravity” theory, gravity works as usual except for systems as massive as entire galaxies. Without getting into the details here, the change essentially involved adding a new constant, called a0, to the standard theory of gravity[2].
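To get a feel for what the constant does, here is a back-of-the-envelope sketch (order-of-magnitude toy numbers; the deep-MOND relation v^4 = G M a0, which yields flat rotation curves, is the standard result): Newtonian gravity predicts orbital speeds that keep falling with distance, whereas the modified theory predicts speeds that flatten out, as Rubin observed.

```python
# Newtonian vs. deep-MOND rotation speeds around an enclosed mass M.
# Toy, order-of-magnitude numbers; not a fit to any real galaxy.
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1e41           # enclosed galactic mass, kg (order of magnitude)
a0 = 1.2e-10       # Milgrom's constant, m/s^2

for r in [1e20, 5e20, 1e21]:             # radii in meters (a few to ~30 kpc)
    v_newton = np.sqrt(G * M / r)        # Keplerian falloff, v ~ 1/sqrt(r)
    v_mond = (G * M * a0) ** 0.25        # deep-MOND regime: flat, r-independent
    print(f"r = {r:.0e} m   Newton: {v_newton/1e3:5.0f} km/s   "
          f"MOND: {v_mond/1e3:5.0f} km/s")
```

The Newtonian column keeps dropping as the radius grows; the MOND column stays put, which is the behavior the rotation-curve data demanded.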

So, our old friend G doesn’t work. We can either consider {G + m} or {G + a0} as potential solutions. Physicists have not been kind to the latter, considering it rather ad hoc. In contrast, they have embraced the former and devoted monumental efforts to finding new kinds of matter that can tip the scales in the right direction. At present the mystery is unsolved, and larger and more precise instruments continue to be developed in the hope of cracking the problem. The point of this brief foray into physics was not to delve into the details of the dispute, but to illustrate that parsimony is easier said than done. What is considered frugal in theoretical terms depends very much on the intellectual mindset of a community of scientists. And, human as they are, they disagree.

Another example that speaks to parsimony relates to autonomic brain functions that keep the body alive – for example, regulating food and liquid intake, respiration, heart activity, and the like. The anatomical interconnectivity of this system has posed major challenges to deciphering how particular functions are implemented. One possibility, along the lines proposed in this book, is that multi-region interactions collectively determine how autonomic processes work. But consider an alternative position. In an influential review, Clifford Saper[3] stated that “although network properties of a system are a convenient explanation for complex responses, they tell us little about how they actually work, and the concept tends to stifle exploration for more parsimonious explanations.” According to him, the “highly interconnected nature of the central autonomic control system has for many years served as an impediment to assigning responsibility for specific autonomic patterns” to particular groups of neurons.

It’s not a stretch to say that thinking in terms of complex systems is not entirely natural to most biologists. One could say that they in fact view it with a nontrivial amount of suspicion, as if the approach overcomplicates things. Their training emphasizes other skills, after all. Unfortunately, if left unchecked, the drive toward simple explanations can lead researchers to adopt distorted views of biological phenomena, as when proposing that a “schizophrenia gene” explains this devastating condition, or that a “social cognition brain area” allows humans, and possibly other primates, to have behavioral capabilities not seen in other animals. Fortunately, neuroscience is gradually changing to reflect a more interactionist view of the brain. Where the earlier goal was to study how regions work, current research takes to heart the challenge of deciphering how circuits work.

A key aspect of the scientific enterprise is conceptual. Scientists decide the important questions that should be studied by accepting or rejecting papers in the top journals, funding particular research projects, and so on. Many of these judgments are passed from “outside of science”, in the sense that they are not inherent to the data collected by scientists. How one studies natural phenomena is based on the accepted approaches and methods of practicing researchers. Accordingly, whether to embrace or shun complex systems is a collective stance. To some, network-based explanations are too unwieldy and lacking in parsimony. In diametrical contrast, explanations heavily focused on localized circuits can be deemed oversimplistic reductionism, or plainly naive.

In the end, science is driven by data, and the evidence available puts pressure on the approaches adopted. Whereas mapping the human genome initially took a decade and three billion dollars, it can now be done routinely for under a thousand dollars in less than two days, enabling unprecedented views of genetics. In the case of the brain, it’s now possible to record thousands of neurons simultaneously, opening a window into how large groups of neurons are involved in behaviors in ways that weren’t possible before. (Remember that most of what we know about neurophysiology relied on recordings of individual neurons or very small sets of cells at a time.) For example, in a trailblazing study, Engert and collaborators recorded single neurons across the entire brain of a small zebrafish[4]. Their goal was to record not most but all of the creature’s neurons! Perhaps one day in the not-so-distant future the same will be possible for larger animals.


[1] http://www.astronoo.com/en/articles/mond-theory.html

[2] http://www.scholarpedia.org/article/The_MOND_paradigm_of_modified_dynamics

[3] Saper, C. B. (2002, p. 460).

[4] Zebrafish during the larval stage (Ahrens et al., 2012).

Measuring all neurons in the brain

Contemplate a future device that registers in minute detail the behaviors of a cheetah and a gazelle during a chase, including all muscle and skeletal movements. Suppose that, at the same time, we’re capable of recording billions of neurons across the two nervous systems while the entire chase unfolds, from before the cheetah initiates the pursuit until its dramatic conclusion.


What would we discover? How much of our textbooks would have to be altered?

A radical rethinking might be needed, and a lot would have to be rewritten. An alternative possibility is that many of the experimental paradigms employed to date are quite effective in isolating critical mechanisms that reflect the brain’s functioning in general settings. True, novel findings would be made with new devices and techniques, but they would extend current neuroscience by building naturally upon current knowledge. The first scenario is not idle speculation, however.

So-called naturalistic experimental paradigms are starting to paint a different picture of amygdala function, for example. In one study, a rat was placed at one end of an elongated enclosure and a piece of food was placed midway between the rat and a potential predator, a Lego-plus-motor device called a “Robogator.” To successfully obtain the food pellet, the rat had to retrieve it before being caught by the Robogator (don’t worry, capture didn’t occur in practice). The findings[1] were inconsistent with the standard “threat-coding” model, according to which amygdala responses reflect fear-like or related defensive states. During foraging (when approaching the pellet), neurons reduced their firing rate and were nearly silent near the predator. Clearly, responses did not reflect threat per se.

From: Choi, J. S., & Kim, J. J. (2010). Amygdala regulates risk of predation in rats foraging in a dynamic fear environment. Proceedings of the National Academy of Sciences, 107(50), 21773-21777.

Another study recorded from neurons in the amygdala over multiple days, as mice were exposed to different conditions[2]. Mice were placed in a small open field and were free to explore it. These creatures don’t like to feel exposed, so in the experiment they frequently stayed at the corners of the box for a good amount of time. But they also ventured out in the open for periods and navigated around the center of the box with some frequency. The researchers discovered two groups of cells: one that was engaged when the mouse was being more defensive in the corners (these “corner” cells fired more vigorously at those locations), and another when the mouse was in an exploratory mode visiting the center of the space (“center” cells fired more strongly when the animal was around the middle of the field). The researchers also managed to record from the exact same cells during more standard paradigms, including fear conditioning and extinction. They then tested the idea that the firing of amygdala neurons tracks “global anxiety”; if so, cells should increase their responses when the animal entered the center of the field in the open-field condition, as well as when it heard the CS tone used in the conditioning part of the experiment. Surprisingly, cells did not respond in this way. Instead, neuronal firing reflected moment-to-moment changes in the exploratory state of the animal, such as during the time window when the animal transitioned from exploratory (for example, navigating the open field) to non-exploratory behaviors (for example, starting to freeze).

The above two examples provide tantalizing inklings that there’s a lot to discover – and revise – about the brain. It’s too early to tell, but given the technological advances neuroscience is witnessing, examples are popping up all over the place. For instance, a study by Karl Deisseroth and colleagues[3] recorded the activity of ~24,000 neurons across 34 brain regions (cortical and subcortical). Whereas measuring electrical activity with implanted electrodes typically captures a few cells at a time, or perhaps ~100 with state-of-the-art electrode grids, the study capitalized on newer techniques that record calcium fluorescence instead. When cells change their activity, including when they spike, they rely on calcium-dependent mechanisms. In genetically engineered mice, neurons literally glow based on their calcium concentration, and with specialized microscopes it is possible to detect neuronal signaling across small patches of gray matter. When mice smelled a “go” stimulus, a licking response produced water as a reward. The animals were highly motivated to perform this simple task, as the experimenters kept them in a water-restricted state. Water-predicting sensory stimuli (the “go” odor) elicited activity that rapidly spread throughout the brain of thirsty animals. The wave of activity began in olfactory regions and was disseminated within ~300 ms to neurons in every one of the 34 regions they recorded from! Such propagation of information triggered by the “go” stimulus was not detected in animals allowed to freely consume water. Thus, the initial water-predicting stimulus initiates a cascade of firing throughout the brain only when the animal is in the right state – thirsty.

In another breakthrough study, Kenneth Harris, Matteo Carandini and colleagues[4] used calcium imaging techniques to record from more than 10,000 neurons in the visual cortex of the mouse. At the same time, facial movements were recorded in minute detail. They found that information in visual cortex neurons reflects more than a dozen features of motor information (related to facial movements, including the whiskers and other facial features), in line with emerging evidence. These results are remarkable because traditional thinking holds that motor and visual signals are only merged later, in so-called “higher-order” cortical areas; definitely not in primary visual cortex. But the surprises didn’t stop there. The researchers also recorded signals across the forebrain, including other cortical areas as well as subcortical regions. Surprisingly, information about the animal’s behavior (at least as conveyed by motor actions visible on the mouse’s face) was observed nearly everywhere they recorded. In considering the benefit of this ubiquitous mixing of sensory and motor information, the investigators ventured that effective behaviors depend on the combination of sensory data, ongoing motor actions, and internal variables such as motivational drives. This seems to be happening pretty much everywhere in the brain, including in primary sensory cortex. The examples above hint that much is set to change in neuroscience in the coming decades. And these results come from fairly constrained settings: the amygdala study used a 40 x 40 x 40 cm plastic box; the thirst study probed mice with their heads fixed in place; and the facial movement study employed an “air-floating ball” that allowed mice to “run.” Imagine what we’ll discover in the future.


[1] Recordings in the basolateral amygdala. Amir, A., Kyriazi, P., Lee, S. C., Headley, D. B., & Pare, D. (2019). Basolateral amygdala neurons are activated during threat expectation. Journal of Neurophysiology, 121(5), 1761-1777.

[2] Recordings in the basal amygdala: Gründemann, J., Bitterman, Y., Lu, T., Krabbe, S., Grewe, B. F., Schnitzer, M. J., & Lüthi, A. (2019). Amygdala ensembles encode behavioral states. Science, 364(6437), eaav8736.

[3] Allen, W. E., Chen, M. Z., Pichamoorthy, N., Tien, R. H., Pachitariu, M., Luo, L., & Deisseroth, K. (2019). Thirst regulates motivated behavior through modulation of brainwide neural population dynamics. Science, 364(6437), 253-253.

[4] Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. Science, 364(6437), 255-255.

Fitting behavior inside a 40x40x40 cm box

The central question in neuroscience is to understand the physical basis of behavior. But what kinds of behavior can be studied in a lab? Mice and rats can be placed in chambers and mazes to perform tasks, and one can then study the effects of lesions on behavior. But if cell recordings are performed, the constraints are much more severe. Until just a few years ago, this required a fair amount of cabling to link the brain to signal amplifiers and other electronics. Experiments in primates are performed in a “monkey chair” that keeps the animal’s body and head in place. Humans, of course, are studied inside MRI tubes that are anything but natural. With the technology available, getting closer to natural behaviors has simply not been possible.

From: https://levelskip.com/how-to/Skinners-Box-and-Video-Games

A type of behavior that fits inside a 40x40x40 cm box is classical conditioning. Indeed, it has been extensively studied by psychologists since the early twentieth century, and for those interested in the biological mechanisms of fear, the paradigm has been a godsend. It has offered a window into this process while allowing careful control over experimental variables, a fundamental consideration in experimental science. Thanks to the paradigm, the neuroscience of fear has been one of the most active areas of inquiry.

The fixation with this paradigm has also incurred nontrivial costs, leading to a type of tunnel vision[1]. As Denis Paré and Gregory Quirk, very prominent “fear” researchers themselves, put it:

When a rat is presented with only one threatening stimulus in a testing box that allows for a single reflexive behavioral response, one is bound to find exactly what the experimental situation allows: neuronal responses that appear tightly linked to the CS and seem to obligatorily elicit the conditioned behavior. Paré, D., & Quirk, G. J. (2017, p. 6)

The very success of the approach has led to shortsightedness.

Placed inside a small, enclosed chamber, the animal is limited to a sole response: upon detecting the CS, it ceases all overt behavior and freezes in place. It can’t consider other options, such as dashing to a corner to escape; nor can it try to attack the source of threat, as there isn’t another animal around – the shock comes out of nowhere! Now, when researchers study the rat’s brain under such conditions, a close relationship between brain and behavior is established. But as Paré and Quirk warn, the tight link might be only apparent, insofar as it would not hold under more general conditions. Neuroscience is experiencing a methodological renaissance. Advances in chemistry and genetics allow regions and circuits to be targeted with a precision that would have sounded like science fiction a decade ago. But if we continue using the paradigms that have been the mainstay of the field, we will be cornering ourselves into a scientific cul-de-sac[2]. It’s time to think outside the box.


[1] Text here builds directly from Paré, D., & Quirk, G. J. (2017). When scientific paradigms lead to tunnel vision: lessons from the study of fear. npj Science of Learning, 2(1), 6.

[2] “Cul-de-sac” expression inspired by Kim and Jung (2018).

Evolution and the brain: what is novel?

The geneticist Theodosius Dobzhansky famously stated that nothing in biology makes sense except in the light of evolution. The same applies to neuroscience, a biological science. But evolution poses a conundrum. Vertebrates have been evolving for over 500 million years[1]. A telencephalon, a midbrain, and a hindbrain are part of the general plan of their nervous system. Structures like the amygdala and the striatum are found in animals as diverse as a salmon, a crow, and a baboon. Thus, many parts of the brain are “conserved”. But, then, what is novel? Something must be new, after all.

From Pessoa, L., Medina, L., Hof, P. R., & Desfilis, E. (2019). Neural architecture of the vertebrate brain: Implications for the interaction between emotion and cognition. Neuroscience & Biobehavioral Reviews, 107, 296-312.

In Chapter 9, we described how homology refers to relationships between traits that are shared as a result of common ancestry. The leaves of plants provide a good example[2]. The leaves of a pitcher plant, Venus flytrap, poinsettia, and cactus look nothing alike, and in fact have distinct functions. In the pitcher plant, the leaves are modified into pitchers to catch insects; in the Venus flytrap, they are modified into jaws that trap insects; in the poinsettia, bright red leaves resemble flower petals and attract pollinators; the cactus’s leaves have become modified into spines, which reduce water loss and protect the plant from herbivores. Yet the four are homologous, given that they derive from a common ancestor.

A structure adopts new functions during evolution, while its ancestry can be traced to something more fundamental[3]. Take the hippocampus of rodents, monkeys, and humans. There is copious evidence indicating that the area is homologous in the three species, that is, that it is a conserved structure. But does this mean that it performs the same function(s) in these species? Does it perform some qualitatively different function(s) in humans, for example? To many neuroscientists this sounds implausible. However, the possibility need not be any more radical than saying that the forelimb does something qualitatively different in birds compared to, say, turtles. If common ancestry precluded new functions, no species could ever take flight!

The ongoing discussion is particularly pertinent when we think of emotion and motivation, because researchers invoke “old” structures when studying these mental phenomena: regions like the amygdala at the base of the forebrain and the periaqueductal gray in the midbrain in the case of emotion; the accumbens (part of the striatum), also at the base of the forebrain, and the ventral tegmental area in the midbrain in the case of motivation. Because these regions are deeply conserved across vertebrates, they function in a similar way, or so the reasoning goes. If we consider these areas in rodents, monkeys, and humans, evolutionarily closer as these species are, the expectation would be that the regions work in largely the same manner. But rodents and primates diverged more than 70 million years ago. Are we to suppose that no qualitative differences have emerged? That seems rather implausible. (In Chapter 9, we briefly reviewed some of the differences in the amygdala of rats, monkeys, and humans.)

The argument made in this book is that we should conceptualize evolution in terms of the reorganization of larger-scale connectional systems. Instead of more cortex sitting atop the subcortex in primates relative to rodents – which presumably allows the “rational” cortex to control the “irrational” subcortex – more varied modes of interaction are possible, supporting greater mental latitude.

The brain doesn’t fossilize. With time, it disintegrates, leaving no trace, so we simply have no way of knowing exactly what the brain of a common ancestor looked like. Without fossil remains, scientists tend to think of the brain of the common ancestor of rodents and primates as something like the brain of a present-day mouse, this animal being the “most primitive” one. But a mouse encountered today has had 70 million years to evolve from the ancestor in question, and thus to specialize to the particular niches it now inhabits.

Evolution is as much about what’s preserved as about what’s new. Ever since science was transformed by the independent work of Charles Darwin and Alfred Russel Wallace in the late 1850s, biologists have sought to determine “uniquely human” characteristics. This has led to a near-obsession with identifying one-of-a-kind nervous system features, from putative exclusively human brain regions to cell types. The cortex, in particular, has attracted much attention. We described in Chapter 9 how much of the pallium of mammals is structured in a layered fashion, a quality that is not observed in other vertebrates. Well, not exactly: some reptiles (such as turtles) have a dorsal pallium that is cortex-like, with three bands of cells. Mammals, however, have parts of the cortex that are much more finely layered, with six well-defined zones. In fact, six-layered cortex is often referred to as “neocortex”, the “neo” part highlighting its sui generis property (in the book, the more neutral terminology “isocortex” was adopted for this type of cortex).

I offer that the concept of circuit reorganization is a much more promising idea. That is to say, what is unique about humans is the same as what is unique about mice, or any other species: their circuits are wired in ways that support the survival of the species. This is not to deny that some more punctate differences play a role. But whatever the differences are, at least among primates with larger body sizes, they are not staring us in the face – they are subtle. For example, all primates exhibit an isocortex that is massively expanded[4]. Primates also have prefrontal cortices with multiple parts, including the lateral component, which neuroscientists often link to “rational” capabilities. More generally, direct evidence for human-specific cortical areas is scant[5].

Let’s go back to Dobzhansky’s call to consider biology in light of evolution – always. Biologists would vehemently agree. But evolution is so egregiously complex that the suggestion doesn’t help as much as one would think. Indeed, what we observe in practice is that neuroscientists who don’t specialize in studying brain evolution are time and again cavalier, if not outright naïve, about how they think of and apply evolution. When that happens, explanations run the risk of becoming just-so stories[6].


[1] For a framework on vertebrate evolution, see Pessoa, L., Medina, L., Hof, P. R., & Desfilis, E. (2019). Neural architecture of the vertebrate brain: Implications for the interaction between emotion and cognition. Neuroscience & Biobehavioral Reviews, 107, 296-312.

[2] https://evolution.berkeley.edu/evolibrary/article/0_0_0/lines_04

[3] Sentence closely borrowed from Murray et al. (2016): “a structure adopts new functions during evolution, yet its ancestry can be traced to something more fundamental”. Discussion of the hippocampus until end of the paragraph also from them.

[4] Striedter (2005).

[5] Striedter (2005).

[6] The Wikipedia page on just-so-stories is actually pretty decent: https://en.wikipedia.org/wiki/Just-so_story.

P values and scientific communication: a (very) small step

Nearly everyone recognizes the shortcomings of p values and the associated null hypothesis significance testing framework. Of course, much has been written about it, including potential alternatives. While change is needed, it is hard — and it will take time.

At present, I would advocate that the scientific community adopt the suggestion by John Carlin to remove the s-word:

  • “Eliminate the term ‘statistical significance’ from the scientific discourse”

In the past few years, I have been doing something similar, but I think a stronger stance is needed by all of us (see also Carlin’s suggestions). In research from my lab, we increasingly use expressions such as “an interaction effect was detected (F value, p value)”.

Just as important, I suggest avoiding (like the plague) sentences saying that “there was no difference between conditions 1 and 2”, not to mention statements to the effect that conditions 1 and 2 are equivalent. A much better way of expressing this is to say that “differences between conditions 1 and 2 were not detected (t value, p value)”.
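As an illustration of the phrasing in practice, here is a minimal sketch with simulated data (the numbers are made up; scipy’s standard ttest_ind does the work). The point is the wording of the report, which commits only to what was detected, never to the absence of a difference.

```python
# Reporting a comparison without the s-word and without claiming "no difference".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cond1 = rng.normal(10.0, 2.0, size=30)   # simulated scores, condition 1
cond2 = rng.normal(10.0, 2.0, size=30)   # simulated scores, condition 2

t, p = stats.ttest_ind(cond1, cond2)     # two-sample t-test, df = 58
# Phrasing appropriate here given the simulated null effect:
print(f"Differences between conditions 1 and 2 were not detected "
      f"(t(58) = {t:.2f}, p = {p:.2f}).")
```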

This is a tiny step. By itself it won’t do much, but in conjunction with educating a new generation of researchers in alternative methods, it will help start changing scientific reporting, and hopefully improve science.

Brain structure and its origins

I just finished reading most of this wonderful new book by Gerald Schneider from MIT (who was already describing two visual systems in 1969): Brain Structure and Its Origins: in Development and in Evolution of Behavior and the Mind.

Comparative neuroscience is seldom touched on in standard textbooks, which is really unfortunate because so much of how we think about the brain has to do with “folk neuroscience” ways of thinking. This can hardly be ignored when we start studying emotion, motivation, reward, and all those good things that gained so much traction in neuroscience in the past two decades and are now mainstream.

No one’s work, or book, is perfect, of course. My main problem with the book (perhaps not surprisingly) is its treatment of the “limbic system”. Although grounded in comparative neuroanatomy, and thus much better than other accounts of the purported system, the treatment is still problematic for many reasons. The presentation is miles better than what would be found in a medical neuroscience textbook, but discussing an “emotion circuit” of Papez is just unfortunate.

But otherwise, what a great book. I wish I had learned about the brain from a book like this. Where was it all along?!

Reward prediction error and fMRI: do they go together?

New paper: Bissonette GB, Gentry RN, Padmala S, Pessoa L, Roesch MR. Impact of appetitive and aversive outcomes on brain responses: linking the animal and human literatures. Front Syst Neurosci. 2014 Mar 4;8:24. eCollection 2014.

With Matt Roesch and folks in his lab, we have a new paper reviewing the animal and human literatures on appetitive and aversive processing. But what I want to discuss here is an issue that became apparent when we started collecting pilot fMRI data. Whereas work in the rat has shown prediction errors in a few places in the brain, some fMRI work has shown quite widespread signals. This might be true, but one possible confound we find in many papers in the literature is that regions that simply respond to the US could be incorrectly described as showing prediction errors because of the way the fMRI analysis is done. This is explained in Fig. 6 of our paper and in the figure below (thanks to Brenton McMenamin).

The problem is that when a regressor modeling the US (say, reward) is introduced in the model in addition to the prediction-error regressor, it can absorb variance in such a way that what is left over looks like a prediction error. The region will then appear to show a prediction error simply because it responds to reward itself.
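To see the general danger in its simplest form (a toy reconstruction of my own, not the exact analysis of any specific paper, which can fail in subtler ways), the sketch below simulates a region that responds only to reward delivery. Because a prediction-error regressor is correlated with reward occurrence, a model that omits or mishandles the US response hands that region a spurious prediction-error effect; modeling the US properly makes the effect vanish.

```python
# A US-only region can masquerade as a prediction-error region when the
# model mishandles the US. All quantities below are made up.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
expected = rng.random(n_trials) < 0.5              # cue predicts reward or not
us = (rng.random(n_trials) < 0.7).astype(float)    # reward delivered (the US)
pe = us - np.where(expected, 0.9, 0.1)             # crude prediction error

signal = 1.0 * us + rng.normal(0, 0.5, n_trials)   # region codes the US only

def betas(X, y):
    """Ordinary least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_pe_only = np.column_stack([np.ones(n_trials), pe])
X_both = np.column_stack([np.ones(n_trials), us, pe])

print("PE beta, US not modeled:", betas(X_pe_only, signal)[1].round(2))  # inflated
print("PE beta, US modeled:    ", betas(X_both, signal)[2].round(2))     # near zero
```

The real fMRI situation adds hemodynamic convolution and orthogonalization choices on top of this, but the core issue is the same: with correlated regressors, whether a region is labeled “prediction error” is hostage to how the US itself is modeled.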

I have no idea how prevalent this problem is, and it is at times even unclear how the data were modeled. (In fact, as an aside, it is at times very surprising how little some papers explain about what was actually done.) Some people have discussed this problem, but again I’m not sure how widely it is being taken into account.

(Figure: adapted from Fig. 6 of our paper.)

Understanding brain networks and brain organization: new paper

New paper to appear in Physics of Life Reviews, which will come with commentary (including by Dani Bassett, Barry Horwitz, Claus Hilgetag, Vince Calhoun, Michael Anderson, Evan Thompson, and Franco Cauda, among others).

A lot has been written about brain networks, especially after about 2005. In Chapter 8 of my book The Cognitive-Emotional Brain, I wrote about this from the perspective of understanding structure-function mappings (what do regions do? what do networks do?). In the paper in press in Physics of Life Reviews, I update some of my evolving thoughts on this question. Some of the newer points are:

  • Is brain architecture really small world? Cortical connectivity seems too dense. But an important ingredient of small-world organization, the existence of non-local connections (especially long-range ones), is clearly present. Although they appear to be relatively weak, long-range connections play a major role in the cortical network (see the sketch after this list).
  • The mapping from network (as a set of regions) to function is not one-to-one. For instance: Menon, Uddin, and colleagues suggest that a salience network involving the anterior insula and the anterior cingulate cortex “mediates attention to the external and internal worlds”. They note, however, that “to determine whether this network indeed specifically performs this function will require testing and validation of a sequence of putative network mechanisms…” I argue that a network’s operation will depend on several more global variables, namely an extended context that includes the state of several “neurotransmitter systems”, arousal, slow wave potentials, etc. In other words, a network that is solely defined as a “collection of regions” is insufficient to eliminate the one-to-many problem observed with brain regions (such as the amygdala being involved in several functions).
  • Cortical myopia (echoing points by Parvizi, 2009). Large-scale analyses and descriptions of brain architecture suggest principles of organization that become apparent when information is combined across many individual studies. Unfortunately, most of these “meta” studies are cortico-centric – they pay little or no attention to subcortical connectivity. This paints a rather skewed view of brain organization. For example, if one considers “signal communication” as proposed by Sherman (see figure), cortico-cortical communication might go via the thalamus (including the pulvinar), flipping the traditional view.

    Scheme by Sherman SM. The thalamus is more than just a relay. Current Opinion in Neurobiology. 2007;17:417-22.


  • Evolution. Related to the previous point, I suggest that to understand the contributions of subcortical connectivity, we need to consider the evolution of the brain. For example: a cortico-centric framework is one in which the “newer” cortex controls subcortical regions, which are typically assumed to be relatively unchanged throughout evolution. Instead, I suggest that cortex and subcortex change in a coordinated fashion.
  • The importance of weak connections. I critique a central component of the “standard” network view, which goes something like this: “network states depend on strong structural connections; conversely, weak connections have a relatively minor impact on brain states.” My contention is that weak connections are much more important.
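As an aside, the small-world point from the first bullet is easy to demonstrate with a toy graph (illustrative sizes and rewiring probability; networkx’s Watts-Strogatz generator is standard): rewiring just a fraction of local links into long-range shortcuts leaves clustering high while collapsing the average path length.

```python
# Small-world demo: a few long-range shortcuts dramatically shorten paths
# while barely denting local clustering. Sizes/probabilities are illustrative.
import networkx as nx

n, k = 100, 6   # 100 nodes, each initially tied to its 6 nearest ring neighbors

lattice = nx.watts_strogatz_graph(n, k, p=0.0)                    # no shortcuts
rewired = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # ~10% rewired

for name, g in [("lattice     ", lattice), ("small world ", rewired)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          " mean path length:", round(nx.average_shortest_path_length(g), 2))
```

Deleting those sparse long-range edges would push the path length back up, one way of seeing why even relatively weak non-local connections can matter so much for global network states.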

 

Reference: Parvizi J. Corticocentric myopia: old bias in new cognitive sciences. Trends in Cognitive Sciences. 2009;13:354-9.

Dopamine: reward or a lot more? We knew it was a lot more…

I recently took the time to read this paper, something that I should have done a while back… Bromberg-Martin, Matsumoto, and Hikosaka (2010) provide a great perspective on the multidimensional nature of dopamine neurons and signaling. I’m not going to summarize it; the authors have done a better job. It’s worth reading the whole quote (and paper) (pp. 827-828; emphasis added):

Scheme by Bromberg-Martin, Matsumoto, and Hikosaka (2010).

“An influential concept of midbrain DA [dopamine] neurons has been that they transmit a uniform motivational signal to all downstream structures. Here we have reviewed evidence that DA signals are more diverse than commonly thought. Rather than encoding a uniform signal, DA neurons come in multiple types that send distinct motivational messages about rewarding and nonrewarding events. Even single DA neurons do not appear to transmit single motivational signals. Instead, DA neurons transmit mixtures of multiple signals generated by distinct neural processes. Some reflect detailed predictions about rewarding and aversive experiences, while others reflect fast responses to events of high potential importance…

Many previous theories have attempted to identify DA neurons with a single motivational process such as seeking valued goals, engaging motivationally salient situations, or reacting to alerting changes in the environment. In our view, DA neurons receive signals related to all three of these processes. Yet rather than distilling these signals into a uniform message, we have proposed that DA neurons transmit these signals to distinct brain structures in order to support distinct neural systems for motivated cognition and behavior. Some DA neurons support brain systems that assign motivational value, promoting actions to seek rewarding events, avoid aversive events, and ensure that alerting events can be predicted and prepared for in advance. Other DA neurons support brain systems that are engaged by motivational salience, including orienting to detect potentially important events, cognitive processing to choose a response and to remember its consequences, and motivation to persist in pursuit of an optimal outcome. We hope that this proposal helps lead us to a more refined understanding of DA functions in the brain, in which DA neurons tailor their signals to support multiple neural networks with distinct roles in motivational control.”

Fantastic! Above is a picture of their scheme (Fig. 7).

Reference: Bromberg-Martin, E. S., Matsumoto, M., & Hikosaka, O. (2010). Dopamine in motivational control: rewarding, aversive, and alerting. Neuron, 68(5), 815-834.