Transient brain dynamics

Here, I illustrate some simple ideas for understanding multi-region dynamics. The goal is to describe the joint state of a set of brain regions, and how it evolves over time.

Figure 1. Left: Time series data from three regions of interest. Right: State-space representation with temporal trajectories for tasks A and B.
From: Venkatesh, M., Jaja, J., & Pessoa, L. (2019). Brain dynamics and temporal trajectories during task and naturalistic processing. Neuroimage, 186, 410-423.

Imagine a system of n brain regions, labeled r1 to rn, each of which with an activation (or firing rate) strength that varies as a function of time, denoted x1(t), x2(t), and so on. We can group these activities into a vector. Recall that a vector is simply an ordered set of values, such as x, y, and z in three dimensions. At time t1, the vector x(t1) = [x1(t1), x2(t1), …, xn(t1)] specifies the state of the regions (that is, their activations) at time t1. By plotting how this vector moves as a function of time, it is possible to visualize the temporal evolution of the system as the behavior in question unfolds[1]. We can call the succession of states at t1, t2, etc., visited by the system a trajectory. Now, suppose an animal performs two tasks, A and B, and that we collect responses across three brain regions at multiple time points. We can then generate a trajectory for each task (Figure 1). Each trajectory provides a potentially unique signature for the task in question[2]. We’ve created a four-dimensional representation of each task: it considers three locations in space (the regions where the signals were recorded) and one dimension of time. Thus, each task is summarized in terms of responses across multiple brain locations and time. Of course, we can record from more than three places; that only depends on what our measuring technique allows us to do. If we record from n spatial locations, we’ll be dealing with an (n+1)-dimensional situation (the +1 comes from adding the dimension of time). Whereas we can’t plot that on a piece of paper, fortunately the mathematics is the same, so this poses no problems for the data analysis.
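To make the state-vector idea concrete, here is a minimal sketch in Python, with hypothetical randomly generated signals standing in for real recordings:

```python
import numpy as np

# Hypothetical data: activations of n = 3 regions sampled at T time points
# for each task (rows = time points, columns = regions).
rng = np.random.default_rng(seed=0)
T, n = 200, 3
task_A = rng.standard_normal((T, n)).cumsum(axis=0)  # stand-in signals
task_B = rng.standard_normal((T, n)).cumsum(axis=0)

# The state at time t1 is simply one row: a vector of n activations.
state_t1 = task_A[0]

# The trajectory is the succession of such states across time; with n
# regions it traces a path in n-dimensional space, and adding time gives
# the (n + 1)-dimensional picture described in the text.
trajectory_A = task_A   # shape (T, n): one state vector per time point
print(state_t1, trajectory_A.shape)
```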

Thinking in terms of spatiotemporal trajectories brings with it multiple features. The object of interest – the trajectory – is spatially distributed and, of course, dynamic. It also encourages a process-oriented framework, instead of trying to figure out how a brain region responds to a certain stimulus. The process view also changes the typical focus on billiard-ball causation – the white ball hits the black ball, or region A excites region B. Experimentally, a central goal then becomes estimating trajectories robustly from available data. Some readers may feel that, yes, trajectories are fine, but aren’t we merely describing the system but not explaining it? Why is the trajectory of task A different from that of task B, for example? Without a doubt, a trajectory is not the be-all and end-all of the story. Deciphering how it comes about is ultimately the goal, which will require more elaborate models, and here computational models of brain function will be key. In other words, what kind of system, and what kind of interactions among system elements generate similar trajectories, given similar inputs and conditions?
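Continuing the sketch above (still with hypothetical data), one simple way to quantify how distinct two task signatures are is to measure the distance between time-matched states on the two trajectories; real analyses would also need to handle noise, trial averaging, and temporal alignment:

```python
import numpy as np

def trajectory_distance(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Mean Euclidean distance between time-matched states.

    Both inputs are (T, n) arrays: T time points, n regions.
    """
    return float(np.linalg.norm(traj_a - traj_b, axis=1).mean())

# Larger values suggest the two tasks traced more distinct paths in state space.
```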


[1] Rabinovich et al. (2008); Buonomano and Maass (2009).

[2] The proximity of trajectories depends on the dimensionality of the system in question (which is usually unknown) and the dimensionality of the space where data are being considered (say, after dimensionality reduction). Naturally, points projected onto a lower-dimensional representation might be closer than in the original higher-dimensional space.
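A small numerical illustration of the point in note [2], using made-up coordinates: two states that are far apart in three dimensions can coincide after projection to two dimensions.

```python
import numpy as np

p = np.array([0.0, 0.0, 0.0])
q = np.array([0.0, 0.0, 5.0])          # differs from p only along z
print(np.linalg.norm(p - q))           # 5.0 in the original 3-D space

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # projection that drops the z axis
print(np.linalg.norm(P @ p - P @ q))   # 0.0 after projection to 2-D
```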

Causation in the brain: Do complex systems play a role?

Nowhere else is the challenge of embracing complex systems greater than when confronting the problem of causation. “What causes what” is the central problem in science, at the very core of the scientific enterprise.

Diagrams of causal frameworks. (A) Simple billiard ball scheme of causation. (B) The two balls are connected by a spring, and the goal of explanation is not to clarify where ball 2 ends up. Instead, when the initial force is applied to ball 1, the goal is to understand the evolution of the ball 1–ball 2 system as it evolves temporally. (C) More generally, a series of springs with different coupling properties links the multiple elements of the system, which will exhibit considerably more complex dynamics.
From Pessoa, L. (2017). Cognitive-motivational interactions: Beyond boxes-and-arrows models of the mind-brain. Motivation Science, 3(3), 287.

One of the missions of neuroscience is to uncover the nature of signals in different parts of the brain, and ultimately what causes them. A type of reasoning that is prevalent is what I’ve called the billiard ball model of causation[1]. In this Newtonian scheme, force applied to a ball leads to its movement on the table until it hits the target ball. The reason the target ball moves is obvious: the first ball hits it and, via the force imparted, sets it in motion. Translated into neural jargon, we can rephrase it as follows: a signal external to a brain region excites neurons, which excite or inhibit neurons in another brain region via the anatomical pathways connecting them. But this way of thinking, which has been very productive in the history of science, is too impoverished when complex systems – the brain for one – are considered.

We can highlight two properties of the brain that immediately pose problems for standard, Newtonian causation[2]. First, anatomical connections are frequently bidirectional, so physiological influences go both ways, from A to B and back. If one element causally influences another while the second simultaneously causally influences the first, the basic concept breaks down. Situations like this have prompted philosophers to invoke the idea of “mutual causality”[3]. For example, consider two boards arranged in a Λ shape, with their tops leaning against each other: each board is holding the other one up. Second, convergence of anatomical projections implies that multiple regions concurrently influence a single receiving node, making the attribution of unitary causal influences problematic.
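To see why one-way causal talk breaks down here, consider a minimal sketch (a toy linear system, not a biophysical model) in which two regions drive each other continuously:

```python
import numpy as np

# Toy dynamics: each region decays toward baseline while being driven
# by the other, integrated with small explicit (Euler) time steps.
dt, steps = 0.01, 1000
a, b = 1.0, 0.0                 # initial activations of regions A and B
w = 0.8                         # bidirectional coupling strength
history = np.empty((steps, 2))
for t in range(steps):
    da = -a + w * b             # A is driven by B...
    db = -b + w * a             # ...while B is simultaneously driven by A
    a, b = a + dt * da, b + dt * db
    history[t] = a, b

# Neither time course can be attributed to a one-way cause: each trace
# reflects the joint evolution of the coupled pair.
```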

If the two properties above already present problems, what are we to make of the extensive cortical-subcortical anatomical connectional systems and, indeed, the massive combinatorial anatomical connectivity discussed in Chapter 9? If, as proposed, the brain basis of behavior involves distributed, large-scale cortical-subcortical networks, new ways of thinking about causation are called for. The upshot is that Newtonian causality provides an extremely poor candidate for explanation in non-isolable systems like the brain.

What are other ways of thinking about systems? To move away from individual entities (like billiard balls), we can consider the temporal evolution of “multi-particle systems”, such as the motion of celestial bodies in a gravitational field. Physicists and mathematicians have studied this problem for centuries, and it was central in Newtonian physics. For example, what types of trajectories do two bodies, such as the earth and the sun, exhibit? This so-called two-body problem was solved by Johann Bernoulli in 1734. But what if we’re interested in three bodies, say we add the moon to the mix? The answer will be surprising to readers who think the problem “should be easy”. On the contrary, this problem has vexed mathematicians for centuries, and in fact cannot be solved! At least not in the sense that the two-body problem can, because it doesn’t admit a general mathematical solution.

So, what can be done? Instead of analytically solving the problem, one can employ the laws of motion based on gravity and use computer simulations to determine future paths[4]. For example, if we know the position of three planets at a given time, we can try to determine their positions in the near future by applying equations that explicitly calculate all intermediate positions. In the case of the brain, where we don’t have comparable equations, we can’t do the same. But we can extract a useful lesson, and think of the joint state of multiple parts of the brain at a given time. How does this state, which can be summarized by the activity level of brain regions, change with time?
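As an illustration of the simulation strategy, here is a toy two-dimensional three-body integrator with made-up initial conditions and units where G = 1 (a sketch, not a production N-body code):

```python
import numpy as np

pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])  # hypothetical positions
vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.0, 0.0]])  # hypothetical velocities
mass = np.array([1.0, 1.0, 0.5])
dt = 0.001                                             # small time step

def accelerations(pos):
    """Newtonian gravitational acceleration on each body (G = 1)."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += mass[j] * r / np.linalg.norm(r) ** 3
    return acc

# Explicitly compute all intermediate positions, tiny step by tiny step.
for _ in range(10_000):
    vel += dt * accelerations(pos)
    pos += dt * vel
print(pos)   # positions of the three bodies in the (near) future
```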

Before describing some of these ideas further, I’ll propose one more reason thinking in terms of dynamics is useful. For that, we need to go back in time a little.


[1] Pessoa (2017, 2018). 2017: Motivation Science; 2018: CONB.

[2] Mannino and Bressler (2015).

[3] Hausman (1984); Frankel (1986).

[4] Computational investigations in the past years have revealed a large number of families of periodic orbits (Šuvakov and Dmitrašinović, 2013).

Do we need complexity to understand the brain?

Throughout the book, we’ve been describing a systems view of brain function that encourages reasoning in terms of large-scale, distributed circuits. But when we adopt this stance, pretty early on it becomes clear that “things are complicated”. Indeed, we might be asked whether we need to entangle things that much. Shouldn’t we attempt simpler approaches and basic explanations first? After all, an important principle in science is parsimony, discussed in reference to what’s called Occam’s razor, named after the Franciscan friar William of Ockham and his dictum pluralitas non est ponenda sine necessitate: “plurality should not be posited without necessity.” In other words, keep it simple, or as Einstein is often quoted, “everything should be made as simple as possible, but no simpler.”

Lorenz strange attractor (from https://i.ytimg.com/vi/aAJkLh76QnM/maxresdefault.jpg)

This idea makes sense, of course. Consider a theory T proposed to explain a set of phenomena. Now suppose that an exception to T is described, say, a new experimental observation that is inconsistent with it. While not good news for proponents of T, the finding need not be the theory’s death knell. One possibility is to extend T so that it can handle the exception, thereby preventing the theory from being falsified. More generally, whenever T fails, one can find a way to extend it so that the piece of data it previously could not handle now is accounted for. As T breaks down further and further, it could be gradually extended to explain the additional observations with a series of possibly ad hoc extensions. That’s clearly undesirable. At some point, the theory in question becomes so bloated that simpler explanations are heavily favored in comparison.

Whereas parsimony is abundantly reasonable as a general approach, what counts as “parsimonious” isn’t exactly clear. In fact, that’s where the rubber hits the road. Take an example from physics. In 1978, the American astronomer Vera Rubin[1] noted that stars in the outskirts of galaxies were rotating too fast, contradicting what would be predicted by our theory of gravity. It was as if the mass observed in the universe was not enough to keep the galaxies in check, triggering a vigorous search for potential sources of mass not previously accounted for. Perhaps the mass of black holes had not been tallied properly? Or that of other heavy objects, such as neutron stars? But when everything known was added up, the discrepancy was – and still is – huge. In fact, five times more mass would be needed than what we have been able to put on the scale.

To explain the puzzle, physicists postulated the concept of dark matter, a type of matter previously unknown and possibly made of as-yet undiscovered subatomic particles. This would account for a whopping 85% of the mass of the universe. We see that to solve the problem, physicists left the standard theory of gravitation (let’s call it G) unmodified but had to postulate an entirely new type of matter (call it m). That’s not a small change! In 1983, the Israeli physicist Mordechai Milgrom proposed a different solution, a relatively small change to G. In his “modified gravity” theory, gravity works as usual except when considering systems as massive as entire galaxies. Without getting into the details here, the change essentially involved adding a new constant, called a0, to the standard theory of gravity[2].
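For readers curious about where a0 enters, the standard schematic statement of Milgrom’s proposal (a textbook summary, not drawn from the passage above) modifies the relation between the Newtonian acceleration and the actual acceleration only when accelerations are very small:

```latex
% Schematic MOND relation: a_N is the Newtonian acceleration, a_0 the new constant.
\[
  \mu\!\left(\frac{a}{a_0}\right) a = a_N,
  \qquad \mu(x) \to 1 \ \text{for}\ x \gg 1 \ \text{(Newton recovered)},
  \qquad \mu(x) \to x \ \text{for}\ x \ll 1 .
\]
```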

So, our old friend G doesn’t work. We can either consider {G + m} or {G + a0} as potential solutions. Physicists have not been kind to the latter, considering it rather ad hoc. In contrast, they have embraced the former and devoted monumental efforts to finding new kinds of matter that could tip the scales in the right direction. At present the mystery remains unsolved, and larger and more precise instruments continue to be developed in the hope of cracking the problem. The point of this brief incursion into physics was not to delve into the details of the dispute, but to illustrate that parsimony is easier said than done. What is considered frugal in theoretical terms depends very much on the intellectual mindset of a community of scientists. And, human as they are, they disagree.

Another example that speaks to parsimony relates to autonomic brain functions that keep the body alive – for example, regulating food and liquid intake, respiration, heart activity, and the like. The anatomical interconnectivity of this system has posed major challenges to deciphering how particular functions are implemented. One possibility, along the lines proposed in this book, is that multi-region interactions collectively determine how autonomic processes work. But consider an alternative position. In an influential review, Clifford Saper[3] stated that “although network properties of a system are a convenient explanation for complex responses, they tell us little about how they actually work, and the concept tends to stifle exploration for more parsimonious explanations.” According to him, the “highly interconnected nature of the central autonomic control system has for many years served as an impediment to assigning responsibility for specific autonomic patterns” to particular groups of neurons.

It’s not a stretch to say that thinking in terms of complex systems is not entirely natural to most biologists. One could say that they in fact view it with a non-trivial amount of suspicion, as if this approach overcomplicates things. Their training emphasizes other skills, after all. Unfortunately, if left unchecked, the drive toward simple explanations can lead researchers to adopt distorted views of biological phenomena, as when proposing that a “schizophrenia gene” explains this devastating condition, or that a “social cognition brain area” allows humans, and possibly other primates, to have behavioral capabilities not seen in other animals. Fortunately, neuroscience is gradually changing to reflect a more interactionist view of the brain. Whereas earlier research aimed to understand how individual regions work, current research takes to heart the challenge of deciphering how circuits work.

A key aspect of the scientific enterprise is conceptual. Scientists decide which questions are important by accepting or rejecting papers in the top journals, funding particular research projects, and so on. Many of these judgments are passed from “outside of science”, in the sense that they are not inherent to the data collected by scientists. How one studies natural phenomena is based on the accepted approaches and methods of practicing researchers. Accordingly, the decision to embrace or shun complex systems is a collective viewpoint. To some, network-based explanations are too unwieldy and lacking in parsimony. In diametrical contrast, explanations heavily focused on localized circuits can be deemed oversimplistic reductionism, or plainly naive.

In the end, science is driven by data, and the evidence available puts pressure on the approaches adopted. Whereas mapping the human genome initially took a decade and three billion dollars, it can now be done routinely for under a thousand dollars in less than two days, enabling unprecedented views of genetics. In the case of the brain, it’s now possible to record thousands of neurons simultaneously, opening a window into how large groups of neurons are involved in behaviors in ways that weren’t possible before. (Remember that most of what we know about neurophysiology relied on recordings of individual neurons or very small sets of cells at a time.) For example, in a trailblazing study, Engert and collaborators recorded single neurons across the entire brain of a small zebrafish[4]. Their goal was to record not most but all of the creature’s neurons! Perhaps one day in the not-so-distant future the same will be possible for larger animals.


[1] http://www.astronoo.com/en/articles/mond-theory.html

[2] http://www.scholarpedia.org/article/The_MOND_paradigm_of_modified_dynamics

[3] Saper, C. B. (2002, p. 460).

[4] Zebrafish during the larval stage (Ahrens et al., 2012).