The Brain Network

Chapter 1: From one area at a time to networked systems

We begin our journey into how the brain brings about the mind: our perceptions, actions, thoughts, and feelings. Historically, the study of the brain has proceeded in a divide-and-conquer manner, trying to figure out the function of individual areas – chunks of gray matter that contain neurons in either the cortex or subcortex – one at a time. This book makes the case that, because the brain is not a modular system, we need conceptual tools that can help us decipher how highly networked, complex systems function.

In 2016, a group of investigators published a map of the major subdivisions of the human cerebral cortex — the outer part of the brain — in the prestigious journal Nature (Figure 1.1). The partition delineated 180 areas in each hemisphere (360 in total), each representing a unit of “architecture, function, and connectivity”[1]. Many researchers celebrated the new result, highlighting the long-overdue need to replace the de facto standard called the “Brodmann map.” Published in 1908 by Korbinian Brodmann, the map describes approximately 50 areas in each hemisphere (100 in total) based on local features, such as cell type and density, that Brodmann discovered under the microscope.

Figure 1.1. Map of brain areas of the cortex published in 2016 with the hope of replacing the standard Brodmann map of 1908. In the 2016 map, each hemisphere (or half of the brain) contains 180 areas. Figure from Glasser et al. (2016).

Notwithstanding the need to move past a standard created prior to the First World War, the 2016 cartographic description builds on an idea that was central to prior efforts: brain tissue should be understood in terms of a set of well-defined, spatially delimited sectors. Thus the concept of a brain area or region[2]: a unit that is both anatomically and functionally meaningful. The notion of an area/region is at the core of neuroscience as a discipline, with its central challenge of unravelling how behaviors originate from cellular matter. Put another way, how does function (manifested externally via behaviors) relate to structure (such as different neuron types and their arrangement)? How do groups of neurons – the defining cell type of the brain – lead to sensations and actions?

As a large and heterogeneous collection of neurons and other cell types, the central nervous system – with all of its cortical and subcortical parts – is a formidably complex organ. (The cortex is the outer surface with grooves and bulges; the subcortex comprises other cell masses that sit underneath. We’ll go over the basics of brain anatomy in Chapter 2.) To unravel how it works, some strategy of divide and conquer seems to be necessary. How else can it be understood without breaking it down into subcomponents? But this approach also exposes a seemingly insurmountable chicken-and-egg problem: if we don’t know how it works, how can we determine the “right” way to subdivide it? Finding the proper unit of function, then, has been at the center of the quest to crack the mind-brain problem.

Historically, two winners in the search for rightful units have been the neuron and the individual brain area. At the cellular level, the neuron reigns supreme. Since the work of Ramon y Cajal[3], the Spanish scientific giant who helped establish neuroscience as an independent discipline, the main cell type of the brain is considered to be the neuron (which comes in many varieties, both in terms of morphology and physiology). These cells communicate with one another via electrochemical signaling. If they are sufficiently excited by other neurons, their equilibrium voltage changes and they generate a “spike”: an electrical signal that propagates along the neuron’s thin extensions (called axons), much like a current flowing through a wire. The spike from a neuron can then influence downstream neurons. And so on.

At the supra-cellular level, the chief unit is the area. But what constitutes an area? Dissection techniques and the study of neuroanatomy during the European Renaissance were propelled to another level by Thomas Willis’s monumental Cerebri anatome, published in 1664. The book rendered in exquisite detail the morphology of the human brain, including detailed drawings of subcortical structures and the cerebral hemispheres containing the cortex. For example, Willis described a major structure of the subcortex, the striatum, that we’ll discuss at length in the chapters to follow. With time, as anatomical methods improved with more powerful microscopes and diverse stains (which mark the presence of chemical compounds in the cellular milieu), more and more subcortical areas were discovered. In 1819, the German anatomist Karl Burdach described a mass of gray matter that could be seen in slices through the temporal lobe. He called the structure the “amygdala” – given that it’s shaped like an almond[4] (“amygdala” means almond in Latin) – now famous for its contributions to fear processes. And techniques developed in the second half of the 20th century revealed that it’s possible to delineate at least a dozen subregions within its overall territory.

The seemingly benign question – what counts as an area? – is far from straightforward. For instance, is the amygdala one region or twelve? This region is far from an esoteric case. All subcortical areas have multiple subdivisions, and some have boundaries that are more like fuzzy zones than clearly defined lines. The challenges of partitioning the cortex, the outer laminated mantle of the cerebrum, are enormous too. That’s where the work of Brodmann and others, and more recently the research that led to the 180-area parcellation (Figure 1.1), comes in. It introduces a set of criteria to subdivide the cortex into constituent parts. For example, although neurons in the cortex are arranged in a layered fashion, the number of cell layers can vary. Therefore, identifying a transition between two cortical sectors is aided by differences in cell density and layering.

How modular is the brain?

When subdividing a larger system – one composed of lots of parts – the concept of modularity comes into play. Broadly speaking, it refers to the degree of interdependence of the many parts that comprise the system of interest. On the one hand, a decomposable system is one in which each subsystem operates according to its own intrinsic principles, independently of the others – we say that this system is highly modular. On the other hand, a nondecomposable system is one in which the connectivity and inter-relatedness of the parts is such that they are no longer clearly separable. Whereas the two extremes serve as useful anchors to orient our thinking, in practice one finds a continuum of possible organizations, so it’s more useful to think of the degree of modularity of a system.
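The idea of a degree of modularity can be made concrete with a toy sketch. The measure below is a deliberate simplification invented for illustration (it is not the standard modularity score used in network science): simply count what fraction of a system’s connections stay within its putative parts.

```python
def within_group_fraction(edges, group_of):
    """Fraction of connections that stay inside a single group.
    Close to 1.0 -> highly modular (nearly decomposable);
    much lower -> the parts are heavily intertwined."""
    within = sum(1 for a, b in edges if group_of[a] == group_of[b])
    return within / len(edges)

# Six nodes split into two putative modules, with one bridging link.
group_of = {0: "left", 1: "left", 2: "left",
            3: "right", 4: "right", 5: "right"}
edges = [(0, 1), (1, 2), (0, 2),   # left-module wiring
         (3, 4), (4, 5), (3, 5),   # right-module wiring
         (2, 3)]                   # a single bridge between the modules
print(within_group_fraction(edges, group_of))  # 6 of 7 edges are within-module
```

In a real analysis one would use a measure such as Newman’s modularity, which also corrects for the connections expected by chance; the sketch above only conveys the intuition of a continuum between decomposable and nondecomposable organizations.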

Science as a discipline is inextricably associated with understanding entities in terms of a set of constituent subparts. Neuroscience has struggled with this modus operandi since its early days, and debates about “localizationism” versus “connectionism” – how local or how interconnected brain mechanisms are – have always been at the core of the discipline. By and large, a fairly modular view has prevailed in neuroscience. Fueled by a reductionistic drive that has served science well, most investigators have formulated the study of the brain as a problem of dissecting the multitude of “suborgans” that make it up. To be sure, brain parts are not viewed as isolated islands, and are understood to communicate with one another. But, commonly, the plan of attack assumes that the nervous system is decomposable[5] in a meaningful way in terms of patches of tissue (as in Figure 1.1) that perform well-defined computations – if only we could determine what those are.

There have been proposals of non-modular processing, too. The most famous example is that of Karl Lashley who, starting in the 1930s, defended the idea of “cortical equipotentiality,” namely that most of the cortex functions jointly, as a unit. Thus, the extent of a behavioral deficit caused by a lesion depends on the amount of cortex that is compromised – small lesions cause small deficits, large lesions cause larger ones. Although Lashley’s proposal was clearly too extreme and was rejected empirically, many ideas of decentralized processing have been entertained by neuroscientists throughout history. Let’s discuss some of their origins.

The networked brain

The field of artificial intelligence (AI) is said to have been born at a workshop at Dartmouth College in 1956. Early AI focused on the development of computer algorithms that could emulate human-like “intelligence,” including simple forms of problem solving, planning, knowledge representation, and language understanding. A parallel and competing approach – what was to become the field of artificial neural networks, or neural networks, for short – took its inspiration instead from natural intelligence, and adopted basic principles of the biology of nervous systems. In this non-algorithmic framework, collections of simple processing elements work together to execute a task. An early example was the problem of pattern recognition, such as recognizing sequences of 0s and 1s. A more intuitive, modern application addresses the goal of image classification. Given a set of pictures coded as a collection of pixel intensities, the task is to generate an output that signals a property of interest; say, output “1” if the picture contains a face, “0” otherwise. The underlying idea behind artificial neural networks was that “intelligent” behaviors result from the joint operation of simple processing elements, like artificial neurons that sum their inputs and generate an output if the sum exceeds a certain threshold value. We’ll discuss neural networks again in Chapter 8, but here we emphasize their conceptual orientation: thinking of a system in terms of collective computation.
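The “simple processing element” just described can be written down in a few lines. The sketch below is a toy threshold unit in the spirit of early artificial neurons; the weights, threshold, and three-“pixel” inputs are made up for illustration:

```python
def threshold_unit(inputs, weights, threshold):
    """Artificial neuron: output 1 (e.g., "face") if the weighted sum
    of the inputs exceeds the threshold, otherwise output 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Hypothetical three-pixel "images"; the weights favor the first two pixels.
weights = [0.6, 0.6, 0.1]
print(threshold_unit([1, 1, 0], weights, threshold=1.0))  # sum 1.2 -> fires: 1
print(threshold_unit([0, 0, 1], weights, threshold=1.0))  # sum 0.1 -> silent: 0
```

A real image classifier stacks many such units in layers and learns the weights from data (a topic we return to in Chapter 8); even a single unit, though, conveys the building block of collective computation.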

The 1940s and 1950s were also a time when, perhaps for the first time, scientists started systematically developing theories of systems in general. The intellectual movement of cybernetics was centrally concerned with how systems regulate themselves so as to remain within stable regimes; for example, normal human body temperature while awake remains within a narrow range, varying by less than a degree Celsius. Systems theory, also called general systems theory or complex systems theory, tried to formalize how certain properties might originate from the interactions of multiple, and possibly simple, constituent parts. How does “wholeness” come about in a way that is not immediately explained by the properties of the parts?

Fast forward to 1998 when Duncan Watts and Steven Strogatz published a paper entitled “Collective dynamics of ‘small world’ networks”[6]. The study proposed that the organization of many biological, technological, and social networks gives them enhanced signal-propagation speed, computational power, and synchronization among parts. And these properties are possible even in systems where most elements are connected locally, with only some elements having “arbitrary” connections. (For example, consider a network of interlinked computers, such as the internet. Most computers are only connected to others in a fairly local manner; say, within a given department within a company or university. However, a few computers have connections to other computers that are geographically quite far.)

Watts and Strogatz applied their techniques to study the organization of a social network containing more than 200,000 actors. As we’ll discuss in Chapter 10, to make a “network” out of the information they had available, they considered two actors to be “connected” if they had appeared in a film together. Although a given actor was only connected to a small number of other performers (around 60), they discovered that it was possible to find short “paths” between any two actors. (The path A – B – C links actors A and C, who have not participated in the same film, if both of them have co-acted with actor B.) Remarkably, on average, paths containing only four connections (such as the path A – B – C – D – E linking actors A and E) separated a given pair of actors picked at random from the set of 200,000. The investigators dubbed this property “small world,” by analogy with the popularly known idea of “six degrees of separation,” and suggested that it is a hallmark of many types of networks – one can travel from A to Z very expediently.
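The path-finding idea behind the actor analysis can be sketched with breadth-first search over a tiny invented co-acting network (the actors and links below are hypothetical, chosen to mirror the A – B – C – D – E example above):

```python
from collections import deque

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: return the smallest number of co-acting
    links connecting two actors, or None if no path exists."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        actor, distance = frontier.popleft()
        if actor == goal:
            return distance
        for costar in graph.get(actor, []):
            if costar not in visited:
                visited.add(costar)
                frontier.append((costar, distance + 1))
    return None

# A chain of co-acting relationships: A-B, B-C, C-D, D-E.
costars = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
           "D": ["C", "E"], "E": ["D"]}
print(degrees_of_separation(costars, "A", "E"))  # 4 links separate A and E

# One long-range "shortcut" (A has also co-acted with D) shrinks the
# path dramatically -- the small-world effect in miniature.
costars["A"].append("D")
costars["D"].append("A")
print(degrees_of_separation(costars, "A", "E"))  # now only 2
```

Adding just a few such long-range links to an otherwise locally wired network is exactly what gives small-world systems their short average paths.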

The paper by Watts and Strogatz, and a related paper by Albert-László Barabási and Réka Albert that appeared the following year[7], set off an avalanche of studies on what has become known as “network science” – the study of interconnected systems composed of more elementary components, such as a social network of individual persons. This research field has grown enormously since then, and novel techniques are actively being applied to social, biological, and technological problems to refine our view of “collective behaviors.” These ideas resonated with research in brain science, too, and it didn’t take long before investigators started applying network techniques to study their data. This was particularly the case in human neuroimaging, which employs Magnetic Resonance Imaging (MRI) scanners to measure activity throughout the brain during varied experimental conditions. Network science provides a spectrum of analysis tools to tackle brain data. First and foremost, the framework encourages researchers to conceptualize the nervous system in terms of network-level properties. That is to say, whereas individual parts – brain areas or other such units – are important, collective or system-wide properties must be targeted.

Neuroscientific explanations

Neuroscience seeks to answer the following central question: How does the brain generate behavior?[8] Broadly speaking, there are three types of study: lesion, activity, and manipulation. Lesion studies capitalize on naturally occurring injuries, including those due to tumors and vascular accidents; in non-human animals, precise lesions can be created surgically, thus allowing much better control over the affected territories. What types of behavior are impacted by such lesions? Perhaps patients can’t pay attention to visual information as they used to; or maybe they have difficulty moving a limb. Activity studies measure brain signals. The classical technique is to insert a microelectrode into the tissue of interest and measure electrical signals in the vicinity of neurons (it is also possible to measure signals inside a neuron itself, but such experiments are more technically challenging). Voltage changes provide an indication of a change in state of the neuron(s) closest to the electrode tip. And determining how such changes are tied to the behaviors performed by an animal provides clues about how they contribute to them. Manipulation studies directly alter the state of the brain by either silencing or enhancing signals. Again, the goal is to see how sensations and actions are affected.

Although neuroscience studies are incredibly diverse, one way to summarize them is as follows: “Area or circuit X is involved in behavior Y” (where a circuit is a group of areas). A lesion study might determine that patients with damage to the so-called anterior insular cortex are able to quit smoking easily, without relapse, leading to the conclusion that the insula is a critical substrate of smoking addiction[9]. Why? Quitting is hard in general, of course. But it turns out to be easy if one’s anterior insula is nonfunctional. It’s logical, therefore, to surmise that, when intact, this region’s operation somehow promotes addiction. An activation study using functional MRI might observe stronger signals in parts of the visual cortex when participants view pictures of faces compared to when they are shown many kinds of pictures that don’t contain faces (pictures of multiple types of chairs, shoes, etc.). This could prompt the suggestion that this part of the visual cortex is important for the perception of faces. A manipulation study could enhance activity in the prefrontal cortex of monkeys and observe an improvement in tasks that require careful attention to visual information.

Many journals require “significance statements” in which authors summarize the importance of their studies to a broader audience. In the instances of the previous paragraph, the authors could say something like this: 1) the insula contributes to conscious drug urges and to decision-making processes that precipitate relapse; 2) the fusiform gyrus (the particular area of visual cortex that responds vigorously to faces) is involved in face perception; and 3) the prefrontal cortex enhances performance of behaviors that are challenging and require attention.

Figure 1.2. Because little is known about how brain mechanisms bring about behaviors, neuroscientists permeate their papers with “filler” verbs as listed above, most of which do not add substantive content to the statements made. Figure from Krakauer (2017).

The examples above weren’t gratuitous; all were important studies published in respected scientific journals[10]. Although these were rigorous experimental studies, they don’t quite inform about the underlying mechanisms[11]. In fact, if one combs the peer-reviewed literature, one finds a plethora of filler terms[12] – words like “contributes”, “involved”, and “enhances” above (Figure 1.2) – that stand in for the processes we presume did the “real” work. This is because, by and large, neuroscience studies don’t sufficiently determine, or even strongly constrain, the underlying mechanisms that link brain to behavior.

Scientists strive to discover the mechanisms supporting the phenomena they study. But what precisely is a mechanism? Borrowing from the philosopher William Bechtel, it can be defined as “a structure performing a function in virtue of its parts, operations, and/or organization. The functioning of the mechanism is responsible for one or more phenomena”[13]. Rather abstract, of course, but in essence a mechanism spells out how something happens. The more clear-cut we can be about it, the better; in physics, for example, precision takes the form of mathematical equations. Note that mechanisms and explanations always operate at some level of description. A typical explanation of combustion motors in automobiles will invoke pistons, fuel, controlled explosions, and so on. It will not discuss these phenomena in terms of particle physics, for instance; it won’t invoke electrons, protons, or neutrons.

We currently lack an understanding of most brain science phenomena. Therefore, when an experiment finds that changes occur in, say, the amygdala during classical aversive conditioning (learning that a once-innocuous stimulus is now predictive of a shock; see Chapter 5), we might find that cell responses there increase in parallel with the observed behavior – as the behavior is acquired, cell responses concomitantly increase. Although this is a very important finding, it remains relatively shallow in clarifying what’s going on. If, via a series of studies, we come to discern how amygdala activity increases, decreases, or stays the same as learning changes accordingly, we are closer to legitimately saying that we grasp the underlying mechanisms.

Pleading ignorance

How much do we know about the brain today? In the media, there is no shortage of news about novel discoveries explaining why we are all stressed, overeat, or cannot stick to New Year’s resolutions. General-audience books on brain and behavior are extremely popular, even if we don’t count the ubiquitous self-help books, themselves loaded with purported insights from brain science. And judging from the size of graduate school textbooks (some of which are even hard to lift), current knowledge is a deep well.

In reality, we know rather little. What we’ve learned barely scratches the surface.

Consider, for example, a recent statement by an eminent neuroscientist: “Despite centuries of study of brain–behavior relationships, a clear formalization of the function of many brain regions, accounting for the engagement of the region in different behavioral functions, is lacking”[14]. A clear-headed description of our state of ignorance was given by Ralph Adolphs and David Anderson, both renowned professors at the California Institute of Technology, in their book The Neuroscience of Emotion:[15]

 We can predict whether a car is moving or not, and how fast it is moving, by ‘imaging’ its speedometer. That does not mean that we understand how an automobile works. It just means that we’ve found something that we can measure that is strongly correlated with an aspect of its function. Just as with the speedometer, imaging [measuring] activity in the amygdala (or anywhere else in the brain), in the absence of further knowledge, tell us nothing about the causal mechanism and only provides a ‘marker’ that may be correlated with an emotion.

Although these authors were discussing the state of knowledge regarding emotion and the brain, it’s fair to say that their summary applies to neuroscience more generally – the science of brain and behavior is still in its (very) early days.

The gap – no, gulf – between scientific knowledge and how it is portrayed in the general media is sizeable indeed. This applies not only to pieces in the popular magazines found in medical offices, but also to serious articles in, say, the New York Times or The Guardian, newspapers of some heft. The problem even extends to most science communication books, especially those with a more clinical or medical slant.

Mechanisms and complexity in biology

How does something work? As discussed above, science approaches this question by trying to work out mechanisms. We seek “machine-like” explanations, much like describing how an old, intricate clock functions. Consider a Rube Goldberg apparatus (for an example, see Figure 1.3), accompanied by directions on how to use it to turn a book page[16]:

Figure 1.3. Rube Goldberg apparatus as an example of mechanical explanation. The text describes another example.

(1) Turn the handle on a toy cash register to open the drawer.

(2) The drawer pushes a golf ball off a platform, into a small blue funnel, and down a ramp.

(3) The falling golf ball pulls a string that releases the magic school bus (carrying a picture of Rube Goldberg) down a large blue ramp.

(4) Rube’s bus hits a rubber ball on a platform, dropping the ball into a large red funnel.

(5) The ball lands on a mousetrap (on the orange box) and sets it off.

(6) The mousetrap pulls a nail from the yellow stick.

(7) The nail allows a weight to drop.

(8) The weight pulls a cardboard ‘cork’ from an orange tube.

(9) This drops a ball into a cup.

(10) The cup tilts a metal scale and raises a wire.

(11) The wire releases a ball down a red ramp.

(12) The ball falls into a pink paper basket.

(13) The basket pulls a string to turn the page of the book.

The “explanation” above works because it provides a causal narrative: a series of cause-and-effect steps that slowly but surely lead to the outcome. Although this example is artificial of course (no one would turn a page like that), it epitomizes a style of explanation that is the gold standard of science.

Yet, biological phenomena frequently involve complex, tangled webs of explanatory factors[17]. Consider guppies, small fishes native to streams in South America, which show geographical variation in many kinds of traits, including color patterns. To explain the morphological and behavioral variation among guppies, the biologist John Endler suggested that we consider a “network of interactions” (Figure 1.4). The key point was not to focus on the details of the interactions, but the fact that they exist. Complex as it may look, Endler’s network is “simple” as far as biological systems go. It doesn’t involve bidirectional influences (double-headed arrows), that is, those in which A affects B and B affects A in turn (see Chapter 8). Yet most biological systems are organized in exactly that way.

Figure 1.4. Multiple explanatory factors that influence morphological and behavioral variation among South American guppies. Figure from Endler (1995).

Contrast this state of affairs with the vision encapsulated by Isaac Newton’s statement that “Truth is ever to be found in simplicity, and not in the multiplicity and confusion of things”[18]. This stance is such an integral part of the canon of science as to constitute a form of First Commandment. Newton himself was building on the shoulders of René Descartes, the French polymath who helped canonize reductionism (Chapter 4) as part of Western thinking and philosophy. To Descartes, the world was to be regarded as a clockwork mechanism. That is to say, in order to understand something it is necessary to investigate the parts and then reassemble the components to recreate the whole[19] – the essence of reductionism. Fast-forward to the second half of the twentieth century. The dream of extending the successes of the Cartesian mindset captivated biologists. As Francis Crick, one of the co-discoverers of the structure of DNA, put it, “The ultimate aim of the modern movement in biology is to explain all biology in terms of physics and chemistry”[20]. Reductionism indeed.

So, where is neuroscience today? The mechanistic tradition set off by Newton’s Principia – arguably the most influential scientific achievement ever – is a major conceptual driving force behind how brain scientists think. Although many biologists view their subject matter as different from physics, for example, scientific practice is very much dominated by a mechanistic approach. The present book embraces a different way of thinking, one that revolves around ideas of “collective phenomena,” networks, and complexity. The book is as much about what we know about the brain as it is a text to stimulate thinking about the brain as a highly complex network system.

Before we can start our journey, we need to define a few terms and learn a little about anatomy.


[1] The full quote from the paper abstract was “we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography” (Glasser et al., 2016; p. 171).

[2] The terms “area” and “region” are not distinguished in the book. Neuroscientists more commonly use the former to specify putatively well-delineated parts.

[3] For his work on the structure of the nervous system, Ramon y Cajal was awarded the Nobel Prize in 1906.

[4] Burdach actually described what is currently called the “basolateral amygdala”. Other parts were added later by others, notably Johnston (1923). See Swanson and Petrovich (1988).

[5] When communicated by the media, neuroscience findings are almost exclusively phrased in highly modular terms. We’ve all heard headlines about the amygdala being the “fear center in the brain”, the existence of a “reward center,” as well as “spots” where memory, language, and so on, take place. Whereas the media’s tendency to oversimplify is clearly at play here, neuroscientists are at fault, too.

[6] Watts and Strogatz (1998).

[7] Barabási and Albert (1999).

[8] For a related discussion, see Krakauer et al. (2017).

[9] Naqvi et al. (2007).

[10] Addiction: Naqvi et al. (2007); faces: Kanwisher et al. (1997); attention: Noudoost et al. (2010).

[11] But the stimulation studies described by Noudoost et al. (2010) come closest.

[12] Krakauer et al. (2017).

[13] Bechtel (2008, p. 13).

[14] Genon et al. (2018, p. 362). Although the statement refers to “many regions,” the point applies to most if not all regions.

[15] Adolphs, R., & Anderson, D. J. (2018, p. 31).

[16] Instructions 1-13 are verbatim from Woodward (2013, p. 43).

[17] Example borrowed from Striedter (2005) based on the work by Endler (1995).

[18] Cited by Mazzocchi (2008, p. 10).

[19] Mazzocchi (2008, p. 10).

[20] Mazzocchi (2008, p. 10).