The brain is not hierarchically organized: Is it a “small world” or actually a “tiny world”?

Engineers think of systems in terms of inputs and outputs. In a steam engine, heat (input) applied to water produces steam, and the force generated pushes a piston back and forth inside a cylinder; the pushing force is transformed into rotational force (output) that can be used for other purposes. Reasoning in terms of input-output relationships became even more commonplace with the invention of computers and the concept of a software program. Thus, it’s only natural to consider the brain in terms of the “inflow” and “outflow” of signals tied to sensory processing and motor acts. During sensory processing, energy of one kind or another is transduced into action potentials, which reach the cortex and are further processed. During motor acts, activity from the cortex descends to the brainstem and spinal cord, eventually moving muscles. Information flows in for perception and flows out for action.

Let’s describe a substantially different view based on what I call functionally integrated systems. To do so, it helps to discuss six broad principles of brain organization. To anticipate, some of the consequences of the principles are as follows: the brain’s anatomical and functional architectures are highly non-modular; signal distribution and integration are the norm, allowing the confluence of information related to perception, cognition, emotion, motivation, and action; and, the functional architecture is composed of overlapping networks that are highly dynamic and context-sensitive[1].

Principle 1: Massive combinatorial anatomical connectivity

Dissecting anatomical connections is incredibly painstaking work. Chemical substances are injected at a specific location and, as they diffuse along axons, traces of the molecules are detected elsewhere. After diffusion stabilizes (in some cases, it takes weeks), tissue is sectioned in razor-thin slices that are further treated chemically and inspected, one by one. Because the slices are very thin, researchers focus on examining particular target regions. For example, one anatomist may make injections in a few sites in parietal cortex, and examine parts of lateral prefrontal cortex for staining that indicates the presence of an anatomical connection. Injection by injection, study by study, neuroanatomists have compiled enough information to provide a good idea of the pathways crisscrossing the brain.

Figure 1. A graph is a mathematical object that can represent arbitrary collections of elements (person, computer, genes), called nodes (circles), and their relationships, called edges (lines joining pairs of nodes).

Although anatomical knowledge of pathways (and their strengths) is incomplete, the overall picture is one of massive connectivity. This is made clearer when computational analyses are used to combine the findings across a large number of individual studies. A field of mathematics that comes in handy here is called graph theory, which has become popular in the last two decades under the more appealing term of “network science.” Graphs are very general abstract structures that can be used to formalize the interconnectivity of social, technological, or biological systems. They are defined by nodes and the links between them, called edges (Figure 1). A node represents a particular object: a person in a social group, a computer in a technological network, or a gene in a biological system. Edges indicate a relationship between the nodes: people who know each other, computers that are physically connected, or genes with related functions. So, in the case of the brain, areas can be represented by nodes, and the edges interlinking them represent pathways. (A so-called directed graph can be used if the direction of the pathways is known; for example, from A to B but not vice versa.)
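To make the abstraction concrete, here is a minimal sketch in Python of a directed graph stored as a dictionary mapping each node to the set of nodes it projects to. The region names and pathways below are invented for illustration, not actual anatomy:

```python
# A directed graph as a mapping: node -> set of nodes it projects to.
# Region names and edges are hypothetical, for illustration only.
brain_graph = {
    "V1": {"V2"},
    "V2": {"V1", "parietal"},
    "parietal": {"V2", "prefrontal"},
    "prefrontal": {"parietal", "striatum"},
    "striatum": {"prefrontal"},
}

def has_edge(graph, a, b):
    """Is there a direct pathway from region a to region b?"""
    return b in graph.get(a, set())
```

Because the graph is directed, `has_edge(graph, "A", "B")` can be true while the reverse is false, which is how one-way projections are captured.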

Graph analysis demonstrates that brain regions are richly interconnected, a property of both cortical and subcortical regions. In the cortex, this property is not confined to the prefrontal cortex (which is often highlighted in this regard), but is observed for all lobes. Indeed, the overall picture is one of enormous connectivity, leading to combinatorial pathways between sectors. In other words, one can go from point A to point B in multiple ways, much like navigating a dense set of roads. Computational neuroanatomy has greatly refined our understanding of connectivity.

High global accessibility. Rumors spread more or less effectively depending on the pattern of communication. A rumor will spread faster and farther among a community of college students than among faculty professors, assuming that the former are more highly interconnected than the latter. This intuition is formalized by a graph measure called efficiency, which captures how effectively information spreads across the members of a network, even those who are least connected (in the social setting, the ones who know or communicate the least with other members). How about the brain? Recent studies suggest that its efficiency is very high. Signals have the potential to travel efficaciously across the entire organ, even between parts not near each other, and even between parts that are not directly connected; in that case, the connection is indirect, such as travelling through C, and possibly D, to get from A to B. The logic of the connectivity structure seems to point to a surprising property: physical distance matters little.
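In graph-theoretic terms, global efficiency is commonly defined as the average of the inverse shortest-path lengths over all pairs of nodes: pairs that are far apart (or unreachable) contribute little or nothing, so a high value means everything is within a few steps of everything else. A sketch in pure Python, using breadth-first search to find shortest paths:

```python
from collections import deque

def shortest_path_lengths(graph, source):
    """Breadth-first search: number of edges from source to each reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def global_efficiency(graph):
    """Average inverse shortest-path length over all ordered pairs of nodes."""
    nodes = list(graph)
    n = len(nodes)
    total = 0.0
    for src in nodes:
        dist = shortest_path_lengths(graph, src)
        for tgt in nodes:
            if tgt != src and tgt in dist:
                total += 1.0 / dist[tgt]
    return total / (n * (n - 1))
```

On a fully connected graph the measure equals 1; the sparser and more stretched-out the network, the closer it falls to 0.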

For many neuroscientists, this conclusion is surprising, if not counterintuitive. Their training favors a processing-is-local type of reasoning. After all, areas implement particular functions. That is to say, they are the proper computational units – or so the thinking goes (see chapter 4). This interpretation is reinforced by the knowledge that anatomical pathways are dominated by short-distance connections. In fact, 70% of all the projections to a given locus on the cortical sheet arise from within 1.5 to 2.5 mm (to give you an idea, parts of occipital cortex toward the back of the head are a good 15 cm away from the prefrontal cortex). Doesn’t this dictate that processing is local, or quasi-local? This is where math, and the understanding of graphs, helps sharpen our thinking.

In a 1998 paper entitled “Collective dynamics of ‘small-world’ networks” (cited tens of thousands of times in the scientific literature), Duncan Watts and Steven Strogatz showed that systems made of locally-clustered nodes (those that are connected to nearby nodes), but that also have a small number of random connections (which link arbitrary pairs of nodes), allow all nodes to be accessible within a small number of connectivity steps[2]. Starting at any arbitrary node, one can reach another, no matter which one, by traversing a few edges. Helping make the paper a veritable sensation, they called this property “small-world”. The strength of their approach was to show that this is a hallmark of graphs with such a connectivity pattern, irrespective of the type of data at hand (social, technological, or biological). Watts and Strogatz emphasized that the arrangement in question – what’s called network topology – allows for enhanced signal-propagation speed, computational power, and synchronizability between parts. The paper was a game changer in how one thinks of interconnected systems[3].
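The effect can be demonstrated in a few lines. The sketch below is a simplified variant of the Watts–Strogatz construction (it adds random shortcuts rather than rewiring existing edges, which they also discuss): a ring lattice with purely local connections is compared to the same lattice after a handful of random long-range edges are added, and typical distances collapse.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Undirected ring: each node connects to its k nearest neighbors on each side."""
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            graph[i].add(j)
            graph[j].add(i)
    return graph

def average_path_length(graph):
    """Mean shortest-path length (in edges) over all reachable ordered pairs."""
    total, pairs = 0, 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

random.seed(0)
lattice = ring_lattice(100, 2)      # purely local connections
rewired = ring_lattice(100, 2)
for _ in range(10):                 # add a handful of random shortcuts
    a, b = random.sample(range(100), 2)
    rewired[a].add(b)
    rewired[b].add(a)
print(average_path_length(lattice) > average_path_length(rewired))  # True
```

Ten shortcuts among 100 nodes are enough to shrink the typical number of steps between arbitrary nodes, which is the essence of the small-world property.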

In the 2000s, different research groups proposed that the cerebral cortex is organized as a small world. If correct, this view means that signal transmission between parts of the cortex can be obtained via a modest number of connecting steps. It turns out that the brain is more interconnected than would be necessary for it to be a small world[4]. That is to say, there are more pathways interconnecting regions than the minimum needed to attain efficient communicability. So, while it’s true that local connectivity predominates within the cortex, there are enough medium- and long-range connections – in fact, more than the “minimum” required – for information to spread around remarkably well.

Connectivity core (“rich club”). A central reason the brain is not a small world is that it contains a subgroup of regions that is very highly interconnected. The details are still being worked out, not least because knowledge of anatomical connectivity is incomplete, especially in humans.

In 2010, the computer scientists Dharmendra Modha and Raghavendra Singh gathered data from over four hundred anatomical tracing studies of the macaque brain[5]. Unlike most investigations, which have focused on the cortex, they included data on subcortical pathways, too (Figure 2). Their computational analyses uncovered a “tightly integrated core circuit” with several properties: (i) it is a set of regions that is far more tightly integrated (that is, more densely connected) than the overall brain; (ii) information likely spreads more swiftly within the core than through the overall brain; and (iii) brain communication relies heavily on signals routed through the core. The proposed core circuit was distributed throughout the brain; it wasn’t just in the prefrontal cortex, a sector often underscored for its integrative capabilities, or some other anatomically well-defined territory. Instead, the regions were found in all cortical lobes, as well as subcortical areas such as the thalamus, striatum, and amygdala.

Figure 2. Massive interconnectivity between all brain sectors. Computational analysis of anatomical connectivity by collating pathways (lines) from hundreds of studies. To improve clarity, pathways with a common origin or destination are bundled together (otherwise the background would be essentially black given the density of connections). Figure from Modha and Singh (2010).

In another study, a group of neuroanatomists and physicists collaborated to describe formal properties of the monkey cortex[6]. They discovered a set of 17 brain regions across parietal, temporal, and frontal cortex that is heavily interconnected. For these areas, 92% of the connections that could potentially exist between region pairs have indeed been documented in published studies. So, in this core group of areas, nearly every one of them can talk directly to all others, a remarkable property. In a graph, when a subset of its nodes is considerably better connected than the others, it is sometimes referred to as a “rich club,” in allusion to the idea that in many societies a group of wealthy individuals tends to be disproportionately influential.
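The 92% figure is a connection density: among n regions there are n × (n − 1) ordered pairs, and the density is the fraction of those pairs for which a direct pathway has been documented. A sketch of the calculation, on an invented toy graph (the region labels are hypothetical):

```python
def connection_density(graph, regions):
    """Fraction of possible directed connections among `regions` that exist.

    With n regions there are n * (n - 1) ordered pairs to check.
    """
    regions = list(regions)
    n = len(regions)
    present = sum(
        1
        for a in regions
        for b in regions
        if a != b and b in graph.get(a, set())
    )
    return present / (n * (n - 1))

# Toy example: three regions, five of the six possible connections present.
toy = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A"}}
print(connection_density(toy, ["A", "B", "C"]))  # ~0.83
```

A density near 1 within a subset of nodes, combined with sparser connectivity elsewhere, is the signature of a rich club.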

Computational analysis of anatomical pathways has been instrumental in unravelling properties of the brain’s large-scale architecture. We now have a vastly more complete and broader view of how different parts are linked with each other. At the same time, we must acknowledge that the current picture is rather incomplete. For one, computational studies frequently focus on cortical pathways. As such, they are cortico-centric, reflecting a bias of many neuroscientists who tend to neglect the subcortex when investigating connectional properties of the brain. In sum, the theoretical insights by network scientists about “small worlds” demonstrated that signals can influence distal elements of a system even when physical connections are fairly sparse. But cerebral pathways vastly exceed what it takes to be a small world. Instead, what we find is a “tiny world.”

[1] The ideas in this chapter are developed more technically elsewhere (Pessoa, 2014, 2017).

[2] Watts and Strogatz (1998).

[3] Another very influential paper was published soon after by Barabási and Albert (1999). The work was followed by an enormous amount of research in the subsequent years.

[4] Particularly useful here is the work by Kennedy and collaborators. For discussion of mouse and primate data, see Gămănuţ et al. (2018).

[5] Modha, D. S., & Singh, R. (2010). Network architecture of the long-distance pathways in the macaque brain. Proceedings of the National Academy of Sciences, 107(30), 13485-13490.

[6] Markov, N. T., Ercsey-Ravasz, M., Van Essen, D. C., Knoblauch, K., Toroczkai, Z., & Kennedy, H. (2013). Cortical high-density counterstream architectures. Science, 342(6158).