The Networked Brain

Chapter 8: Complex systems: the science of interacting parts

What kind of object is the brain? The central premise of this book is that it cannot be neatly decomposed into a set of parts, each of which can be understood on its own. Instead, it's a highly networked system that needs to be understood differently. The language required is that of complex systems, which we now describe in intuitive terms. Whereas mathematics is needed to formalize it, illustrating its central concepts provides the reader with "intuition pumps"[1]. Thinking in terms of complex systems frees us from the shackles of linear thinking, enabling explanations built with "collective computations" that elude simplistic narratives.

Kelp carpets

Many coastal environments are inhabited by a great variety of algae, including a brown seaweed called kelp[2]. The distribution of kelp can be very uneven, with abundance in some places and near absence in others. Ecologists noticed that in some coastal communities with tide pools and shallow waters largely devoid of kelp and other algae, killer whales (also called orcas) are plentiful. The orcas don't eat kelp, so the negative relationship between the two must be purely incidental, right? Sea otters are frequent members of coastal habitats, too, and their population has rebounded strongly since they gained protected status in 1911, a time when they numbered only 2,000 worldwide. Could they be the ones responsible for the lack of kelp in some areas? Sea otters don't consume kelp either, so what could be going on? It turns out that sea urchins are one of the most prevalent grazers of algae and kelp. And otters snack on sea urchins in large quantities. Therefore, because the presence of otters suppresses the urchin population, they have a direct impact on the kelp carpeting along the coast: the more otters, the fewer the urchins, and the richer the kelp. We see that otters and kelp are linked via a double-negative logic: if you suppress a suppressor, the net effect is an increase (of kelp in this case).

But how about the orcas? The preferred meals of killer whales are sea lions and other whales, which are larger and richer in the fat content they need. But in some places these favored foods have become scarce. And with the large increase of otters given conservation efforts, the orcas appear to have turned to them as replacement meals. Altogether, we have a four-way relationship: ↑killer whales → ↓otters → ↑urchins → ↓kelp. This cascade is not a one-of-a-kind illustration of indirect effects; it's at the core of how ecological systems function. In other words, complex webs of interrelationships with many indirect effects – in fact, with multiple-step-removed indirect effects – are pretty much the norm.

Bacterial decision making

Bacteria are extremely simple creatures. But when they are grown in a medium containing glucose and carbon dioxide, they can make all twenty kinds of amino acids, which are the building blocks of proteins, the molecules that do much of the heavy lifting in living things[3]. When a specific amino acid is added to a colony of bacteria that is synthesizing amino acids, the biosynthesis of that specific amino acid stops soon thereafter. But how do bacteria know that they no longer need to synthesize that particular amino acid?

In the 1950s, biochemists started to understand that amino acids are manufactured via several steps, starting with an initial "precursor" that is modified by a series of reactions leading to the amino acid. We can represent this in the following way: P → I1 → I2 → … → amino acid, where P stands for the chemical precursor and I for the various intermediate products. They also discovered that introducing a certain amino acid (call it amino acid A) could terminate the synthesis of other amino acids, indicating that amino acid A was having a negative effect somewhere along a synthesis pathway like the one above. More interesting still, providing the amino acid isoleucine inhibited the production of isoleucine itself. If the system is producing a specific amino acid (isoleucine), one could imagine that adding more of it would further increase the overall amount of this compound. But just the opposite was found. Thus, the system automatically adjusted itself to prevent the overproduction of isoleucine.

This is an example of negative feedback regulation. Feedback seems simple enough, thermostats and auto-pilot systems being common in the modern world. However, it embodies a fundamental property: the ability of a system, even a very basic one, to regulate itself. In its simplest form the idea is rather benign and unproblematic. But feedback muddies our intuitions about causation.

Figure 8.1. Feedforward, feedback, and interacting systems. System 1 is purely feedforward, while system 2 contains negative feedback. System 3 contains two chains, one that produces A, one that produces B. Each of them contains negative feedback. The chains are also positively coupled so that production of A encourages production of B, and vice versa.

Consider again a system without feedback (call it system 1) (Figure 8.1). A precursor P causes some intermediate product I, which produces another chemical, and so on, until the last intermediate in the chain produces A. As far as causation goes, system 1 is straightforward. Now, consider system 2, which includes negative feedback. Here, if there's too much of A, the production of A itself will be inhibited so that its concentration will not increase further. System 2 is not hard to understand, but with this small change (A loops back onto the system), A has a causal effect on itself, which means that A is both an effect (A is produced by the system) and a cause (A affects itself). Let's up the ante now and consider system 3, which has feedback loops and interactions between two amino-acid pathways. Here, amino acids A and B affect and reinforce each other; production of A stimulates the growth of B, and production of B stimulates the growth of A. But the production and growth of A and B are not unbounded because they have negative loops within their respective systems. Clearly, mapping out the mechanisms of production of the amino acids in system 3 is substantially more challenging than in the other two examples.
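To make the contrast concrete, here is a minimal numerical sketch (with arbitrary, purely illustrative rate constants, not taken from the book) comparing a purely feedforward chain like system 1 with the same chain under negative feedback, as in system 2:

```python
# Minimal sketch (illustrative rate constants): system 1 is purely feedforward,
# while in system 2 the final product A inhibits the first step of its own chain.

def simulate(feedback: bool, steps: int = 1000, dt: float = 0.1) -> float:
    precursor_supply = 1.0   # constant supply of precursor P
    intermediate = 0.0       # pooled intermediates I1, I2, ...
    product_a = 0.0          # final product A (e.g., an amino acid)
    for _ in range(steps):
        # With feedback, high levels of A throttle the first reaction.
        inhibition = 1.0 / (1.0 + product_a) if feedback else 1.0
        d_intermediate = precursor_supply * inhibition - 0.5 * intermediate
        d_product = 0.5 * intermediate - 0.1 * product_a
        intermediate += dt * d_intermediate
        product_a += dt * d_product
    return product_a

print("System 1 (no feedback):   A =", round(simulate(feedback=False), 2))  # ~10.0
print("System 2 (with feedback): A =", round(simulate(feedback=True), 2))   # ~2.7
```

With feedback, the concentration of A settles at a lower, self-limited level rather than simply tracking the precursor supply: the product regulates its own production.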

There is nothing odd, or potentially mysterious, about system 3. All the functioning is mechanistic, in the sense that all parts operate according to the standard rules of chemistry and physics. All that is added is interdependence. Why do we even need to bring this up? In many experimental disciplines researchers are trained to think about causal pathways like that of system 1, and to some extent of system 2. Thus, causation appears to work in relatively simple ways; for example, higher levels of cholesterol "cause" (with multiple intermediate steps) greater heart-disease problems. As elaborated in this chapter, however, systems like system 3 can exhibit "complex" behaviors and "emergent" properties that are qualitatively different from those seen in simpler cases. And if the systems studied in biology are heavily interdependent, the field needs a change in perspective to move forward.

Of predators and prey

The Italian biologist Umberto D'Ancona was a prolific scientist who published over 300 papers and described numerous species. While studying fish catches in the Adriatic Sea, he noticed that the abundance of certain species increased markedly during the years of World War I[4], a time when fishing intensity was reduced because of the war. Puzzled by the observation, he discussed it with the Italian mathematician and physicist Vito Volterra, who had become interested in mathematical biology and whose daughter D'Ancona happened to be courting (incidentally, the two would later marry). It's worth pointing out that at the time D'Ancona made his observations ecology was not yet a systematic field of study (Charles Elton's now-classic Animal Ecology was published in 1927).

In the early 1920s, Volterra, and independently Alfred Lotka, showed how the interactions between a predator and its prey could be written out precisely in mathematical form (in Volterra's case, the prey being food fish and the predators the larger fish that eat them). While we don't need to concern ourselves with the equations here, the model specifies that the number of predators, y, decays in the absence of prey and increases based on the rate at which they consume prey. At the same time, the number of prey, x, grows if left unchecked and decays given the rate at which it is preyed upon. The key point here is that x depends on y and, conversely, y depends on x. This interdependence means that we can eschew a description in terms of simple causation (say, "predation causes prey numbers to fall") and consider the predator-prey system as a unit. Put differently, predator and prey numbers evolve together, and as such characterizing and understanding them implies studying the "system," predator plus prey.
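For the mathematically inclined reader, the textbook form of the Lotka-Volterra equations (a standard formulation with conventional symbols, not reproduced from this chapter) captures the verbal description above:

```latex
\frac{dx}{dt} = \alpha x - \beta x y,
\qquad
\frac{dy}{dt} = \delta x y - \gamma y
```

Here x is the prey population and y the predator population; α is the prey's growth rate, β the rate at which predators consume prey, δ the rate at which consumed prey translate into new predators, and γ the predators' death rate in the absence of prey. Note that each equation contains both x and y: neither population can be described on its own.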

By doing so, we aren’t saying that there are no causal interactions taking place. Fishermen do kill fish and have an immediate impact on their population. But we can treat the predator-plus-prey pair as the object of interest. Whereas this is a relatively minor conceptual maneuver in this case, it will prove instrumental when a larger constellation of actors interacts.

Against reductionism

The Lotka-Volterra predator-prey model formalized the relationship between a single predator species and a single prey species. Of course, natural habitats are not confined to two species but, as the killer whale and kelp example illustrated, multi-species interactions are the norm. Thus, unraveling an entire set of interconnections is required for deeper understanding.

The prevailing modus operandi of science can be summarized as follows: "explain phenomena by reducing them to an interplay of elementary units which could be investigated independently of each other."[5] Such a reductionist approach reached its zenith, perhaps, with the success of chemistry and particle physics in the twentieth century. In the present century, its power is clearly evidenced by dramatic progress in molecular biology and genetics. At its root, this attitude to science "resolves all natural phenomena into a play of elementary units, the characteristics of which remain unaltered whether they are investigated in isolation or in a complex".

In the 1940s and 1950s, "systems thinking" started to offer an alternative mental springboard. Scholars surmised that many objects of inquiry could be understood as collections of interacting parts, an approach that could be applied to physical, biological, and even social problems. The framework that developed, which some called complex systems theory, doesn't challenge the status and role of "elementary" units (no one was about to rescind Nobel prizes such as Ernest Rutherford's!). Again in the words of one of its chief proponents, von Bertalanffy, quoted in the previous paragraph, it "asserts the necessity of investigating not only parts but also relations of organization resulting from a dynamic interaction and manifesting themselves by the difference in behavior of parts in isolation and in the whole organism".

What does it mean to say "difference in behavior of parts in isolation and in the whole organism"? Enter emergence, a term originally coined in the 1870s to describe instances in chemistry and physiology where new and unpredictable properties appear that aren't clearly ascribable to the elements from which they arise[6]. When amino acids organize themselves – that is, self-organize – into a protein, the protein can carry out enzymatic functions that the amino acids on their own cannot. More importantly, the amino acids behave differently as part of the protein than they would on their own. But it's actually more than that. The dynamics of the system (that is, the protein) closes off some of the behaviors that would be open to the components (amino acids) were they not captured by the overall system. Once folded up into a protein, the amino acids find their activity regulated – they behave differently. Thus, one definition of emergence is as follows: a property that is observed when multiple elements interact but that is not present at the level of the individual elements. Accordingly, it becomes meaningful to talk about two levels of description, a lower level of elements and a higher level of the system.

The growth of the complex systems approach quickly popularized expressions such as "system," "gestalt," "organism," "wholeness," and of course the much-used "the whole is more than the sum of its parts." In a manner that anticipated debates that would persist for decades, and still do, von Bertalanffy stated as early as 1950 that "these concepts have often been misused, and they are of a vague and somewhat mystical character."[7] Even more presciently, he said that the "exact scientist therefore is inclined to look at these conceptions with justified mistrust."

Consider research in biology. The stunning developments of molecular biology, for one, raise the hope that all seemingly emergent properties can eventually be "explained away" and thereby deduced from lower-level characteristics and laws – the "higher" level can therefore be reduced to the "lower" level. Reduction to basic physics and chemistry becomes, then, the ultimate goal of scientific explanations. In this view, emergence is relegated to a sort of "promissory reductionism" – if not outright discredited – given that at a more advanced stage of science emergent properties will be entirely captured by lower-level properties and laws. No doubt, it is extremely hard to argue against this line of reasoning. As the philosopher Terrence Deacon nicely states, looking at the world in terms of constituent parts of larger entities seems like an "unimpeachable methodology." It is as old as the pre-Socratic Greek thinkers and remains almost an "axiom of modern science."[8]

Both scientifically and philosophically speaking, the friction caused by the idea of emergence arises because it's actually unclear what precisely emerges. For example, what is it about amino acids as parts of proteins that differs from free-floating ones? The question revolves around the exact status of "emergent properties." Philosophers formalize the question by talking about the ontological status of emergence, that is, the kind of existence the higher-level properties have. Do emergent properties point to the existence of new laws that are not present at the lower level? Is something fundamentally irreducible at stake? These questions are so daunting that they remain by and large unsolved – and subject to vigorous intellectual battles.

Figure 8.2. Levels of explanation and scientific reduction. (A) Describing airplane aerodynamics in terms of elementary particles such as quarks is clearly not very useful. (B) Molecular configurations in three dimensions may be investigated by determining how their properties depend on chemical interactions between amino acids (which themselves determine protein structure).

Fortunately, we don't need to crack the problem here, and can instead use lower and higher levels pragmatically when they are epistemically useful – when the theoretical stance advances knowledge. To provide an oversimplified example, we don't need to worry about the relationship between quarks and aerodynamics. Massive airplanes are of course made of matter, which is an agglomeration of elementary particles such as quarks (when put together, quarks form things like protons and neutrons, the stable components of atomic nuclei). But when engineers design a new airplane, they consider the laws of aerodynamics, the study of the motion of air, and particularly the behavior of a solid object, such as an airplane wing, in air – they need no training at all in particle physics! So there's no need to really agonize about the "true" relationship between aerodynamics and particle physics. The practical thing to do is simply to study the former.

One could object to the example above because the inherent levels of particle physics and aerodynamics are far removed (Figure 8.2), one level too “micro” and the other too “macro.” More interesting cases present themselves when the constituent parts and the higher-level objects are closer to each other. For example, the behavior of an individual ant and the collective behavior of the ant colony; or the flight of a pelican and the V-shape pattern of the flock. And of course, amino acids and proteins. As the researcher Alicia Juarrero says, it’s particularly intriguing when purely “deterministic systems exhibit organized and apparently novel properties, seemingly emergent characteristics that should be predictable in principle, but are not in fact”[9]. And it’s all the more fascinating when the systems involved are made of very simple parts that obey straightforward rules. Understanding higher-level properties without having to solve the ontological question – are these properties truly new? – is clearly beneficial.

We encountered John von Neumann previously in Chapter 2. Not only was he one of the major players in defining computer science as we know it, but his contributions to mathematics and physics are astounding. For instance, in 1932 he was the first to establish a rigorous mathematical framework for quantum mechanics. One of his smaller contributions was the invention of cellular automata, and this without the aid of computers, just with pencil and paper. (Another "minor" contribution by von Neumann was the invention of game theory, the study of mathematical models of conflict and cooperation.) A simple way to think of cellular automata is to imagine a piece of paper onto which a regular grid is drawn. Each "cell" of the grid can be in one of two states (active or inactive, or 0 or 1; think of a computer bit). Cells transition between states according to simple but precise rules that depend on the state of the cells' neighborhood. Different types of spatial neighborhood arrangements can be utilized, but consider the simplest case, with just the cells to the left and to the right of a reference cell. A rule could turn the center cell active if either neighboring cell is active (called the OR rule); another rule could turn the center cell active only if both neighbors are active (called the AND rule). If the cells start at some state, for instance a random configuration of 0s and 1s, one can let them change states according to a specific set of rules and observe the overall behaviors that ensue (imagine a screen with pixels turning on and off). Remarkably, even simple cellular automata can exhibit rather complex behaviors, including the formation of hierarchically organized patterns that fluctuate periodically.
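As a concrete sketch (illustrative code, not from the text), the one-dimensional neighborhood and the OR and AND rules described above fit in a few lines:

```python
# Illustrative sketch: a one-dimensional cellular automaton in which each cell
# updates based only on its left and right neighbors, using the OR rule
# (active if either neighbor is active) or the AND rule (active only if both are).
import random

def step(cells, rule="OR"):
    n = len(cells)
    new_cells = []
    for i in range(n):
        left = cells[(i - 1) % n]    # wrap around at the edges
        right = cells[(i + 1) % n]
        if rule == "OR":
            new_cells.append(1 if (left or right) else 0)
        else:  # "AND"
            new_cells.append(1 if (left and right) else 0)
    return new_cells

cells = [random.randint(0, 1) for _ in range(60)]   # random initial configuration
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule="OR")
```

These two rules are deliberately trivial; richer update rules on the same one-dimensional grid are what produce the intricate, hierarchically organized patterns mentioned above.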

Although cellular automata were not widely known outside computer science circles, the idea was popularized more broadly when the mathematician John Conway invented the Game of Life (or simply, Life). The game has attracted much interest not least because of the surprising ways in which patterns can evolve. From a relatively simple set of rules, some of the observed patterns are reminiscent of the rise, fall, and transformations of a society of living organisms, and have been used to illustrate the notion that "design" and "organization" can emerge in the absence of a designer[10].
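The rules of Life are compact enough to state in code (a sketch assuming the standard formulation: a live cell survives with two or three live neighbors, and a dead cell becomes alive with exactly three):

```python
# Sketch of one update step of the Game of Life on a grid of 0s and 1s.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    return [
        [1 if live_neighbors(r, c) == 3 or (grid[r][c] == 1 and live_neighbors(r, c) == 2) else 0
         for c in range(cols)]
        for r in range(rows)
    ]

# A "glider": a small pattern that travels across the grid as the rule is applied repeatedly.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = life_step(grid)
```

Nothing in these few lines mentions gliders, oscillators, or self-sustaining "organisms," yet all of these appear once the rule is iterated, which is precisely the point about organization without a designer.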

The examples provided by cellular automata, and others discussed in this chapter, suggest that we can adopt a pragmatic stance regarding the "true" standing of emergence. We can remain agnostic about the status itself, yet adopt a complex systems framework to advance the understanding of objects with many interacting elements. Let's discuss some ways in which this viewpoint is taking hold in the field of ecology, the research area in which the predator-prey models of Lotka and Volterra originated.

How do species interact?

Ecology is the scientific study of interactions between organisms and their environment. A major topic of interest centers on the cooperation and competition between species. One may conjure up images of investigators withstanding the blazing tropical sun to study biodiversity in the Amazon, or harsh arctic winters to study fluctuations in the population of polar bears. Although such field work is necessary to gather data, theoretical work is equally needed.

What are the mechanisms of species coexistence?[11] And how does the enormous diversity of species seen in nature persist despite differences in the ability to compete for survival? Diversity indeed. For example, a 25-hectare plot in the Amazon rainforest contains more than 1,000 tropical tree species. As we've seen, in the 1920s mathematical tools to model the dynamics of predator-prey systems were developed. The equations for these systems were further extended and refined in the subsequent decades, and continue to be the object of much research. The study of species coexistence focuses almost exclusively on pairs of competitors, so that when considering large groups of plants or animals, the strategy is to look at all possible couples. For example, one studies 3 pairs when 3 species are involved, or 6 pairs when 4 species are considered; more generally, n(n − 1)/2 pairwise interactions between n species. Do we lose anything when examining only pairwise interactions? Higher-order interactions are missed, as when the effect of one competitor on another depends on the population density of a third species, or an even larger number of them. For example, the interaction between cheetahs and gazelles might be affected by hyenas, as the latter can easily challenge the relatively scrawny cheetahs after the kill, especially when the hyenas are not alone (Figure 8.3).
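To make "higher-order" a bit more concrete, here is one commonly used generalization of the Lotka-Volterra competition equations (a generic textbook-style form, not the specific model of the studies cited in this chapter). The growth of species i depends not only on pairwise terms but also on terms in which a third species modulates the effect of a second:

```latex
\frac{dx_i}{dt} = x_i \left( r_i + \sum_{j} a_{ij}\, x_j + \sum_{j,k} b_{ijk}\, x_j x_k \right)
```

The coefficients a_ij capture pairwise effects of species j on species i, while the b_ijk capture three-way (higher-order) effects; when all the b_ijk are zero, we are back to a purely pairwise description.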

Figure 8.3. Species interactions. (A) Two-way interaction, such as between predator and prey. (B) A higher-order interaction occurs when an additional element affects the way the two-way interaction behaves.

The importance of higher-order effects is that, at times, they lead to predictions that diverge from what would be expected from pairwise interactions alone. In a classic paper entitled "Will a large complex system be stable?" the theoretical biologist Robert May showed formally that community diversity destabilizes ecological systems. In other words, diverse communities lead to instabilities such as the local elimination of certain species. Recent theoretical results show, however, that higher-order interactions can cause communities with greater diversity to be more stable than their species-poor counterparts, contrary to classic theory based on pairwise interactions[12]. These results illustrate that to understand a complex system (a diverse community) of interacting players (species), we must determine (emergent) properties at the collective level (including coexistence and biodiversity). Not only do we need to consider interactions, but we need to describe them richly enough for collective properties to be unraveled.

Neural networks

Ideas about complex systems and the closely related movement of cybernetics didn't take long to start influencing thinking about the brain. For example, W. Ross Ashby outlined in his 1952 book Design for a Brain the importance of stability. Cybernetics researchers were interested in how systems regulate themselves and avoid instability. In particular, when a system is perturbed from its current state, how does it automatically adapt its configuration to minimize the effects of such disturbances? Not long afterward, the field of artificial neural networks (or simply neural networks) started to materialize. The growth of this new area proceeded in parallel with "standard" artificial intelligence. Whereas the latter sought to design intelligent algorithms by capitalizing on the power of newly developed computers, neural networks looked to the brain for inspiration. The general philosophy was simple: collections of simple processing elements, or "neurons," arranged in particular configurations called architectures, generate sophisticated behaviors. And by specifying how the connections between artificial neurons change across time, neural networks learn new behaviors.

Many types of architecture were investigated, including purely feedforward and recurrent networks. In feedforward networks, information flows from an input layer of neurons where the input (for instance, an image) is registered, to one or more intermediate layers, eventually reaching an output layer, where the output is coded (indicating that the input image is a face, say). Recurrent networks, where connections can be both feedforward and feedback, are more interesting in the context of complex systems. In this type of organization, at least some connections are bidirectional and the system can exhibit a range of properties. For example, competition can occur between parts of the network, with the consequent suppression of some kinds of activity and the enhancement of others[13]. Interested in this type of competitive process, in the 1970s Stephen Grossberg, whom we mentioned in the previous chapter, developed Adaptive Resonance Theory. In the theory, a resonance is a dynamical state during which neuronal firings across a network are amplified and synchronized when they interact bidirectionally – they mutually support each other (see Figure 7.6 and accompanying text). Based on the continued development of the theory in the decades since its proposal, these types of bidirectional, competitive interactions have been used to explain a large number of experimental findings across areas such as perception, cognition, and emotion[14].
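As a toy illustration of the kind of competitive, recurrent interaction just described (a minimal sketch with made-up parameters, not Grossberg's actual equations), two units that excite themselves and inhibit each other will typically settle into a state where one suppresses the other:

```python
# Toy sketch: two units with self-excitation and mutual (lateral) inhibition.
# The unit receiving the stronger input tends to win the competition and
# suppress the other, a simple "winner-take-all" dynamic.
def compete(input_a, input_b, steps=500, dt=0.05):
    a = b = 0.0
    for _ in range(steps):
        da = -a + max(0.0, input_a + 0.5 * a - 1.5 * b)  # decay + rectified net input
        db = -b + max(0.0, input_b + 0.5 * b - 1.5 * a)
        a += dt * da
        b += dt * db
    return round(a, 2), round(b, 2)

print(compete(1.0, 0.8))  # roughly (2.0, 0.0): the first unit wins
print(compete(0.8, 1.0))  # roughly (0.0, 2.0): the second unit wins
```

Nothing in either unit "decides" the outcome; the selection is a property of the coupled pair, which is exactly the sense in which recurrent networks exhibit collective behaviors.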

Nonlinear dynamical systems

As we’ve seen, in the second half of the twentieth century complex systems thinking began to flourish and influence multiple scientific disciplines, from the social to the biological. The ideas gained considerable momentum with the development of an area of mathematics called nonlinear dynamical systems. It’s no exaggeration to say that nonlinear dynamical systems provide a language for complex systems. This branch of mathematics studies techniques that allow applied scientists to describe how objects change in time. It all started with the discovery of differential and integral calculus by Isaac Newton and Gottfried Leibniz in the last decades of the seventeenth century. Calculus is the first monumental achievement of modern mathematics and many consider it the greatest advance in exact thinking[15]. Newton, for one, was interested in planetary motion, and used calculus to describe the trajectories of planets in orbit.[16]

Research in dynamical systems revealed that even putatively simple systems can exhibit very rich behaviors. At first, this was rather surprising because mathematicians and applied scientists alike believed that deterministic systems behave in a fairly predictable manner. Because of this intuition, many techniques relied on "linearization," that is, considering a system to be approximately linear (at least when small perturbations are involved). What is a linear system? In essence, it is one that produces an output by summing its inputs: the more the input, the more the output, and in exact proportion to the inputs. Systems like this are predictable and stable, which is desirable when we design a system. When you change the setting on the ceiling fan to "2" it moves faster than at "1"; when set to "3" you don't want it spinning out of control all of a sudden!
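In symbols, this is the standard superposition property (a general textbook definition, not tied to any particular system in this chapter): a system F is linear when, for any inputs x1 and x2 and any constants a and b,

```latex
F(a\,x_1 + b\,x_2) = a\,F(x_1) + b\,F(x_2)
```

Doubling the input doubles the output, and the response to two inputs together is just the sum of the responses to each alone; nonlinear systems need not respect this proportionality.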

The field of nonlinear systems tells us that "linear thinking" is just not enough. Approximating the behavior of objects via linear systems does not do justice to the complexity of behaviors observed in real situations, as most clearly demonstrated by a property called chaos. Confusingly, "chaos" does not refer to erratic or random behavior but, instead, to a property of systems that follow precise deterministic laws yet appear to behave randomly. Although the precise definition of "chaos" is mathematical, we can think of it as describing complex, recurring, yet not exactly repeatable behaviors. (Imagine a leaf floating in a stream, caught between rocks and circling around them in a way that keeps repeating but is never quite identical.) The theoretical developments in nonlinear dynamics were extremely influential because until the 1960s even mathematicians and physicists thought of dynamics in relatively simple terms[17].
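A classic and very compact illustration of deterministic yet unpredictable behavior is the logistic map (a standard example from the nonlinear dynamics literature, not drawn from the text above): the same simple rule, iterated, produces wildly different trajectories from almost identical starting points.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n): a deterministic rule that,
# for r = 4, produces bounded, never exactly repeating trajectories that are
# extremely sensitive to the initial condition.
def logistic_trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # an almost identical starting point
for n in (0, 10, 20, 30):
    # The two trajectories track each other at first, then diverge completely.
    print(n, round(a[n], 4), round(b[n], 4))
```

There is no randomness anywhere in these few lines, yet after a couple of dozen iterations the two trajectories bear no resemblance to each other, which is the hallmark of chaos.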

The field of dynamical systems has greatly enriched our understanding of natural and artificial systems. Even those with relatively simple descriptions can exhibit behaviors that are not possible to predict with confidence. Nonlinear dynamical systems not only contribute to our view of how interacting elements behave, but also define a language and a formal framework to characterize "emergent" behaviors. In a very real sense, they have greatly helped demystify some of the vague notions described in the early days of systems thinking. We now have a precise way to tackle the claim that "the whole is more than the sum of its parts."

The brain as a complex system

The study of complex systems is now a sprawling area encompassing applied and theoretical research. The goal of this chapter was to introduce the reader to some of its central ideas (a rather optimistic proposition without writing an entire book!). Whereas the science of complexity has evolved enormously in the past 70-odd years, experimental scientists are all too often anchored on a foundation that is skeptical of some of the concepts discussed in this chapter. But with the mathematical and computational tools available now, there is little reason for that anymore.[18] What are some of the implications of complex systems theory for our goal of elucidating brain functions and how they relate to parts of the brain?

Interactions between parts. The brain is a system of interacting parts. At a local level, say within a specific region, populations of neurons interact. But interactions are not only local to the area, and a given behavior relies on communication between many regions. Anatomical connectivity provides the substrate for interactions that span multiple parts of the cortex, as well as bridging cortex and subcortex. This view stands in sharp contrast to a “localizationist” framework that treats regions as relatively independent units.

Levels of analysis. This concept is related to the previous one, but emphasizes a different point. All physical systems can be studied at multiple levels, from quarks up to the object of interest. It is not always valuable to study all of these levels (worrying about quarks in aerodynamics, say). But in the brain, studying multiple levels and understanding their combined properties is essential. One can think of neuronal circuits from the scale of a few neurons in a rather delimited region of space to larger collections across broader spatial extents. Multiple spatial scales will be of interest, including large-scale circuits with multiple brain regions spanning cortex and subcortex. A possible analogy is the investigation of the ecology of the most biodiverse places on earth, such as the Amazon forest and Australia's Great Barrier Reef. One can study these systems at very different spatial scales, from local patches of the forest and a few species to the entire coral reef with all its species.

Time, process. Complex systems, like the brain, are not static – they are inherently dynamic. As in predator-prey systems, it is useful to shift one's perspective from simple cause-and-effect to that of a process that evolves in time – a natural shift in stance given the interdependence of the parts involved. When we say a "process" there need not be anything nebulous about it. For example, in the case of three-body celestial orbits under the influence of Newtonian gravity, the equations can be precisely defined and solved numerically to reveal the rich pattern of paths traversed[19].

Decentralization, heterarchy. Investigating systems in terms of the interactions between their parts fosters a way of thinking that favors decentralized organization. It is the coordination between the multiple parts that leads to the behaviors of interest, not a master "controller" that dictates the function of the system. In many "sophisticated" systems, and the brain is no exception, it is instinctive to think that many of its important functions depend on centralized processes. For example, the prefrontal cortex may be viewed as a convergence sector for multiple types of information, allowing it to control behavior (see Chapter 7). A contrasting view favors distributed processing via interactions of multiple parts. Accordingly, instead of information flowing hierarchically to an "apex region" where all the pieces are integrated, processing is distributed across the interacting parts themselves.


[1] “Intuition pumps” comes from Dennett (2013).

[2] Kelp ecosystem example from Carroll (2016).

[3] Bacterial regulation example from Carroll (2016).

[4] https://en.wikipedia.org/wiki/Lotka-Volterra_equations (July 28, 2020)

[5] This and the next two quotes are from Von Bertalanffy (1950, pp. 219-220).

[6] Paragraph largely based on Juarrero (2000, p. 7). The term emergence appears to have first been proposed in the 1870s by George Henry Lewes in his book Problems of Life and Mind and taken up by Wilhelm Wundt in his book Introduction to Psychology.

[7] von Bertalanffy (1950, p. 225)

[8] Deacon (2011).

[9] Juarrero (1999, p. 6).

[10] https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life

[11] Paragraph draws from Levine et al. (2017).

[12] Bairey et al. (2016). Just a few years ago, Levine et al. (2017, p. 61) pointed out that "higher-order interactions need to be demystified to become a regular part of how ecologists envision coexistence, and identifying their mechanistic basis is one way of doing so."

[13] Some forms of competition are possible in feedforward networks, too.

[14] Grossberg (2021).

[15] See quote by von Neumann (https://en.wikipedia.org/wiki/Calculus).

[16] The problem of stability was central to celestial mechanics. For example, what types of trajectories do two bodies, such as the earth and the sun, exhibit? The so-called two-body problem was completely solved by Johann Bernoulli in 1734 (his brother Jacob is famous for his contributions in the field of probability, including the first version of the law of large numbers). For more than two bodies (for example, the moon, the earth, and the sun), the problem has vexed mathematicians for centuries. Remarkably, the motion of three bodies is generally non-repeating, except in special cases. See https://en.wikipedia.org/wiki/Three-body_problem and http://www.sciencemag.org/news/2013/03/physicists-discover-whopping-13-new-solutions-three-body-problem.

[17] More technically, until the 1960s, attractors were thought in terms of simple geometric subsets of the phase space (roughly speaking, the possible states of a system), like points, lines, surfaces, and simple regions of three-dimensional space (https://en.wikipedia.org/wiki/Attractor).

[18] von Bertalanffy (1950) stated that concepts like “system” and “wholeness,” to which we could add “emergence” and “complexity,” are vague and even somewhat mystical, and indeed many scientists displayed mistrust when faced with these concepts.

[19] Šuvakov and Dmitrašinović (2013). See also footnote 16.
