
Researchers Unveil New Framework for Understanding Emergence in Complex Systems.

 

Jupiter’s Great Red Spot, seen in this animation based on Voyager 1 and Hubble images, has swirled for hundreds of years, exemplifying how large-scale patterns and organization can arise from innumerable microscopic interactions.


A few centuries ago, the swirling polychromatic chaos of Jupiter’s atmosphere spawned the immense vortex that we call the Great Red Spot.


From the frantic firing of billions of neurons in your brain comes your unique and coherent experience of reading these words.


As pedestrians each try to weave their path on a crowded sidewalk, they begin to follow one another, forming streams that no one ordained or consciously chose.


The world is full of such emergent phenomena: large-scale patterns and organization arising from innumerable interactions between component parts. And yet there is no agreed scientific theory to explain emergence. Loosely, the behavior of a complex system might be considered emergent if it can’t be predicted from the properties of the parts alone. But when will such large-scale structures and patterns arise, and what’s the criterion for when a phenomenon is emergent and when it isn’t? Confusion has reigned. “It’s just a muddle,” said Jim Crutchfield, a physicist at the University of California, Davis.


“Philosophers have long been arguing about emergence, and going round in circles,” said Anil Seth, a neuroscientist at the University of Sussex in England. The problem, according to Seth, is that we haven’t had the right tools — “not only the tools for analysis, but the tools for thinking. Having measures and theories of emergence would not only be something we can throw at data but would also be tools that can help us think about these systems in a richer way.”


Though the problem remains unsolved, over the past few years, a community of physicists, computer scientists and neuroscientists has been working toward a better understanding. These researchers have developed theoretical tools for identifying when emergence has occurred. And in February, Fernando Rosas, a complex systems scientist at Sussex, together with Seth and five co-authors, went further, with a framework for understanding how emergence arises.


A complex system exhibits emergence, according to the new framework, by organizing itself into a hierarchy of levels that each operate independently of the details of the lower levels. The researchers suggest we think about emergence as a kind of “software in the natural world.” Just as the software of your laptop runs without having to keep track of all the microscale information about the electrons in the computer circuitry, so emergent phenomena are governed by macroscale rules that seem self-contained, without heed to what the component parts are doing.


Using a mathematical formalism called computational mechanics, the researchers identified criteria for determining which systems have this kind of hierarchical structure. They tested these criteria on several model systems known to display emergent-type phenomena, including neural networks and Game-of-Life-style cellular automata. Indeed, the degrees of freedom, or independent variables, that capture the behavior of these systems at microscopic and macroscopic scales have precisely the relationship that the theory predicts.


No new matter or energy appears at the macroscopic level in emergent systems that isn’t there microscopically, of course. Rather, emergent phenomena, from Great Red Spots to conscious thoughts, demand a new language for describing the system. “What these authors have done is to try to formalize that,” said Chris Adami, a complex-systems researcher at Michigan State University. “I fully applaud this idea of making things mathematical.”


A Need for Closure.


Rosas came at the topic of emergence from multiple directions. His father was a famous conductor in Chile, where Rosas first studied and played music. “I grew up in concert halls,” he said. Then he switched to philosophy, followed by a degree in pure mathematics, giving him “an overdose of abstractions” that he “cured” with a Ph.D. in electrical engineering.


A few years ago, Rosas started thinking about the vexed question of whether the brain is a computer. Consider what goes on in your laptop. The software generates predictable and repeatable outputs for a given set of inputs. But if you look at the actual physics of the system, the electrons won’t all follow identical trajectories each time. “It’s a mess,” said Rosas. “It’ll never be exactly the same.”


The software seems to be “closed,” in the sense that it doesn’t depend on the detailed physics of the microelectronic hardware. The brain behaves somewhat like this too: There’s a consistency to our behaviors even though the neural activity is never identical in any circumstance.


Rosas and colleagues figured that in fact there are three different types of closure involved in emergent systems. Would the output of your laptop be any more predictable if you invested lots of time and energy in collecting information about all the microstates — electron energies and so forth — in the system? Generally, no. This corresponds to the case of informational closure: As Rosas put it, “All the details below the macro are not helpful for predicting the macro.”


What if you want not just to predict but to control the system — does the lower-level information help there? Again, typically no: Interventions we make at the macro level, such as changing the software code by typing on the keyboard, are not made more reliable by trying to alter individual electron trajectories. If the lower-level information adds no further control of macro outcomes, the macro level is causally closed: It alone is causing its own future.


This situation is rather common. Consider, for instance, that we can use macroscopic variables like pressure and viscosity to talk about (and control) fluid flow, and knowing the positions and trajectories of individual molecules doesn’t add useful information for those purposes. And we can describe the market economy by considering companies as single entities, ignoring any details about the individuals that constitute them.


The existence of a useful coarse-grained description doesn’t, however, by itself define an emergent phenomenon, said Seth. “You want to say something else in terms of the relationship between levels.” Enter the third level of closure that Rosas and colleagues think is needed to complete the conceptual apparatus: computational closure. For this, they have turned to computational mechanics, a discipline pioneered by Crutchfield.


Crutchfield introduced a conceptual device called the ε (epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one. It’s a bit like an elevator, said Rosas; an input to the machine, like pressing a button, will cause the machine to transition to a different state (floor) in a deterministic way that depends on its past history — namely, its current floor, whether it’s going up or down and which other buttons were pressed already. Of course, an elevator has myriad component parts, but you don’t need to think about them. Likewise, an ε-machine is an optimal way to represent how unspecified interactions between component parts “compute” — or, one might say, cause — the machine’s future state.
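To make the elevator analogy concrete, here is a minimal sketch of a deterministic finite-state machine in the spirit of an ε-machine: the next state follows from the current state and the input alone, with no reference to any lower-level detail. The `Elevator` class, its floor-and-queue state, and its update rule are all illustrative assumptions, not anything from the paper itself.

```python
class Elevator:
    """Toy finite-state machine: state = (current floor, pending requests)."""

    def __init__(self):
        self.floor = 0   # current floor
        self.queue = []  # buttons pressed so far, part of the machine's memory

    def press(self, target):
        """An input (button press) deterministically updates the state."""
        if target != self.floor and target not in self.queue:
            self.queue.append(target)

    def step(self):
        """Advance one time step: move one floor toward the oldest request."""
        if not self.queue:
            return self.floor
        target = self.queue[0]
        self.floor += 1 if target > self.floor else -1
        if self.floor == target:
            self.queue.pop(0)
        return self.floor

e = Elevator()
e.press(2)
print([e.step() for _ in range(3)])  # climbs toward floor 2, then idles: [1, 2, 2]
```

Given the same current state and the same inputs, the trajectory is always identical — the sense in which the machine’s future is “computed” by its present.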


Computational mechanics allows the web of interactions between a complex system’s components to be reduced to the simplest description, called its causal state. The state of the complex system at any moment, which includes information about its past states, produces a distribution of possible future states. Whenever two or more such present states have the same distribution of possible futures, they are said to be in the same causal state. Our brains will never twice have exactly the same firing pattern of neurons, but there are plenty of circumstances where nevertheless we’ll end up doing the same thing.
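The grouping step can be sketched in a few lines: two present states land in the same causal state exactly when they induce the same distribution over futures. The four-state process and its probabilities below are invented purely for illustration.

```python
from collections import defaultdict

# P(next symbol | current state) for a hypothetical 4-state process.
future_dist = {
    "A": {"0": 0.5, "1": 0.5},
    "B": {"0": 0.5, "1": 0.5},  # same future distribution as A
    "C": {"0": 1.0},
    "D": {"0": 1.0},            # same future distribution as C
}

def causal_states(dists):
    """Group states whose distributions over futures are identical."""
    groups = defaultdict(list)
    for state, dist in dists.items():
        key = tuple(sorted(dist.items()))  # hashable fingerprint of the distribution
        groups[key].append(state)
    return [sorted(g) for g in groups.values()]

print(causal_states(future_dist))  # [['A', 'B'], ['C', 'D']]
```

A and B are distinct micro states, but since they predict the same futures, they collapse into one causal state — just as distinct neural firing patterns can lead to the same behavior.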


Rosas and colleagues considered a generic complex system as a set of ε-machines working at different scales. One of these might, say, represent all the molecular-scale ions, ion channels and so forth that produce currents in our neurons; another represents the firing patterns of the neurons themselves; another, the activity seen in compartments of the brain such as the hippocampus and frontal cortex. The system (here the brain) evolves at all those levels, and in general the relationship between these ε-machines is complicated. But for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are, in the researchers’ terminology, “strongly lumpable.” We might, for example, imagine lumping all the dynamics of the ions and neurotransmitters moving in and out of a neuron into a representation of whether the neuron fires or not. In principle, one could imagine all kinds of different “lumpings” of this sort, but the system is only computationally closed if the ε-machines that represent them are coarse-grained versions of each other in this way. “There is a nestedness” to the structure, Rosas said.
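Lumpability has a classical, checkable form for Markov chains, which gives a feel for what the nesting requires: a partition of the micro states is strongly lumpable when every state in a block sends the same total transition probability into each block. The transition matrix and partitions below are invented for illustration; this is the textbook Markov-chain condition, a simpler cousin of the paper’s ε-machine construction.

```python
import numpy as np

def is_strongly_lumpable(P, blocks):
    """Check strong lumpability of transition matrix P under a partition."""
    for block in blocks:
        for target in blocks:
            # total probability of jumping from each state in `block` into `target`
            row_sums = [P[i, target].sum() for i in block]
            if not np.allclose(row_sums, row_sums[0]):
                return False
    return True

# Micro chain: states 0 and 1 behave identically toward the blocks,
# so the partition {{0,1},{2}} lumps; the partition {{0,2},{1}} does not.
P = np.array([
    [0.2, 0.3, 0.5],
    [0.4, 0.1, 0.5],
    [0.6, 0.2, 0.2],
])
print(is_strongly_lumpable(P, [[0, 1], [2]]))  # True
print(is_strongly_lumpable(P, [[0, 2], [1]]))  # False
```

When the check passes, the macro-level chain over blocks is itself Markovian: the coarse description runs on its own, without consulting which micro state the system is actually in.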


A highly compressed description of the system then emerges at the macro level that captures those dynamics of the micro level that matter to the macroscale behavior — filtered, as it were, through the nested web of intermediate ε-machines. In that case, the behavior of the macro level can be predicted as fully as possible using only macroscale information — there is no need to refer to finer-scale information. It is, in other words, fully emergent. The key characteristic of this emergence, the researchers say, is this hierarchical structure of “strongly lumpable causal states.”


Leaky Emergence.


The researchers tested their ideas by seeing what they reveal about a range of emergent behaviors in some model systems. One is a version of a random walk, where some agent wanders around haphazardly in a network that could represent, for example, the streets of a city. A city often exhibits a hierarchy of scales, with densely connected streets within neighborhoods and much more sparsely connected streets between neighborhoods. The researchers find that the outcome of a random walk through such a network is highly lumpable. That is, the probability of the wanderer starting in neighborhood A and ending up in neighborhood B — the macroscale behavior — remains the same regardless of which streets within A or B the walker randomly traverses.
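A toy version of that test is easy to run: take a tiny “city” of two densely connected neighborhoods joined by a single bridge street, walk randomly, and compare the chance of ending up in the other neighborhood across different starting streets. The graph, sizes, and step counts here are invented; this is a sketch of the idea, not the network used in the paper.

```python
import random

graph = {
    # neighborhood A: streets 0-2, fully connected; street 2 bridges to 3
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    # neighborhood B: streets 3-5, fully connected; street 3 bridges to 2
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}

def prob_end_in_B(start, steps=30, trials=20000, seed=0):
    """Estimate P(walker is in neighborhood B after `steps` random moves)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        node = start
        for _ in range(steps):
            node = rng.choice(graph[node])
        hits += node >= 3  # streets 3-5 form neighborhood B
    return hits / trials

for start in (0, 1, 2):
    print(f"start at street {start}: P(end in B) ≈ {prob_end_in_B(start):.2f}")
```

Whichever street in A the walk begins on, the estimated probability of finishing in B comes out essentially the same — the macroscale (neighborhood-to-neighborhood) behavior is insensitive to the microscale starting point.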


The researchers also considered artificial neural networks like those used in machine-learning and artificial-intelligence algorithms. Some of these networks organize themselves into states that
