EidolonSpeak.com ~ Artificial Intelligence


AI & Artificial Cognitive Systems


Voices from the Neuroscience Community, Take 2


 

References and Endnotes

 

“…What sets us apart from other mammals, including other primates, is not any single structure – such as the hippocampus minor – but a set of circuits that includes the temporo-parieto-occipital junction (especially the angular and supramarginal gyri), Wernicke’s area (concerned with semantics) and the anterior cingulate with its limbic connections (“attention,” “will,” “desire”) and the right parietal and insula (concerned with embodiment). These structures are for consciousness what chromosomes and DNA were for heredity. Know how they perform their individual operations, how they interact, and you will know what it means to be a conscious human being.” –V. Ramachandran [5]

 

“Our sense of coherence and unity as a single person may—or may not—require a single brain region, but if it does, reasonable candidates would include the insula and the inferior parietal lobule—each of which receives a convergence of multiple sensory inputs. I mentioned this idea to my colleague Francis Crick just before his death. With a sly conspiratorial wink he told me that a mysterious structure called the claustrum—a sheet of cells buried in the sides of the brain—also receives inputs from many brain regions, and may therefore mediate the unity of conscious experience. (Perhaps we are both right!)” –V. Ramachandran [5]

 

“Because of our emphasis on the behavior of neurons we have concentrated mainly on topics that can be studied in the macaque monkey, while including parallel work on humans. Thus, both language and dreams receive little to no emphasis. How would you study a monkey’s dreams? We have also avoided some of the more difficult aspects of consciousness, such as self-consciousness and emotion, and concentrated instead on perception, especially visual perception.” –F. Crick (see foreword in Koch [5])

 

“To cross the threshold from where we are to where we want to be, major conceptual shifts must take place in how we study the brain. One such shift will be from studying elementary processes—single proteins, single genes, and single cells—to studying systems properties—mechanisms made up of many proteins, complex systems of nerve cells, the functioning of whole organisms, and the interaction of groups of organisms. Cellular and molecular approaches will certainly continue to yield important information in the future, but they cannot by themselves unravel the secrets of internal representations in neural circuits or the interactions of circuits—the key steps linking cellular and molecular neuroscience to cognitive neuroscience.

To develop an approach that can relate neural systems to complex cognitive functions, we will have to move to the level of the neural circuit, and we will have to determine how patterns of activity in different neural circuits are brought together into a coherent representation. To study how we perceive and recall complex experiences, we will need to determine how neural networks are organized and how attention and conscious awareness regulate and reconfigure the actions of the neurons in those networks. Biology will therefore have to focus more on nonhuman primates and on human beings as the model systems of choice. For this, we will need imaging techniques that can resolve the activity of individual neurons and of neuronal networks.” –E. Kandel [5]

 

“Consciousness may require the embedding of contents into progressively higher-order contexts, both in space and time. This recursive embedding might be mediated by hierarchical binding of assemblies into higher-order arrangements, which could be achieved by multiplexing of interactions in different frequency bands. Such higher-order bindings could form the basis for “meta-representations” necessary to incorporate low-level contents into global world and self-models.” –Engel and Singer [21]

 

“A systematic and principled investigation of structural–functional relationships within the corticocortical system has been lacking. In order to investigate the relationship between anatomical connectivity and functional connectivity, we need to develop tools and measures to characterize both the structure of cortical networks as well as the dynamics of their activity. Structural aspects are captured using concepts and measures provided by graph theory. All structural analyses are performed on the network’s connection matrix, which provides a complete description of all connections and pathways between the network’s individual units. Dynamical aspects are captured by implementing networks as dynamical systems, with units interacting over time through their interconnections, giving rise to deviations from statistical independence (e.g. temporal correlations) in their activities. These deviations are summarized by the system’s covariance matrix, encapsulating its functional connectivity, which forms the basis for applying global statistical measures such as complexity.” –O. Sporns, G. Tononi, G.M. Edelman [22]
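
The structural-versus-functional distinction drawn here lends itself to a compact numerical illustration. The sketch below is mine, not the authors’ (every parameter and name is an assumption): it draws a random binary connection matrix to stand in for the anatomical graph, runs a linear-Gaussian dynamical system on it, and estimates the covariance matrix that the quote identifies with functional connectivity.

```python
# Minimal sketch: structural vs. functional connectivity. The binary matrix
# plays the role of the anatomical connection matrix; the covariance of the
# simulated activity plays the role of functional connectivity.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                              # number of neuronal units
struct = (rng.random((n, n)) < 0.2).astype(float)   # structural connections
np.fill_diagonal(struct, 0.0)
W = 0.9 * struct / max(1e-9, np.abs(np.linalg.eigvals(struct)).max())  # stable

# Simulate x(t+1) = W x(t) + noise and estimate the covariance matrix.
T = 20000
x = np.zeros(n)
samples = np.empty((T, n))
for t in range(T):
    x = W @ x + rng.standard_normal(n)
    samples[t] = x
cov = np.cov(samples.T)                             # functional connectivity

print("structural density:", struct.mean())
print("mean |off-diagonal covariance|:",
      round(float(np.abs(cov[~np.eye(n, dtype=bool)]).mean()), 3))
```

Graph-theoretic measures (degree, path length, clustering) would be computed on `struct`; statistical measures such as complexity on `cov` – exactly the division of labor the quote describes.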

 

“A closer look at the anatomical and functional organization of the cerebral cortex provides important clues for formulating a potential general mechanism for neural integration. Within a cortical area, distant neuronal groups are linked by long-range intrinsic connections (also called tangential or horizontal connections), forming a dense patchy network. Between cortical areas, detailed anatomical studies have shown the existence of a meshwork of anatomical pathways, comprising bundles of axons linking vast numbers of neurons. Almost all of these pathways are reciprocal. These reciprocal pathways provide the structural substrate for “reentry,” the process of dynamic interactions between cell populations over large distances, leading to spatiotemporal correlations within and between cortical areas…Further support for the potential functional role of synchronized, temporally correlated activity in the cerebral cortex has come from a large number of neurophysiological experiments showing the existence of stimulus-dependent temporally correlated activity within areas of the visual, auditory, frontal and motor cortex…it appears that the occurrence of temporally correlated neuronal activity is a widespread phenomenon seen throughout the cerebral cortex of most mammalian species. What is the structural substrate of the extended patterns of temporal synchrony observed both within and between cortical areas? In principle, synchronized activity of two cell groups could be due to common input, or to dynamic coupling mediated by direct (usually reciprocal) connections…It is critical to show that synchronized activity is related to changes in behavioral or perceptual states…The hypothesis that temporal correlations may be involved in solving the binding problem has received additional support by a recent set of psychophysical studies.” –O. Sporns, G. Tononi, G.M. Edelman [22]
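
Since the passage leans on “temporally correlated activity,” here is a hedged sketch of the standard analysis used to detect it: a cross-correlogram between two spike trains. The shared-input scenario and all rates below are invented for illustration. A peak near zero lag is the signature of synchrony; whether it stems from common input or from reciprocal coupling is exactly the question the authors raise.

```python
# Illustrative sketch (not from the quoted studies): a cross-correlogram
# between two spike trains. A shared driving input produces the zero-lag
# peak that synchrony analyses look for; independent noise does not.
import numpy as np

rng = np.random.default_rng(1)
T = 50000
common = rng.random(T) < 0.05            # input shared by both units
a = (common | (rng.random(T) < 0.02)).astype(float)   # unit A spikes
b = (common | (rng.random(T) < 0.02)).astype(float)   # unit B spikes

def cross_correlogram(x, y, max_lag=20):
    x, y = x - x.mean(), y - y.mean()
    return [float(np.dot(x[:T - lag], y[lag:]) / T) for lag in range(max_lag)]

cc = cross_correlogram(a, b)
print("zero-lag correlation:", round(cc[0], 4))    # clear peak
print("lag-10 correlation:  ", round(cc[10], 4))   # near zero
```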

 

“What is the possible evolutionary origin of complexity in the nervous system? We noted that complexity expresses the capacity of cortical circuitry to support functional dynamics that integrates specialized elements. This capacity may well convey significant adaptive advantage to an organism as it allows perceptual, cognitive and behavioral states simultaneously to depend on multiple sensory modalities and submodalities. We observed in our simulation studies that intrinsic complexity tended to increase if populations of networks were subjected to selection during exposure to external stimuli (matching) or during production of similar output patterns by different subsets of circuitry (degeneracy). Unlike complexity per se, matching and degeneracy can both be thought of as conferring adaptive advantages to an organism. Matching captures the degree to which a network embodies the statistical structure of a sensory environment, allowing an organism to respond quickly and efficiently to individual stimuli and to detect novel items if present. Degeneracy captures the ability of a network to affect outputs from many structurally different subsets of units, while maintaining the independence of their contributions and ensuring robustness. Our simulation studies have suggested that there is a general trend for complexity to increase upon selection for both matching and degeneracy that is independent of the configuration or size of the networks and their inputs and outputs. Complexity may thus be a product (or a by-product?) of selectional mechanisms favoring neural structures that integrate specialized information, match input stimuli or degenerate capabilities to yield particular output patterns. Characteristic structural motifs of the cerebral cortex such as anatomical groupings of areas, high incidence of cyclic paths, or short wiring length, may be the result of evolutionary selection for patterns of functional connectivity that allow neuronal group selection to accommodate the stringent demands of a changing, multi-modal environment.” –O. Sporns, G. Tononi, G.M. Edelman [22]

 

“There are three major topological arrangements in the brain that appear to be essential to understanding the brain’s global functioning. The first is a large, three-dimensional meshwork – the segregated yet integrated set of circuits constituting the thalamocortical system. The thalamus receives sensory and other inputs and is reciprocally connected to the cerebral cortex, and different cortical regions are connected to one another through corticocortical fibers. The second topological arrangement is organized not at all like a meshwork but, rather, like a set of parallel, unidirectional chains that link the cortex to a set of its appendages, each with a special structure – the cerebellum, the basal ganglia, and the hippocampus. Although the specific ways in which these different cortical appendages interact with the cortex are of central importance, the appendages all seem to share a fundamental mode of organization (especially the cerebellum and basal ganglia): Long, parallel paths involving multiple synapses leave the cerebral cortex and reach successive synaptic stations within these cortical appendages and, whether they pass through the thalamus or not, they go back to the cortex. This serial polysynaptic architecture differs radically from that of the thalamocortical system: The connections are generally unidirectional, rather than reciprocal, and form long loops, and there are relatively few horizontal interactions among different circuits except for, possibly, those locally responsible for reciprocal inhibition. In short, this second arrangement seems admirably suited to the execution of a variety of complicated motor and cognitive routines, most of which are kept as functionally insulated from each other as possible, a feature that guarantees speed and precision in their execution. The third topological arrangement resembles neither a meshwork nor a set of parallel chains, but a diffuse set of connections resembling a large fan. The origin of the fan is in a relatively small number of neurons that are concentrated in specific nuclei in the brainstem and hypothalamus, technically named according to the substance they release, and they project diffusely to huge portions of the brain. The ‘noradrenergic locus coeruleus,’ for example, consists of only thousands of neurons in the brainstem, but distributes a ‘hairnet’ of fibers all over the brain and can release the neuromodulator noradrenaline, with the potential to influence up to billions of synapses. Neuromodulators are capable of influencing not only neural activity but neural/synaptic plasticity. Neurons belonging to these nuclei appear to fire whenever something important or salient occurs, such as a loud noise, a flash of light, or a sudden pain. Small alterations in the pharmacology of these specific nuclei can have drastic effects on global mental function.” –G.M. Edelman and G. Tononi [5]

 

“Neuroscience data (neuroanatomy and neural dynamics) provide strong grounds for so-called selectional theories of the brain – theories that actually depend upon variation to explain brain function.” –G.M. Edelman and G. Tononi [5]

 

“The brain contains a special set of nuclei with diffuse projections – the ‘value systems’ – which signal to the entire nervous system the occurrence of a salient event and influence changes in the strength of synapses. Systems with these crucial properties are typically not found in man-made devices, yet their importance for learning and adaptive behavior is well documented. Together with morphological peculiarities of the brain and its neural connections with specific bodily phenotype, these systems provide an animal with a large set of constraints whose role in fostering species-specific perceptual categorization and adaptive learning cannot be underestimated.” –G.M. Edelman and G. Tononi [5]

 

“Natural selection is reflected in the differential reproduction of fitter individuals in a species. In principle, selective events require the continual generation of diversity in repertoires of individual variants, the polling by environmental signals of these diverse repertoires, and the differential amplification or reproduction of those repertoire elements or individuals that match such signals better than their competition. Could it be that the brain follows such principles? We believe it does, [and propose] the theory of neuronal group selection (TNGS) or ‘Neural Darwinism.’ This theory embraces these selective principles and applies them to the functioning brain. Its main tenets are (1) the formation during brain development of a primary repertoire of highly variant neuronal groups that contribute to neuroanatomy (developmental selection), (2) the formation during experience of a secondary repertoire of facilitated neural circuits as a result of changes in the strength of connections or synapses (experiential selection), and (3) a process of reentrant signaling along reciprocal connections between and among distributed neuronal groups to assure the spatiotemporal correlation of selected neural events. Together, these three tenets of this global brain theory provide a powerful means for understanding the key neural interactions that contribute to consciousness.” –G.M. Edelman and G. Tononi [5]

 

“Darwinian principles turn out to be important even for basic understanding of brain functions, especially given the enormous variation in the structure and function of individual vertebrate brains. No two brains are alike, and the strengths of myriad individual synapses are constantly altered by experience. The extent of this enormous variability argues strongly against the notion that the brain is organized like a computer with fixed codes and registers. Moreover, the environment or world from which signals are delivered to the brain is not organized to give an unambiguous message like a piece of computer tape. There is no judge in nature giving out specific pronouncements on the brain’s potential or actual patterns, and there is no homunculus inside the head deciding which pattern should be chosen and interpreted. These facts are incompatible with the notion that the brain operates according to an unambiguous set of algorithms or instructions, like a computer. Instructionism, the idea that the environment can reliably provide the kind of information required by a computer, fails as a principle of brain operation. Yet in a given species, individual animals show certain consistent behaviors within the broad spread of individual responses.” –G.M. Edelman and G. Tononi [5]

 

“The correlation of selective events across the various maps of the brain occurs as a result of the dynamic process of reentry. Reentry allows an animal with a variable and uniquely individual nervous system to partition an unlabeled world into objects and events in the absence of a homunculus or computer program. Reentry leads to synchronization of the activity of neuronal groups in different brain maps, binding them into circuits capable of temporarily coherent output. Reentry is thus the central mechanism by which the spatiotemporal coordination of diverse sensory and motor events takes place.” –G.M. Edelman and G. Tononi [5]

 

“If asked, ‘What characteristic uniquely differentiates higher brains from all other known objects or systems,’ we would say ‘reentrant organization.’ Note that while complex wide-area networks are beginning to share some properties with reentrant systems, such networks rely fundamentally on codes and, unlike brain networks, they are instructional, not selectional. It is important to emphasize that reentry is not feedback. Feedback occurs along a single fixed loop made of reciprocal connections using previously instructionally derived information for control and correction, such as an error signal. In contrast, reentry occurs in selectional systems across multiple parallel paths where information is not prespecified. Like feedback, reentry can be local (within a map) or global (among maps or whole regions).” –G.M. Edelman and G. Tononi [5]

 

“Degeneracy is reflected in the capacity of structurally different components to yield similar outputs or results. In a selectional nervous system, with its enormous repertoire of variant neural circuits even within one brain area, degeneracy is inevitable. Without it, a selectional system, no matter how rich its diversity, would rapidly fail – in a species, almost all mutations would be lethal; in an immune system, too few antibody variants would work; and in the brain, if only one network path were available, signal traffic would fail. Degeneracy can operate at one level of organization or across many. It is seen in gene networks, in the immune system, in the brain, and in evolution itself. For example, combinations of different genes can lead to the same structure, antibodies with different structures can recognize the same foreign molecule equally well, and different living forms can evolve to be equally well adapted to a specific environment. There are countless examples of degeneracy in the brain. The complex meshwork of connections in the thalamocortical system assures that a large number of neuronal groups can similarly affect, in one way or another, the output of a given subset of neurons. For example, a large number of different brain circuits can lead to the same motor output or action. Localized brain lesions often reveal alternative pathways that are capable of generating similar behaviors. Degeneracy also appears at the cellular level. Neural signaling mechanisms utilize a great variety of transmitters, receptors, enzymes, and so-called second messengers. The same changes in gene expression can be brought about by different combinations of these biochemical elements. Degeneracy is an unavoidable consequence of selectional mechanisms. The ability of natural selection to give rise to a large number of nonidentical structures yielding similar functions increases both the robustness of biological networks and their adaptability to unforeseen environments.” –G.M. Edelman and G. Tononi [5]

 

“As powerful as it is in providing alternative pathways for a given function, degeneracy cannot provide constraints for a selectional system; indeed, it is a relaxation of constraints. How can a selectional system achieve its goals without specific instructions? The necessary constraints or values are provided by a series of diverse phenotypic structures and neural circuits that have been selected during evolutionary time. We define values as phenotypic aspects of an organism that were selected during evolution and constrain somatic selective events, such as the synaptic changes that occur during brain development and experience. In higher vertebrates, a series of diffusely projecting neural value systems appears to have evolved that are capable of continually signaling to neurons and synapses all over the brain. Such signals carry information about the ongoing behavioral state of the organism (sleep, waking, exploration, grooming, etc.), as well as the sudden occurrence of events that are salient for the entire organism (e.g., novel stimuli, painful stimuli, and rewards). These systems, whose importance vastly outweighs the proportion of brain space they occupy, include the noradrenergic, serotoninergic, cholinergic, dopaminergic, and histaminergic cell nuclei.” –G.M. Edelman and G. Tononi [5]

 

“Memory in a degenerate selectional system is recategorical, not strictly replicative. There is no prior set of determinant codes governing the categories of memory, only the previous population structure of the network, the state of value systems, and the physical acts carried out at a given moment. The dynamic changes linking one set of circuits to another within the enormously varied neuroanatomical repertoires of the brain allow it to create a memory. The probability of creating a memory is enhanced by the activity of value systems…In our view, there are hundreds, if not thousands, of separate memory systems in the brain. These systems range from all the perceptual systems in different modalities – sight, smell, touch, etc. – to the systems that govern intended or actual movement, to the language systems that organize speech sounds. This view is compatible with various types of memory described and tested by experimentalists in the field – so-called procedural, semantic, episodic memories – but it does not restrict itself to these types, which are defined mainly by operational criteria and, to some degree, by biochemical criteria. Although such individual memory systems differ, the key conclusion is that whatever its form, memory itself is a system property. It cannot be equated exclusively with circuitry, with synaptic changes, with biochemistry, with value constraints, or with behavioral dynamics. Instead, it is the dynamic result of the interactions of all of these factors acting together, serving to select an output that repeats a performance or an act. The overall characteristics of a particular performance may be similar to a previous performance, but the times can be and usually are different. This property ensures that one can repeat the same act, despite remarkable changes in background and context, with ongoing experience…Besides guaranteeing association, the property of degeneracy also gives rise to the remarkable stability of memorial performance.” –G.M. Edelman and G. Tononi [5]

 

“Primary consciousness – the ability to generate a mental scene in which a large amount of diverse information is integrated for the purpose of directing present or immediate behavior – occurs in animals with brain structures similar to ours. Such animals appear able to construct a mental scene but, unlike us, have limited semantic or symbolic capabilities and no true language. Higher-order consciousness is built on the foundations provided by primary consciousness and is accompanied by a sense of self and the ability in the waking state explicitly to construct and connect past and future scenes. In its most developed form, it requires a semantic capability and a linguistic capability.” –G.M. Edelman and G. Tononi [5]

 

“Primary consciousness involves four [fundamental, but complex and interactive] processes: (1) Perceptual categorization, shared by all animals, is the ability to carve up the world of signals into categories useful for a given species in an environment that follows physical laws but itself contains no such [predetermined] categories. Along with control of movement, perceptual categorization is the most fundamental process of the vertebrate nervous system. (2) Concepts, the ability to combine different perceptual categorizations related to a scene or an object and to construct a “universal” reflecting the abstraction of some common feature across a variety of such percepts. [SL note: Some have called this “universal” an “invariant representation.” This may appear counter to Edelman et al.’s descriptions, given their dislike of the term representation, but I believe the distinction is merely a semantic one, and that the authors should have kept in mind that representations can take on forms other than symbolic or fixed encodings.] No simple combination of the maps that are reentrantly interactive to yield perceptual categorizations can lead to this abstractive capability. [SL note: No argument there! Cross modal processing is part of the dynamic complexity here.] What is required is higher-order mapping by the brain itself of the categories of brain activity in these various regions. (3) Memory, the capacity specifically to repeat or suppress a mental or physical act. That capacity arises from combinations of synaptic alterations in reentrant circuits. (4) Value constraints, to develop categorical responses that are adaptive. The diffuse ascending value systems of the brain are known to be richly connected to the concept-forming regions of the brain, notably the frontal and temporal cortex, but also to the so-called limbic system, a set of brain regions located on the medial (internal) side of the brain that form a circle around the brainstem. These regions affect the dynamics of individual memories, which, in turn, are established or not, depending on positive or negative value responses. The synaptic alterations that combine to develop various individual memories, collectively constituting a ‘value-category memory,’ are essential to a model of primary consciousness.” –G.M. Edelman and G. Tononi [5]

 

“Reentry is a process of ongoing parallel and recursive signaling between separate brain maps along massively parallel anatomical connections, most of which are reciprocal. It alters and is altered by the activity of the target areas it interconnects. It is not only the most important integrative mechanism in higher brains, but it is conceptually the most challenging…Integration can best be illustrated by considering exactly how functionally segregated maps in the cerebral cortex may operate coherently together even though there is no superordinate map or logically determined program…Our ability to act coherently in the presence of diverse, often conflicting, sensory stimuli requires a process of neural interaction across many levels of organization without any superordinate map to guide the process. This is the so-called binding problem: How can a set of diverse and functionally segregated maps cohere without a higher-order controller? Within a single area, linking must occur among various neuronal groups in the same feature domain or submodality. Examples are perceptual groupings within a map in sensing color or in another map sensing movement. At a higher level, binding must take place among different distributed maps, each of which is functionally segregated or specialized. Binding assures the integration of the neuronal responses to a particular object contour with its color, position, and direction of movement. Since there is no superordinate map that coordinates the binding of the participating maps, the question arises: How does binding actually take place? Models and simulations have shown that binding can occur as a result of reentry across brain maps that establishes short-term temporal correlations and synchrony among the activities of widely spaced neuronal groups in different maps. As a result, neurons in these groups fire at the same time. The selection of those circuits that are temporally correlated under the constraints of value lead to a coherent output. The binding principle, made possible by reentry, is repeated across many levels of brain organization and plays a central role in mechanisms leading to consciousness.” –G.M. Edelman and G. Tononi [5]

 

“The model of how primary consciousness arose in the course of evolution assumes that during evolution, the cortical systems leading to perceptual categorization were already in place before primary consciousness appeared. With further development of secondary cortical areas and the various cortical appendages, such as the basal ganglia, conceptual memory systems emerged. At a point in evolutionary time corresponding roughly to the transitions between reptiles and birds and reptiles and mammals, a critical new anatomical connectivity appeared. Massively reentrant connectivity arose between multimodal cortical areas carrying out perceptual categorization and the areas responsible for value-category memory. This evolutionarily derived reentrant connectivity is implemented by several grand systems of corticocortical fibers linking one part of the cortex to the rest and by a large number of reciprocal connections between the cortex and the thalamus. The thalamocortical circuits mediating these reentrant interactions originate in the major subdivisions of the thalamus: structures known as the specific thalamic nuclei, the reticular nucleus, and the intralaminar nuclei. The specific nuclei of the thalamus are the ones reentrantly connected with the cerebral cortex; they do not communicate directly with each other, but the reticular nucleus has inhibitory connections with those nuclei and can act to select or gate various combinations of their activity. The intralaminar nuclei send diffuse projections to most areas of the cerebral cortex and help to synchronize its overall level of activity. All these thalamocortical structures and their reciprocal connections acting together via reentry lead to the creation of a conscious scene. The ongoing parallel input of signals from many different sensory modalities in a moving animal results in reentrant correlations among complexes of perceptual categories that are related to objects and events. The short-term memory that is fundamental to primary consciousness reflects previous categorical and conceptual experiences.” –G.M. Edelman and G. Tononi [5]

 

“Simulations reveal how reentry can solve the binding problem by coupling neuronal responses of distributed cortical areas to achieve synchronization and global coherence…in a detailed model of cortical areas, which included interconnected thalamic regions, we further examined the dynamics of reentrant interactions within the thalamocortical system. The results obtained from these simulations indicate that reentrant signaling within the cortex and thalamus, bolstered by fast changes in synaptic efficacy and spontaneous activity within the network, can rapidly establish a transient, globally coherent process. This process is characterized by strong and rapid interactions among participating neuronal groups in the cortex and the thalamus and it emerges at a well-defined threshold of activity. What is remarkable is that this coherent process is quite stable, being capable of sustaining itself continuously while changing its precise composition. This stability means that although there is always a large pool of synchronously firing neurons, the neurons that are actually engaged in this pool change from moment to moment. This process includes a large number of neurons in the cortex and thalamus, although it does not include all of them nor even all those that are active at a given time. That such a self-perpetuating dynamic process, characterized by the strength and speed of reentrant neural interactions, can originate from the connectivity of the thalamocortical system is of considerable significance for understanding the actual neural events that underlie the unity of consciousness [binding].” –G.M. Edelman and G. Tononi [5]
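
The “well-defined threshold of activity” at which a globally coherent process emerges can be illustrated with a generic model of coupled oscillators. The sketch below uses a standard Kuramoto mean-field model, which is an assumption of convenience on my part; the authors’ simulations used detailed thalamocortical circuitry, not this. It shows only the qualitative point: below a critical coupling the population stays incoherent, above it a coherent, self-sustaining cluster forms.

```python
# Generic Kuramoto sketch (an assumption of convenience, not the authors'
# thalamocortical model): below a critical coupling K the oscillators stay
# incoherent; above it a globally coherent, self-sustaining cluster emerges.
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 200, 0.01, 5000
omega = rng.normal(0.0, 1.0, n)              # intrinsic frequencies

def coherence(K):
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        r = np.exp(1j * theta).mean()        # mean field of the population
        theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

for K in (0.5, 1.0, 2.0, 4.0):               # critical coupling is ~1.6 here
    print(f"K={K}: coherence r={coherence(K):.2f}")
```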

 

“It is a reasonable conjecture that the existence of synchronous activity among distant brain areas is often an indication of rapid functional clustering. Various studies using EEG, MEG, and local field potentials have revealed that the activity of large populations of neurons can be highly synchronized over a short period. Experiments using electrodes to record directly from neural cells in animals have demonstrated that short-term temporal correlations can be found within single areas of the brain, as well as between different areas. In some cases, it has even been demonstrated that short-term temporal correlations between two cerebral hemispheres are due to direct reentrant interactions. If the millions of reentrant fibers connecting the hemispheres are cut, these short-term correlations vanish. We take these findings as direct evidence that integration and rapid functional clustering occur in the thalamocortical system and that reentry is the major mechanism by which integration is achieved.” –G.M. Edelman and G. Tononi [5]

 

“There are literally billions of possible conscious states. The ability to differentiate among all these different states constitutes information, which is the reduction of uncertainty among a number of alternatives [Shannon and Weaver, 1963]. This argument implies that the selection within a short time of any particular integrated state out of such a large repertoire of different possible states is enormously informative…To measure the information generated by an integrated neural process, it is useful to rely on the statistical foundations of information theory, which provide a general means of assessing the reduction of uncertainty. We should note that a number of applications of information theory in biology have been fraught with problems and have had a notoriously controversial history. This is the case largely because at the heart of information theory as originally formulated lies the sensible notion of an external, intelligent observer who encodes messages using an alphabet of symbols. So-called information processing views of the brain have been severely criticized because they typically assume the existence in the world of previously defined information (begging the question of what information is) and often assume the existence of precise neural codes for which there is no evidence.” –G.M. Edelman and G. Tononi [5]

 

“According to our definition of neural complexity, the higher the mutual information between each subset and the rest of the system, the higher the value of complexity. Mutual information measures to what extent the entropy of a subset is accounted for by the entropy of the rest of the system (and vice versa) and thereby measures the statistical dependence between any subset and the rest of the system. Thus mutual information expresses how well the states of the subset can differentiate among the states of the rest of the system. High values of complexity correspond to an optimal synthesis of functional specialization and functional integration within a system. This is clearly the case for systems like the brain – different areas and groups of neurons do different things (they are differentiated) at the same time they interact to give rise to a unified conscious scene and to unified behaviors (they are integrated). By contrast, systems whose individual elements are either not integrated (such as a gas) or not specialized (like a homogeneous crystal) will have minimal complexity.” –G.M. Edelman and G. Tononi [5]
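
Under a Gaussian assumption the quantities in this passage follow directly from a covariance matrix, since Gaussian entropies are determined by determinants. The sketch below is illustrative code, not the authors’ implementation: it computes the mutual information between each unit and the rest of the system for an “integrated” linear system and for a “gas” of independent units. The full neural complexity measure averages such terms over subsets of every size; this one-scale version conveys the idea.

```python
# Hedged sketch: MI(subset; rest) from a covariance matrix, Gaussian case.
# For a linear system x = W x + noise, the stationary covariance is
# (I - W)^-1 (I - W)^-T; Gaussian entropies come from log-determinants.
import numpy as np

def gaussian_entropy(cov):
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def mi_subset_vs_rest(cov, subset):
    rest = [i for i in range(cov.shape[0]) if i not in subset]
    return (gaussian_entropy(cov[np.ix_(subset, subset)])
            + gaussian_entropy(cov[np.ix_(rest, rest)])
            - gaussian_entropy(cov))

rng = np.random.default_rng(3)
n = 8
W = rng.random((n, n)) * (0.5 / n)        # weak heterogeneous coupling
np.fill_diagonal(W, 0.0)
Q = np.linalg.inv(np.eye(n) - W)
coupled = Q @ Q.T                         # integrated linear system
independent = np.eye(n)                   # a "gas": no statistical dependence

for name, cov in (("independent", independent), ("coupled", coupled)):
    mis = [mi_subset_vs_rest(cov, [i]) for i in range(n)]
    print(name, "mean MI(unit; rest):", round(float(np.mean(mis)), 4))
```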

 

“A high degree of complexity [as mathematically defined] is a necessary requirement for any neural process that sustains conscious experience. Slow-wave sleep and epileptic seizures, two brain states characterized by highly integrated firing in most of the brain, are not associated with conscious experience because their repertoire of available neural states is diminished and their complexity is low.” –G.M. Edelman and G. Tononi [5]

 

“From a selectionist perspective, although the consequences of diverse interactions in a population are unforeseen, they can serve as a basis for selection. Rather than being intractable or problematic, nonlinearities can be exploited whenever they lead to adaptive behaviors. As a general rule, the larger the number of components and the more extensive and nonlinear the interactions among them, the more use of selectional mechanisms becomes unavoidable. From this perspective, the complexity of the anatomy and chemistry of the brain and the sheer number of ongoing multiple interactions among specialized elements make the odds for effective operation via nonselectional means vanishingly small.” –G.M. Edelman and G. Tononi [5]

 

“What must be determined theoretically and measured is how the intrinsic dynamic relationships among specialized neuronal groups in an adult brain become adaptively related, over time, to the statistical structure of the environment – the average, over time, of all signals characteristic of the environment received by the animal…Simulations with simple linear systems indicate that systems with random connectivity have low complexity values. However, if the connectivity of these systems is allowed to change through a selection procedure in such a way as to increase their match to the statistical regularities of an external environment, their complexity increases considerably. Moreover, everything else being equal, the more complex the environment, the larger the complexity of the systems that achieve high values of matching. It is this adaptation of the brain’s reentrant circuits to the demands posed by a rich environment, based on principles of natural, developmental, and neural selection, that leads to high complexity, as reflected by increased values of matching and degeneracy. And it is only after such a level of complexity has been achieved that an adult brain, even when relatively isolated as in dreaming [REM sleep], can generate integrated neural processes of sufficient complexity to sustain conscious experience.” –G.M. Edelman and G. Tononi [5]
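
As a toy version of the selection-for-matching result described here (my construction, with every detail an assumption: the environment statistics, the hill-climbing stand-in for selection, and a one-scale complexity proxy), the sketch below mutates a connection matrix, keeps mutations that improve the match between the system’s intrinsic correlations and those of a structured environment, and reports that the complexity proxy rises from zero.

```python
# Toy "selection for matching" (all details are my assumptions): mutate a
# connection matrix, keep mutations that make the system's intrinsic
# correlations better match a structured environment, and watch a one-scale
# complexity proxy (mean MI between each unit and the rest) rise from zero.
import numpy as np

rng = np.random.default_rng(4)
n = 8

def covariance(W):                        # stationary cov of x = W x + noise
    Q = np.linalg.inv(np.eye(n) - W)
    return Q @ Q.T

def mi_unit_rest(cov, i):                 # Gaussian MI(x_i; rest)
    rest = [j for j in range(n) if j != i]
    sub = cov[np.ix_(rest, rest)]
    return 0.5 * (np.log(cov[i, i]) + np.linalg.slogdet(sub)[1]
                  - np.linalg.slogdet(cov)[1])

def complexity_proxy(cov):
    return float(np.mean([mi_unit_rest(cov, i) for i in range(n)]))

# Environment with two correlated "sources", each seen by half the units.
env = np.kron(np.eye(2), 0.6 * np.ones((n // 2, n // 2))) + 0.4 * np.eye(n)

def matching(W):                          # similarity of intrinsic corr to env
    c = covariance(W)
    c = c / np.sqrt(np.outer(np.diag(c), np.diag(c)))
    return -float(np.sum((c - env) ** 2))

W = np.zeros((n, n))
print("before selection:", round(complexity_proxy(covariance(W)), 3))
for _ in range(3000):                     # crude hill climbing as "selection"
    cand = W + rng.normal(0.0, 0.01, (n, n))
    np.fill_diagonal(cand, 0.0)
    if np.abs(np.linalg.eigvals(cand)).max() < 0.9 and matching(cand) > matching(W):
        W = cand
print("after selection: ", round(complexity_proxy(covariance(W)), 3))
```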

 

“If integration and differentiation are indeed fundamental features of consciousness, they can be explained only by a distributed neural process, rather than by specific local properties of neurons. This leads to our dynamic core hypothesis: (1) A group of neurons can contribute directly to conscious experience only if it is part of a distributed functional cluster that, through reentrant interactions in the thalamocortical system, achieves high integration in hundreds of milliseconds; (2) To sustain conscious experience, it is essential that this functional cluster be highly differentiated, as indicated by high values of complexity. We call such a cluster of neuronal groups that are strongly interacting among themselves and that have distinct functional borders with the rest of the brain at the time scale of fractions of a second a dynamic core, to emphasize both its integration and its constantly changing composition. A dynamic core is therefore a process, not a thing or place, and it is defined in terms of neural interactions, rather than in terms of a specific neural location, connectivity, or activity. Although a dynamic core will have a spatial extension, it is in general spatially distributed, as well as changing in composition, and thus cannot be localized to a single place in the brain. Furthermore, even if a functional cluster with such properties is identified, we predict that it will be associated with conscious experience only if the reentrant interactions within it are sufficiently differentiated, as indicated by its complexity. While we envision that a functional cluster of sufficiently high complexity can be generated through reentrant interactions among neuronal groups distributed particularly within the thalamocortical system and possibly within other brain regions, such a cluster is neither coextensive with the entire brain nor is it restricted to any special subset of neurons. Thus, the term dynamic core deliberately does not refer to a unique, invariant set of areas of the brain (whether prefrontal, extrastriate, or striate cortex), and the core may change in composition over time. Since our hypothesis highlights the role of the functional interactions among distributed groups of neurons, rather than their local properties, it considers that the same group of neurons may sometimes be part of the dynamic core and underlie the conscious experience, but at other times may not be part of it and thus be involved in unconscious processes. Furthermore, since participation in the dynamic core depends on the rapidly shifting functional connectivity among groups of neurons, rather than on anatomical proximity, the composition of the core can transcend traditional anatomical boundaries. Finally, as suggested by imaging studies, the exact composition of the core related to particular conscious states is expected to vary significantly from person to person.” –G.M. Edelman and G. Tononi [5]
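
The notion of a functional cluster with “distinct functional borders” can be quantified. The sketch below adapts the cluster-index idea associated with Tononi and colleagues, the ratio of a subset’s internal integration to its mutual information with the rest of the system, for Gaussian variables; the specific covariance values, including the weak “leak” that keeps the denominator nonzero, are illustrative assumptions of mine.

```python
# Sketch of a cluster-index test (adapted from the spirit of Tononi et al.'s
# measure; the values here are illustrative assumptions): a subset with high
# internal integration but little mutual information with the rest of the
# system has "distinct functional borders", i.e. a high cluster index.
import numpy as np

def H(cov):                               # Gaussian entropy from a covariance
    cov = np.atleast_2d(cov)
    return 0.5 * np.linalg.slogdet(2.0 * np.pi * np.e * cov)[1]

def integration(cov, idx):                # I(S) = sum_i H(x_i) - H(S)
    return sum(H(cov[np.ix_([i], [i])]) for i in idx) - H(cov[np.ix_(idx, idx)])

def cluster_index(cov, idx):
    rest = [i for i in range(cov.shape[0]) if i not in idx]
    mi = H(cov[np.ix_(idx, idx)]) + H(cov[np.ix_(rest, rest)]) - H(cov)
    return float(integration(cov, idx) / mi)

# Units 0 and 1 interact strongly; unit 1 leaks weakly to unit 2 so that
# mutual information with the rest is small but nonzero.
n = 6
cov = np.eye(n)
cov[0, 1] = cov[1, 0] = 0.8
cov[1, 2] = cov[2, 1] = 0.1

print("cluster index of {0,1}:", round(cluster_index(cov, [0, 1]), 1))  # high
print("cluster index of {2,3}:", round(cluster_index(cov, [2, 3]), 1))  # ~0
```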

 

“The occurrence of a given conscious state selected among billions of others represents information, in the fundamental sense of reducing uncertainty among a large number of possible choices. How much information is at stake is what our introduction of a measure for complexity intended to address…A system is complex if the mutual information between any subset and the rest is high, and sensitivity to context relates to whether the activity of each small subset is sensitive to whatever the different states of the rest of the system may be.” –G.M. Edelman and G. Tononi [5]

 

“The ability to be flexible in associating signals from different modalities and submodalities or from the present and the past is an important consequence of the dynamic nature of integration, as well as of the nonlinear mechanisms that mediate it. Once the opportunity for interaction among neuronal groups is maximized through the generation of the dynamic core, any subtle change in the activity of different regions of the brain can bring about new, dynamic associations. The ability to learn unexpected associations among a large variety of unconnected signals has obvious adaptive significance.” –G.M. Edelman and G. Tononi [5]

 

“If we are shown twelve digits arranged in four rows of three for less than 150 msec, we can consciously report only about four at a time. This remarkably strict ‘capacity limitation’ has led many to the conclusion that consciousness contains a small amount of information, just a few bits, corresponding, over time, to an information capacity of just about 1 to 16 bits per sec. This is an abysmal performance when judged according to engineering standards. We have argued that informativeness of consciousness should not be based on how many more or less independent ‘chunks’ of information a single conscious state might contain. Instead, it should be based on how many different conscious states are ruled out by the occurrence of the particular states that we are experiencing at a given moment. Since we can easily differentiate among billions of different conscious states within a fraction of a second, we have concluded that the informativeness of conscious experience must be extraordinarily high, better than any present-day engineer could dream of. How should we account then, for the so-called capacity limitation of consciousness? It seems that the observed capacity limitation is tightly linked to the integrated nature of conscious states. In terms of the dynamic core, such a capacity limitation reflects an upper limit on how many partially independent subprocesses can be sustained within the core without interfering with its integration and coherence. Indeed, it is likely that the same neural mechanisms responsible for the rapid integration of the dynamic core are also responsible for this capacity limitation…The need to generate a single integrated neural process within hundreds of milliseconds requires rapid and effective interactions among distributed groups of neurons. This need puts strict limits on how many partially independent processes can be accommodated at the same time.” –G.M. Edelman and G. Tononi [5]
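
The arithmetic behind this argument is worth making explicit. Taking the quote’s own “billions of states” and a window of a few hundred milliseconds (the exact numbers below are my assumptions), the implied rate sits two orders of magnitude above the chunk-counting estimate:

```python
# Worked arithmetic for the claim above. "Billions of states" and a window of
# a few hundred milliseconds are the quote's own figures; the exact numbers
# chosen below are my assumptions.
import math

n_states = 1e9                 # discriminable conscious states ("billions")
window_s = 0.25                # integration window of a few hundred msec

bits = math.log2(n_states)     # ~29.9 bits per discrimination
print("bits per discrimination:", round(bits, 1))
print("implied rate:", round(bits / window_s, 1), "bits/sec")
# ~120 bits/sec, versus the ~1-16 bits/sec from chunk-counting estimates.
```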

 

“It is an interesting experimental question whether one may find evidence in such conditions as a large anatomical cut (as in a split brain or various neurological disconnection syndromes) or some major psychological trauma (as in psychiatric dissociation syndromes) of a corresponding split of a single, dominant dynamic core into two or more subcores.” –G.M. Edelman and G. Tononi [5]

 

“We are still lacking an important addition to the theoretical tools described: a way of estimating integration and differentiation (complexity) over such short times (hundreds of milliseconds). Measures of mutual information are relatively easy to obtain if a system is stationary, but other measures, most likely derivable from dynamical system theory and perturbation theory, may prove to be more appropriate over shorter periods.” –G.M. Edelman and G. Tononi [5]

 

“A number of experimental questions and predictions are generated by the dynamic core hypothesis. A central prediction is that during cognitive activities involving consciousness, one should find evidence in the conscious brain of a large but distinct set of distributed neuronal groups that interact over fractions of a second much more strongly among themselves than with the rest of the brain. This prediction could be tested by neurophysiological experiments designed to record electrical potentials from multiple neurons whose activity is correlated with the conscious experience. Multielectrode recordings have already indicated that rapid changes in the functional connectivity among distributed populations of neurons can occur independently of their firing rate. Studies of a small number of neurons in the frontal cortex of monkeys have also shown simultaneous shifts in the activity states involving some but not all recorded neurons. A convincing demonstration of rapid functional clustering among distributed neuronal groups requires that these studies be extended to larger populations of neurons in several areas of the brain. Another possibility would be to examine whether the effects of direct cortical microstimulation spread more widely in the brain if they are associated with conscious experience than if they are not. In humans, the extent and boundaries of neural populations exchanging coherent signals can be evaluated through methods of frequency tagging. By exploiting frequency tagging in binocular rivalry, for example, relatively direct approaches to the neural substrates of consciousness can be designed. Whether all aspects of the dynamic core hypothesis are correct or not, the criteria outlined should facilitate the design of similar experiments using imaging methods that offer both wide spatial coverage and high temporal resolution, including fMRI, topographic EEG, and MEG.” –G.M. Edelman and G. Tononi [5]

 

“We propose that perceived qualia, or each specific ‘quale,’ corresponds to a different state of the dynamic core, which can be differentiated from billions of other states within a neural space comprising a large number of dimensions. The relevant dimensions are given by the number of neuronal groups whose activities, integrated through reentrant interactions, constitute a dynamic core of high complexity. Qualia are therefore high-dimensional discriminations...Countless phenomena in psychophysics indicate that perception is not merely a reflection of immediate input but involves a construction or a comparison by the brain…A key implication of our hypothesis is that the legitimate neural reference space for conscious experience, any conscious experience, including that of color, is given not by the activity of any individual neuronal group (for example, a color-responsive neuronal group, as in the one group, one quale, hypothesis) or even by any small subset of neuronal groups (such as three sets of neuronal groups that are jointly sufficient for discriminating among all colors), but by the activity of the entire dynamic core…It is convenient to rephrase the dynamic core hypothesis in terms of an N-dimensional neural space, with N neuronal groups. Some of these dimensions correspond to neuronal groups that are color selective and exhibit color constancy. A large number of other dimensions correspond to the activity of neuronal groups specialized for visual form or visual motion, for auditory or somatosensory inputs, for proprioceptive inputs, for body schemas, and so on. These N neuronal groups constitute a functional cluster – over a short period, they are highly integrated among themselves and much less so with the rest of the brain. Since a functional cluster represents a single unified physical process, it follows that the activity of these N neuronal groups should be considered within a single reference space, with a common origin of all dimensions defining the core at that moment. It follows that such a reference space cannot be decomposed into independent subspaces (subsets of neuronal groups) without a loss of information with respect to other portions of the core. It follows that neuronal groups that are not part of the dynamic core should be considered as constituting separate neural spaces, since within that time scale they are effectively functionally disconnected from it.” –G.M. Edelman and G. Tononi [5]

 

“This view of perceived qualia differs radically from the ‘atomistic’ or ‘modular’ approaches proposed by others. According to our hypothesis, perceiving the redness of red absolutely requires a discrimination among integrated states of the entire dynamic core, and it can never emerge magically out of the firing of a single group of neurons that are endowed with some special local or intrinsic property. Our hypothesis can account for why the firing of other neuronal groups, such as those responding to blood pressure, does not appear to ‘generate’ any subjective experience or quale. Such neuronal groups, we propose, are not part of the dynamic core, which means that changes in their firing make a difference only locally, not in the context of a huge, N-dimensional space allowing for billions of discriminations.” –G.M. Edelman and G. Tononi [5]

 

“One corollary of our hypothesis is that every discriminable point in the N-dimensional space defined by the dynamic core identifies a conscious state, while a trajectory joining points in this space would correspond to a sequence of conscious states over time. Every different conscious state deserves to be called a quale, from the state of perceiving pure red, pure darkness, or pure pain, to the state of perceiving a complicated visual scene, and to the state of ‘thinking of Vienna.’ Another corollary is that the N-dimensional neural space that corresponds at any given time to the dynamic core is characterized by a certain metric – by precisely defined distances between points in that space. For example, axes corresponding to the visual submodality of color are close to each other and form a bundle, whereas the distance between more comprehensive bundles of different modalities, such as vision and touch, is even larger. In short, the phenomenological space obeys a certain metric within which certain conscious states are closer than others. The topology and metric of this space should be described in terms of the appropriate neural reference – the dynamic core – and must be based on interactions among neuronal groups participating in it.” –G.M. Edelman and G. Tononi [5]

 

“One can tell that a neuronal group is part of a functional cluster if, within a fraction of a second, a perturbation of its state can affect the state of the rest of the cluster. Concretely, this means that if a group of blue-selective neurons in IT is suddenly activated, their activation should be able to make a difference, within a fraction of a second, not just to the firing of neurons that are directly connected to them, but to the firing of scores of other neuronal groups that participate in the dynamic core. How can this remarkable distribution of causal efficacy among distributed groups of neurons occur in such a short time? We have suggested it occurs through the establishment of ongoing reentrant interactions among them. Only if groups of neurons are continuously exchanging signals back and forth and in parallel through reciprocal connections, thereby forming strong reentrant loops, can changes in the firing of any neuronal group be rapidly propagated to the entire functional cluster. This global spread of a perturbation within a complex system can be visualized within a model simulation that demonstrates that a small perturbation of a group of neurons can affect the entire system rapidly (within 100-200 msec). This phenomenon only occurs if the neurons are kept in a state of ‘readiness’ by ongoing activity, i.e., if reentrant loops between the thalamus and the cortex or between different cortical areas are ignited and voltage-dependent connections (those that require the postsynaptic neuron to be excited in order to be activated) are actually activated. By contrast, if these reentrant loops are not ignited, the effects of the same perturbation remain much more local.” –G.M. Edelman and G. Tononi [5]
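
The perturbation experiment described here can be caricatured in a linear recurrent network (a deliberately crude stand-in; the authors used a detailed large-scale simulation, and the gains below are arbitrary). With strong recurrent coupling, a kick to one unit still touches most of the network roughly 200 ms later; with weak coupling, it dies out locally.

```python
# Crude stand-in for the perturbation experiment (a linear recurrent network,
# not the authors' detailed simulation; gains are arbitrary): with strong
# ("ignited") recurrent coupling, a kick to one unit still affects most of the
# network ~200 ms later; with weak coupling the perturbation stays local.
import numpy as np

rng = np.random.default_rng(5)
n, steps = 30, 40                              # 40 steps of ~5 ms ≈ 200 ms
A = (rng.random((n, n)) < 0.2).astype(float)   # random recurrent graph
radius = np.abs(np.linalg.eigvals(A)).max()

def units_affected(gain):
    W = gain * A / radius                      # scale coupling to the gain
    x = np.zeros(n)
    x[0] = 1.0                                 # perturb one neuronal group
    for _ in range(steps):
        x = W @ x
    return int(np.count_nonzero(np.abs(x) > 1e-3))

print("ignited loops (gain 0.95):  ", units_affected(0.95), "units affected")
print("quiescent loops (gain 0.30):", units_affected(0.30), "units affected")
```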

 

“Unconscious aspects of mental activity, such as motor and cognitive routines, and so-called unconscious memories, intentions, and expectations, play a fundamental role in shaping and directing conscious experience. The dynamic core hypothesis is heuristically useful not only because it specifies the kinds of neural processes that underlie conscious experience, but because it provides a rationale for distinguishing these processes from those that remain unconscious. The neural processes involved in the regulation of blood pressure do not and cannot contribute to conscious experience – according to our hypothesis, they are not part of the dynamic core – an integrated process that is largely in the thalamocortical system – and they do not, by themselves, generate an integrated neural space of sufficient dimensionality and complexity; the circuits that regulate blood pressure constitute what is essentially a simple reflex arc.” –G.M. Edelman and G. Tononi [5]

 

“If neurons in layer V of a motor or premotor area that participate in the core can function as ‘ports out’ by activating motoneurons, why should the motoneurons with which they interact not be considered part of the core? The answer is simple. The functional interactions between neuronal groups in the dynamic core and motoneurons are exclusively one-way. Why are we ordinarily not conscious of the constantly shifting activity and of the millions of interactions that occur among neurons in the sensory periphery but are conscious of certain aspects of their outcome? For example, while we are aware of the colors of objects, we are not aware of the rapidly changing activity of various kinds of cones in our retinas. Unlike the firing of motoneurons, the firing of sensory neurons is clearly able to influence the dynamic core and to determine what we ultimately perceive. But if these processes can influence the core, why do they not become part of it? The answer, again, is simple. There is reason to believe that the connectivity among neurons in the retina and in other early stages of the visual system is such that much of the activity at these sites remains relatively insulated or local…According to this scheme, no information or trace of the activity occurring in the sensory periphery reaches the core above and beyond what is transmitted by ‘ports in’ of the core.” –G.M. Edelman and G. Tononi [5]

 

“The series of synaptic steps from the cortex to the basal ganglia to the thalamus and back to the cortex makes up a special kind of loop, quite unlike the reentrant loops among reciprocally connected groups of neurons that are characteristic of the thalamocortical system. First, the loops through the basal ganglia are long and include multiple synaptic steps, some of which are inhibitory. Second, these long loops are one-way, rather than two-way or reentrant. Third, the various long cortico-basal ganglia-cortico loops seem to be organized in parallel, having distinct areas of origin and termination in the cortex, as if they were meant to interact with each other as little as possible. Such a parallel organization contrasts sharply with the maze of connected reentrant circuits found in the thalamocortical system, where the architecture seems ideally suited to favor simultaneous interactions among thousands of distributed neuronal groups. Recording studies have provided exceptionally clear evidence that the resulting functional connectivity within the basal ganglia is radically different from that in the thalamocortical system. It appears that in the basal ganglia, different neurons are organized in parallel loops that are independent from each other and thus do not engage in the kind of cross talk one sees in the cortex. Thus, the long, one-way parallel loops found in the basal ganglia seem to be just the architecture one would envision to implement a variety of independent, unconscious neural routines and subroutines: They are triggered by the core at specific ‘ports out;’ they do their jobs rapidly and efficiently but in a local, functionally insulated way; and at ‘ports in,’ they provide the core with the results of their activation.” –G.M. Edelman and G. Tononi [5]

 

“The idea that the parallel loops through the basal ganglia (and through the cerebellum) may be involved in setting up and executing neural routines is not novel, and is well established in the case of the basal ganglia. What is novel is the suggestion that the type of connectivity typical of basal ganglia and similar loops – which are functionally insulated and connected to the dynamic core only at ports out and in – may be the key to why such routines are unconscious. This suggestion would have several important implications. For example, it is becoming increasingly clear that loops through the basal ganglia are not only involved in motor routines, but that depending on which part of the cortex they are from, they may be involved in various kinds of cognitive activities.” –G.M. Edelman and G. Tononi [5]

 

“The processes leading to the linkage and nesting of otherwise independent elementary routines within entire motor or cognitive sequences (the elements being single sweeps through a loop in the basal ganglia) could occur by setting up such links among loops at the level of the basal ganglia themselves, or more likely, at the level of the cortex. Because of its enormous associative capabilities, the dynamic core would be in an ideal position to link or hierarchically organize a series of preexisting unconscious routines into a particular sequence… In a full description, one must conceive of the mutual functional interactions of all cortical appendage structures (basal ganglia, cerebellum, etc.) in the ongoing relations between conscious and unconscious activity.” –G.M. Edelman and G. Tononi [5]

 

“Global mappings are activated when the core links, through its ports, a series of unconscious routines implemented by cortical appendages into higher-order entities that subserve an integrated sequence of sensorimotor actions. According to this view, our cognitive life is typically constituted by an ongoing sequence of core states that trigger certain unconscious routines, which in turn trigger certain other core states and so on in a series of cycles, with core states also modified by sensory input (acting on other ports in), as well as partly by the intrinsic dynamics of the core itself. This view has several implications related to processes of conscious learning. Before behavior or thought becomes automatized and unconscious, there is a phase of conscious control in which behavioral or cognitive fragments are first painstakingly performed one by one and then linked until a single ‘chunk’ of automated behavior can be flawlessly and effortlessly executed.” –G.M. Edelman and G. Tononi [5]

 

“Evidence indicates that the neural selection leading to the functional insulation of dedicated loops and circuits occurs through the reinforcement of basal ganglia circuits brought about by the firing of value systems, especially the so-called dopaminergic system (named after the neuromodulator dopamine that it releases). Such systems have been shown to fire whenever behavior is being reinforced by a reward and to stop firing once the behavior has been acquired. The firing of the dopaminergic system and of other value systems that innervate both the basal ganglia and the cortical regions to which they project may also be the key mechanism for strengthening synaptic linkages that are forged between different routines through the mediation of the core. In this way, global mappings that are related to a particular task can be constructed or linked during consciously guided learning until a smooth, apparently effortless sensorimotor loop is executed speedily, reliably, and unconsciously.” –G.M. Edelman and G. Tononi [5]
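
The firing profile described here, a burst while a behavior is still being reinforced and silence once it is acquired, matches what reinforcement-learning models call a reward prediction error. A minimal sketch of that interpretation (this is the standard temporal-difference caricature, not Edelman and Tononi's own formalism; the learning rate and reward value are arbitrary):

```python
# A value signal is updated trial by trial; the teaching term `delta`
# is large while the reward is unexpected and decays toward zero as
# the behavior is acquired, like the dopaminergic firing described.
value, alpha, reward = 0.0, 0.3, 1.0   # illustrative parameters

for trial in range(8):
    delta = reward - value             # prediction error ("burst" size)
    value += alpha * delta             # reinforcement of the routine
    print(f"trial {trial}: delta = {delta:.3f}")
```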

 

“An interesting possibility would be that in people with hysterical blindness, a small functional cluster that includes certain visual areas is autonomously active, may not fuse with the dominant functional cluster, but is still capable of accessing motor routines in the basal ganglia and elsewhere. Something of this sort is clearly going on in people with split brains, in whom at least two functional clusters appear to coexist in the same brains because of the callosal disconnection.” –G.M. Edelman and G. Tononi [5]

 

“According to the theory of neuronal group selection (TNGS), the repertoires of different brain areas, operating according to selection, are sufficiently plastic to adapt somatically to a wider range of bodily phenotypic changes, such as the emergence of a supralaryngeal space. This plasticity relieves us of the genetic and evolutionary dilemma of requiring simultaneous correlated mutations that are reflected both in the altered body parts and in corresponding altered neuronal mappings. Of course, subsequent to the somatic adjustment of the brain to a mutation affecting the body, later mutations in neurally significant genes could then accumulate evolutionarily to the advantage of the organism.” –G.M. Edelman and G. Tononi [5]

 

“We have made the case that a remarkable property of conscious states is their informativeness: The occurrence of a particular conscious state rules out, in a fraction of a second, an enormous number of possibilities. This ruling out process represents the integration of an extraordinary amount of information in a short time. This capability has not been matched by any of our inventions, including the computer. It certainly did not spring forth without evolutionary precedent. Rather, it arose from structures and systems that were reconfigured over millions of years as a result of natural selection. Indeed, a defensible case can be made that conscious brains are the most creative sources of information in all of nature. But just as there are different phenotypes, there are different sources and types of information…It seems clear that systems that are capable of actually processing information appeared for the first time as a result of natural selection. In this view, the presence of heritability, variation, and selection are critical factors in the emergence of information. This emergence involves a kind of matching and stabilization of responses to environmental states. But although heritable processes are concerned with the transition to the living from the nonliving, they still do not involve a sentient observer…It is noteworthy that the selective events that led to the genetic code followed a different set of rules than the laws of chemistry and physics that govern the covalent bonding of nucleic acids. For a set of Darwinian rules to apply certainly required the existence of stable covalent chemical bonds to ensure growth of nucleic acid polymers, that such polymers could be replicated, and that mutations could occur. But the ingredient that supervened over the laws of chemistry and physics was that selection for fitness in the phenotype could stabilize some DNA and RNA sequences over others. Such code sequences represent historical residues of genetically irreversible selection events that acted on whole organisms at a much higher level of organization than DNA itself. So the actual nucleotide sequences of genes reflect historical events, as well as chemical laws, and both together ultimately constrained how processing of information eventually arose in nature.” –G.M. Edelman and G. Tononi [5]

 

“The ability to repeat a performance with variation under changing contexts actually first appeared with the emergence of life, of self-replicating systems under natural selection. The continued action of natural selection during evolution then gave rise to a variety of systems for which memory was critical, each having different structures within a given animal species. Examples of such structures range from the immune system to reflexes and, finally, to consciousness. In this view, there are as many memory systems as there are systems capable of autocorrelation with their previous states over time, whether such systems are constituted by DNA itself or by the phenotype it constrains. It is morphology that underlies the particular properties of any memory system. Memory itself is a system property that allows the binding in time of selected characteristics having adaptive value. Memory in selectional systems, from the genetic code to consciousness, may be seen as the great binding principle in the biological domain. Calling any manifestation of biological order or memory ‘information’ may not be as useful as requiring that some symbolic exchange or, at least, signification must be involved in actual informational transactions.” –G.M. Edelman and G. Tononi [5]

 

“If the brain is not a Turing machine, we need another explanation for its workings. That explanation is provided by the theory of neuronal group selection (TNGS). As we have shown, a series of simulations based on TNGS can actually carry out pattern recognition and perceptual categorization. A large body of disparate experimental evidence not only indicates that selectional events occur in the brain, but suggests that much of that apparently disparate evidence can actually be reconciled by an analysis of such events. We have taken it as established that after the brain arose in evolution by natural selection, which set up value constraints and major structures, each individual brain operates by a process of somatic selection. Instead of being guided mainly by a set of effective procedures or instructions, it is governed by a degenerate set of effective structures, the dynamics of which allow its correlated activities to arise by selection, rather than by the rules of logic. Clearly, if the brain evolved in such a fashion, and this evolution provided the biological basis for the eventual discovery and refinement of logical systems in human culture, then we may conclude that, in the generative sense, selection is more powerful than logic. It is selection – natural and somatic – that gave rise to language and to metaphor, and it is selection, not logic, that underlies pattern recognition and thinking in metaphorical terms. Thought is thus ultimately based on our bodily interactions and structure, and its powers are therefore limited to some degree. Our capacity for pattern recognition may nevertheless exceed the power to prove propositions by logical means. Indeed, conscious human thought can create new axioms, which a computer cannot do. This realization does not imply that selection can take the place of logic, nor does it deny the enormous power of logical operations…We must be skeptical about extreme reductionist accounts that attempt to explain consciousness on the basis of quantum mechanics but ignore the facts of evolution and neurology.” –G.M. Edelman and G. Tononi [5]

 

“Most systems of the brain are plastic, that is, modifiable by experience, which means that the synapses involved are changed by experience. But, as the fear example shows, learning is not the function that those systems originally were designed to perform. They were built instead to accomplish certain tasks (like detecting danger, finding food and mates, hearing sounds, or moving a limb toward some desired object). Learning (synaptic plasticity) is just a feature that helps them do their job better.

Plasticity in all the brain's systems is an innately determined characteristic. This may sound like a nature-nurture contradiction, but it is not. An innate capacity for synapses to record and store information is what allows systems to encode experiences. If the synapses of a particular brain system cannot change, this system will not have the ability to be modified by experience and to maintain the modified state. As a result, the organism will not be able to learn and remember through the functioning of that system. All learning, in other words, depends on the operation of genetically programmed capacities to learn.” –J. LeDoux [5]

 

“My emphasis on the importance of synapses in brain function is not intended to minimize the role of other factors. For example, the rate at which a cell fires spontaneously is a function of certain electrical and chemical characteristics of the cell. These are called intrinsic properties to distinguish them from extrinsic influences from other cells mediated by synaptic transmission and modulation. A cell's intrinsic properties, which may have a strong genetic component, will greatly influence everything that cell does, including its participation in synaptic transmission. But because psychological and behavioral functions are mediated by aggregates of cells joined by synapses and working together rather than by individual neurons in isolation, the contribution of the intrinsic properties of a cell to mental life or behavior occurs only by way of the role of that cell in circuits. While synapses themselves don't account for everything the brain does, they do participate crucially in every act or thought that we have, and in every emotion we express and experience. Synapses are ultimately the key to the brain's many functions, and thus to the self.” –J. LeDoux [5]

 

“Three of the main tenets of neural selectionism are described by the terms exuberance (more synapses are made than are kept), use (the synapses that are kept are the ones that are active), and subtraction (connections not used are eliminated). Building the nervous system in this fashion is thought to provide a means of coping with the paucity of information available to the brain from other sources, such as the outside world, during early development. If the selectionists are right, the connections we end up with as adults are those that were not subtracted – the self, they would say, is not constructed, it is selected from preexisting possibilities…It is probably best to think of instruction and selection as two complementary means by which circuits can be constructed rather than as mutually exclusive theories of brain development.” –J. LeDoux [5]
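
Read as an algorithm, the three tenets amount to: overproduce, tally activity, delete the idle. A toy sketch under invented numbers (the synapse count and activity distribution are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n_synapses = 1000                                   # exuberance: overproduction
activity = rng.poisson(lam=2.0, size=n_synapses)    # use: activity tallied over development
kept = activity > 0                                 # subtraction: silent synapses eliminated
print(f"{kept.sum()} of {n_synapses} synapses survive selection")
```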

 

“Learning and development are two sides of the same coin. We can't learn until we have synapses. And as soon as synapses start forming on the basis of intrinsic commands, they are subject to being influenced by our worldly experiences. Genes, environment, selection, instruction, learning – these all contribute to the building of the brain and the shaping of the developing self by wiring synapses. Although the extensive plasticity that is present in early life eventually stops, our synapses do not stop changing, but remain subtly changeable by experience.” –J. LeDoux [5]

 

“Underlying much of the work on long-term potentiation (LTP) is the assumption that LTP is not just a way to study how experience changes synapses, but is, in fact, the way that synapses are changed when we learn. The various findings indicating that the same molecules are involved in LTP and memory are consistent with the view that LTP occurs during learning, but this evidence is, as critics have noted, circumstantial. In an effort to silence these critics, LTP researchers have attempted to demonstrate that something like LTP does take place in the hippocampus during learning.” –J. LeDoux [5]

 

“It is now known that the postsynaptic cell does participate in associative classical conditioning.

Researchers argue that classical conditioning in the Aplysia does indeed involve Hebbian synaptic changes, and probably also involves NMDA receptors, in addition to an activity-dependent enhancement of presynaptic facilitation. One way of understanding this is that nonassociative (non-Hebbian) presynaptic facilitation functions as the presynaptic component of associative (Hebbian) plasticity. Considerable work has also been conducted on the molecular basis of classical conditioning in the Aplysia. While some of the early steps are different from those involved in sensitization, the long-term changes involve phosphorylation of CREB by protein kinases and synthesis of new proteins by CREB-activated genes. The establishment of long-term plasticity seems to follow a common pattern even when it is triggered by different kinds of short-term changes. The most obvious difference between associative conditioning in the Aplysia and plasticity in the mammalian brain is thus the importance of presynaptic plasticity in the Aplysia. The recent finding that NMDA-mediated plasticity in the postsynaptic cell is also significant in Aplysia plasticity helps close the gap. And the possibility that fear conditioning, amygdala LTP, and several forms of hippocampal plasticity might, like plasticity in Aplysia, involve presynaptic as well as postsynaptic changes further strengthens the notion that similar mechanisms are used to make memories in diverse species, suggesting that memory mechanisms are conserved across many levels of evolution.” –J. LeDoux [5]

 

“How does working memory function at the level of cells and synapses? While this process is still not fully understood, we can at least begin to piece together an explanation. The prefrontal cortex, like other areas of the neocortex, has six layers. And, as in other areas, the middle layers tend to receive inputs from other regions, while the deep layers tend to send outputs to other regions. So axons from other cortical areas, such as areas involved in ‘what’ and ‘where’ processing, form synapses on cells in the middle layers of prefrontal cortex. These input cells then send axons to cells in the deep layers, which give rise to connections that go back to the middle-layer cells, or to other cortical or subcortical areas, especially areas involved in the control of movement, and thus of behavioral responses. In this manner, the deep and middle layers can influence each other. In addition, though, input cells in the middle layers and output cells in the deep layers each give rise to local connections to other cells in the same layer. This arrangement allows the input cells to influence other input cells, and output cells to influence other output cells. Transmission of inputs to and outputs from the prefrontal cortex, and between cells within and between layers within the prefrontal cortex, is mediated by the binding of presynaptically released glutamate to postsynaptic receptors. Interestingly, extrinsic inputs to these circuits account for only a small part of the excitatory synaptic connectivity of the prefrontal cortex. The connections within the prefrontal cortex, both within and between layers, are far more numerous than the connections coming in from other areas, such as sensory processing regions. The mutual excitations mediated by the internal connections enable input signals from the outside to be amplified and kept active, and may well contribute to the sustained activity that has been observed during delay periods.” –J. LeDoux [5]
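
The closing claim, that mutual excitation among the far more numerous internal connections can amplify a transient input and keep it active through a delay, is easy to caricature with a small recurrent network. A sketch only: the network size, gain, and durations below are invented, and a rate model with tanh stands in for real circuitry.

```python
import numpy as np

n = 50
rng = np.random.default_rng(1)
W = rng.uniform(0.0, 1.0, size=(n, n))            # dense internal excitation
W *= 1.5 / np.max(np.abs(np.linalg.eigvals(W)))   # strong recurrent gain

r = np.zeros(n)                                   # firing rates
stimulus = rng.uniform(0.0, 1.0, size=n)
for t in range(200):
    drive = stimulus if t < 20 else 0.0           # extrinsic input is brief
    r = np.tanh(W @ r + drive)                    # mutual excitation sustains activity
print(round(float(np.mean(r)), 3))                # activity persists after input ends
```

Because the recurrent gain exceeds one and the nonlinearity saturates, the population settles into a self-sustaining state that outlives the stimulus, a minimal stand-in for delay-period activity.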

 

“The Seven Principles of Synaptic Self-Assembly: (1) Different systems experience the same world; (2) Synchrony coordinates parallel plasticity; (3) Parallel plasticity is also coordinated by modulatory systems; (4) Convergence zones integrate parallel plasticity; (5) Downwardly mobile thoughts coordinate parallel plasticity; (6) Emotional states monopolize brain resources; (7) Implicit and explicit aspects of self overlap, but not completely.” –J. LeDoux [5]

 

“A convergence zone (CZ) is a region that receives inputs from other brain regions and that integrates the information separately processed by the other regions. Important convergence zones are located in the prefrontal cortex. Once information is integrated, it can then be used to influence the activity of the input regions. These are examples of bottom-up and top-down processing. The ability of working memory to integrate information from various systems and hold that information temporarily for the purpose of performing mental operations (comparing, contrasting, recognizing) is a typical bottom-up process, and the ability of working memory to use the outcome of this processing to regulate what we attend to is a typical top-down or executive function…Many kinds of animals have multiple independent learning systems that can be coerced into learning simultaneously by modulatory chemicals and synchronous firing, but only some animals have convergence zones in their cortex. The cognitive sophistication of a mammalian species, in fact, is nicely predicted by the extent of convergence that occurs in its cortex – more is present in humans than in monkeys, for example, and more in monkeys than in rats. When plasticity occurs simultaneously in two regions that feed into a convergence zone, plasticity is also likely to occur in the convergence zone since it will be the recipient of the high level of activity that occurs when plasticity is being established in the individual regions. Obviously, synchrony and modulation also influence convergence zones, further increasing their potential to integrate information across systems.” –J. LeDoux [5]

 

“I believe the key to mental unity lies within the unique nature of the neural hierarchy and its relationship to consciousness…lower levels of the neural hierarchy, in contrast to the body in general, are not physically nested within higher levels. Thus the thalamus, which appears at a higher level, although it is reciprocally connected to the lower levels of the midbrain, is not physically composed of the midbrain in the same way that, say, the liver, a higher level, is composed of liver cells at a lower level of organization. And the cortex, although it is reciprocally connected to the thalamus, is not physically composed of the thalamus. These hierarchically arranged levels are physically distinct and not physically nested within each other. In contrast, when we consider the higher levels of the nested hierarchy, consciousness is unified since lower elements in the neural hierarchy are functionally nested, bound, and unified within the higher levels of the hierarchy…the structure of the nervous system, unlike the form of the body in general, is not physically a nested hierarchy; yet the brain’s function – its process – globally operates in a hierarchically nested and unified fashion…the ontological realities of these two viewpoints upon the same organ at the same time are non-reducible. These two views are scientifically compatible, but they are mutually irreducible…this is a feature of the brain that is entirely unique within nature.” –Feinberg [5]

 

“In contrast to the weak version of emergence, the most important aspect of the strong or “radical” form of emergence theory is the claim that there are emergent properties that cannot in principle ever be reduced to the component parts of the system. A theory of radical emergence when applied to the nervous system would claim that consciousness can never be reduced by virtue of being a radically emergent feature. The emergence hypothesis is appealing for several reasons. The integrated mind and the unified self entail the “highest” and most advanced forms of cognitive processing, and the phylogenetically most advanced regions of the nervous system. For example, the prefrontal cortices are situated “higher” or farther “downstream” on the neuroaxis when compared to other regions, such as the midbrain, that are situated at the earliest stages of perceptual processing…due to the hierarchical arrangement of neuronal processing streams, neurons located in anatomically “higher” positions within the neuroaxis possess “higher,” more abstract, and more integrated response characteristics than neurons positioned earlier (upstream) in perceptual pathways.” –Feinberg [5]

 

“When we perceive a stimulus that has a combination of sensory qualities, such as color, sound, and smell, it is clear that an enormous number of brain regions at a vast range of hierarchical levels must be involved, and it cannot be the case that cells that represent all these different qualities ultimately converge on any single neuron. There are simply too many perceptual qualities and too few neurons. This strategy may work for early stages of integration and highly specialized perceptual functions such as vision, but it cannot apply as a generalized mechanism for all instances of binding in consciousness. In fact, most neurons in these higher perceptual regions actually do not display the specificity of pontifical neurons. By some estimates, out of one billion medial temporal neurons that could potentially demonstrate such specific response properties, fewer than two million actually do so, and some of the neurons that do show some degree of specificity do not show complete response exclusivity.” –Feinberg [5]

 

“What is true of the unification of sensory perception also applies to the synthesis of intended actions, which poses an equally important problem for a theory of mental unity. When we act, we subjectively feel there is a single self that is the source of what we do. In spite of this subjective sense of unity, neuroanatomical analysis reveals that the motor system is composed of millions of individual neurons that act in a unified fashion to achieve specific goals. The control of action is not entirely at the highest, most explicitly conscious, levels of the neural hierarchy but is distributed across multiple hierarchical levels. While these neurons are organized to produce exquisitely integrated and unified actions, as with the visual system, there is no single region or point in space at the top of the motor hierarchy to explain this unity.” –Feinberg [5]

 

“The problem before us is this: in order to explain how the neural hierarchy operates, an adequate model must explain how the brain could simultaneously (1) be physically distributed across multiple connected but anatomically discrete levels; (2) allow for the creation of higher order integrated (whole) and abstract conscious awareness; and (3) ensure that both lower and higher levels of the hierarchy make a contribution to the entire mental experience.” –Feinberg [5]

 

“In a nested hierarchy the entities or holons at the higher levels of the hierarchy are physically composed of the entities at the lower levels. Although higher levels of the system constrain lower levels, this constraint does not emanate from any unified or centralized holon.” –Feinberg [5]

 

“One of the defining features of a hierarchy is that, as a system of communication, holons with slow behavior occupy the topmost echelons of the hierarchy, while entities with faster behavior are at the lower levels. Because higher level entities function at slower frequencies than lower ones, they serve as the relatively invariant context, or the functional higher order background, for lower level entities. It takes a long time for the state of the entire organism to affect a single cell of the body…In a nested hierarchy, the relative invariance of higher levels is one way that higher levels constrain lower levels over relatively long time frames.” –Feinberg [5]
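
The timescale argument can be sketched with two coupled variables: the slow, "higher" variable barely moves over the window in which the fast, "lower" variable equilibrates, so it acts as a quasi-invariant context. The time constants below are invented for illustration:

```python
# Two coupled variables with very different time constants.
dt, tau_slow, tau_fast = 0.01, 50.0, 0.1    # illustrative time constants
slow, fast = 1.0, 0.0
for _ in range(1000):                       # 10 time units of simulation
    slow += dt / tau_slow * (0.5 - slow)    # drifts slowly toward 0.5
    fast += dt / tau_fast * (slow - fast)   # rapidly tracks the slow context
print(round(slow, 3), round(fast, 3))       # fast ~ slow; slow has barely moved
```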

 

“The simplest natural hierarchy that can be analyzed is an organism comprised of a single cell or a collection of cells lacking a nervous system [the soma minus the brain], from an amoeba or paramecium to a single liver cell within the human body to an entire organ. These are all examples of nested hierarchies, where all the subcellular and cellular constituents of the cell contribute to the various life-sustaining processes of the organism, but the “life” of the entire organism is a collective or “emergent” property of the entire system. In a simple organism without a nervous system, there is no collective top-down constraint, nor is there a centralization of the action of all the parts. This does not mean the parts do not interact, nor does it mean there is not constraint of the entire system upon its parts, but rather that there is no single, unified, or overall controlling entity at the top of the hierarchy…this is constraint without convergence, and without convergence, there can be no central representation, self or consciousness.” –Feinberg [5]

 

“The neural cortex is physically composed of its constituent neurons, and like other less complex systems in the body, these neural systems can operate reflexively but lack consciousness [reflexes without consciousness]. When the reflex and the pathways and centers that create it are considered as a system, we see that while the lower order elements in the reflex – retina, optic nerve, midbrain nuclei, and cranial nerve – are not physically nested within each other, they are functionally nested within the overall reflexive system. Further, while higher and lower levels of the neuroaxis operate in conjunction to create the overall reflex, one critical distinction of this level of the hierarchy is that centralization of levels is not necessary for reflexes to operate…while higher level brain damage may influence the force of a lower level knee jerk reflex, even in the absence of the brain these lower level spinal reflexes can operate…while parts of the reflex may display functionally nested properties, there is no centralization of levels, and therefore there can be no central representation of levels, and hence no consciousness or self.” –Feinberg [5]

 

“When we analyze the functions of the nervous system during behaviors that entail consciousness [central nervous system with consciousness], there is a pattern of centralization of the activity of the entire system that is characteristic of a non-nested hierarchy. All lower level information is conveyed ultimately to the highest levels of the organization, and the highest levels of the hierarchy have more abstract and integrated response characteristics relative to lower levels, which have more specific and narrow response characteristics, as do neurons positioned earlier (upstream) in perceptual pathways. This process of topical convergence can produce “higher order” cells that possess amazingly abstract and specific response properties, typical of a non-nested hierarchy, where a “grandmother” cell is activated by a given higher level stimulus property, such as a face. However, this same system also demonstrates at the same time patterns that are characteristic of nested hierarchies: the conscious representation of a face requires contributions from diverse and widely separated brain regions at upper and lower levels of the system; all lower order elements that make up total awareness of the face make a contribution to consciousness, or lower-order features combine in consciousness as nested within higher order features….All of these various and anatomically distributed representations are bound together to create the entire conscious experience. Therefore, to say one element is “bound” to another is simply another way of saying that they are represented in awareness dependently and are nested together. Lower order elements are more tightly bound to each other than higher order elements. At the highest levels of the hierarchy, color, shape, and movement are nested together within the image of the apple, and this image is in turn nested within the entire scene (apple falling from a tree)…What enters into awareness ultimately is the entire nested experience, with all aspects nested within consciousness in spite of the widely distributed neural substrate of experience. If consciousness were not organized as a nested hierarchy, the unified aspects of awareness would not be possible…Therefore, where any conscious and voluntary system is concerned, the hierarchy displays features of both non-nested and nested hierarchies, and in this regard appears unique among biological and non-biological systems. Only the nervous system operates such that it simultaneously functions in both a nested and non-nested fashion at both the neurocellular and global levels.” –Feinberg [5]

 

“In one study, cats with strabismus and binocular rivalry showed that when one or the other eye was responsible for the information that reached awareness (the dominant eye) the neurons in the early stages of visual processing fired in synchrony. At the same time, the neurons responsive to the suppressed eye that did not reach awareness showed no temporally correlated firing. This correlation between synchronized temporal firing and conscious awareness has been demonstrated in numerous investigations across various species and in motor and memory systems in addition to all sensory systems. These researchers do not claim that synchrony is the only mechanism for creating consciousness, but these data indicate that temporal synchrony plays an important role. With the nested model I propose, temporal synchrony could provide the constraint necessary for perceptual binding and serve as an essential mechanism for structuring the hierarchical arrangements of consciousness and self.” –Feinberg [5]
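
The measurement behind such studies can be sketched as zero-lag correlation between binned spike trains: a pair driven by shared input shows high correlation, an independent pair does not. The firing probabilities and bin counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
drive = rng.random(1000) < 0.2                 # shared drive produces synchrony
a = drive & (rng.random(1000) < 0.9)           # two neurons sharing the drive
b = drive & (rng.random(1000) < 0.9)
c = rng.random(1000) < 0.2                     # an independent neuron
print(round(np.corrcoef(a, b)[0, 1], 2))       # "dominant eye" pair: high
print(round(np.corrcoef(a, c)[0, 1], 2))       # unsynchronized pair: near zero
```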

 

“Once a central nervous system with consciousness has been achieved, the fundamental parameters of consciousness change in degree but do not change in kind. The highest level of self attained – the self that we think of as the human self – is characterized by higher degrees of meaning and purpose that can only be achieved by the increasingly complex hierarchy of the human nervous system [brain with consciousness and self-awareness]. In perception, the highest level of meaning provides the hierarchical constraint for billions of neurons involved in human perception, and the highest levels of purpose provide the constraint for the most complex human actions. These two elements – meaning and purpose – are both made possible by the hierarchical organization of the brain.” –Feinberg [5]

 

“Problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain’s free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. Many apparently unrelated anatomical, physiological and psychophysical attributes of the brain [can be combined] within a single theoretical perspective.” –K. Friston [23]
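
For orientation, the quantity Friston refers to can be written in its standard variational decomposition (the notation is assumed here for exposition, not quoted from [23]): for sensory input $y$, generative model $m$, and a recognition density $q(\vartheta)$ over the causes $\vartheta$,

$$ F = -\ln p(y \mid m) + D_{\mathrm{KL}}\big[\, q(\vartheta) \;\|\; p(\vartheta \mid y, m) \,\big] $$

Since the divergence term is non-negative, $F$ upper-bounds surprise. Minimizing $F$ with respect to $q$ (fast neuronal dynamics) drives the divergence toward zero and so implements perceptual inference, while minimizing the average of $F$ over stimuli with respect to the model's parameters (synaptic efficacies) implements perceptual learning, exactly the two problems the quote says are resolved by the same principle.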

 

“It should be noted that the hierarchical ordering of areas is a matter of debate and may be indeterminate. Based on computational neuroanatomic studies, the laminar hierarchical constraints presently available in the anatomical literature are ‘insufficient to constrain a unique ordering’ for any of the sensory systems analyzed. However, basic hierarchical principles were evident. ‘All the cortical systems studied displayed a significant degree of hierarchical organization’ with the visual and somatomotor systems showing an organization that was ‘surprisingly strictly hierarchical’.

In the post-developmental period, synaptic plasticity is an important functional attribute of connections in the brain and is thought to subserve perceptual and procedural learning and memory. This is a large and fascinating field that ranges from molecules to maps. Changing the strength of connections between neurons is widely assumed to be the mechanism by which memory traces are encoded and stored in the central nervous system. In its most general form, the synaptic plasticity and memory hypothesis states that, ‘Activity-dependent synaptic plasticity is induced at appropriate synapses during memory formation and is both necessary and sufficient for the information storage underlying the type of memory mediated by the brain area in which that plasticity is observed’. A key aspect of this plasticity is that it is generally associative. Synaptic plasticity may be transient (e.g. short-term potentiation or depression) or enduring (e.g. long-term potentiation or depression) with many different time constants. In contrast to short-term plasticity, long-term changes rely on protein synthesis, synaptic remodeling and infrastructural changes in cell processes (e.g. terminal arbours or dendritic spines) that are mediated by calcium-dependent mechanisms. An important aspect of NMDA receptors, in the induction of long-term potentiation, is that they confer associativity on changes in connection strength. This is because their voltage-sensitivity allows calcium ions to enter the cell when, and only when, there is conjoint pre-synaptic release of glutamate and sufficient post-synaptic depolarization (i.e. the temporal association of pre- and post-synaptic events). Calcium entry renders the post-synaptic specialization eligible for future potentiation by promoting the formation of synaptic ‘tags’ and other calcium-dependent intracellular mechanisms. In summary, the anatomy and physiology of corticocortical connections suggest that forward connections are driving and commit cells to a prespecified response given the appropriate pattern of inputs. Backward connections, on the other hand, are less topographic and are in a position to modulate the responses of lower areas. Modulatory effects imply the postsynaptic response evoked by presynaptic input is modulated by, or interacts in a nonlinear way with, another input. This interaction depends on nonlinear synaptic or dendritic mechanisms. Finally, brain connections are not static but are changing at the synaptic level all the time.” –K. Friston [23]
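
The associativity described, calcium entry only when presynaptic glutamate release coincides with sufficient postsynaptic depolarization, is the conjunction at the heart of Hebbian rules. A toy sketch of that gate (the function name, threshold, and learning rate are invented for illustration):

```python
import numpy as np

def nmda_like_update(w, pre_spike, post_v, lr=0.01, v_gate=-50.0):
    """Toy associative update: weights change only when presynaptic
    activity (glutamate release) coincides with postsynaptic
    depolarization above the voltage gate, mimicking the NMDA
    receptor's conjunction requirement. All values are illustrative."""
    unblocked = post_v > v_gate            # voltage-dependent "Mg2+ unblock"
    return w + lr * float(unblocked) * pre_spike

w = np.array([0.5, 0.5])
w = nmda_like_update(w, pre_spike=np.array([1, 0]), post_v=-45.0)
print(w)   # only the co-active synapse on the depolarized cell potentiates
```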

 

“A key architectural principle of the brain is its hierarchical organization. This has been established most thoroughly in the visual system, where lower (primary) areas receive sensory input and higher areas adopt a multimodal or associational role. The neurobiological notion of a hierarchy rests upon the distinction between forward and backward connections. This distinction is based upon the specificity of cortical layers that are the predominant sources and origins of extrinsic connections (extrinsic connections couple remote cortical regions, whereas intrinsic connections are confined to the cortical sheet). Forward connections arise largely in superficial pyramidal cells, in supra-granular layers and terminate on spiny stellate cells of layer four in higher cortical areas. Conversely, backward connections arise largely from deep pyramidal cells in infra-granular layers and target cells in the infra and supra-granular layers of lower cortical areas. Intrinsic connections mediate lateral interactions between neurons that are a few millimetres away. There is a key functional asymmetry between forward and backward connections that renders backward connections more modulatory or nonlinear in their effects on neuronal responses. This is consistent with the deployment of voltage-sensitive NMDA receptors in the supra-granular layers that are targeted by backward connections. Typically, the synaptic dynamics of backward connections have slower time constants. This has led to the notion that forward connections are driving and elicit an obligatory response in higher levels, whereas backward connections have both driving and modulatory effects and operate over larger spatial and temporal scales. The hierarchical structure of the brain speaks to hierarchical models of sensory input.” –K. Friston [24]
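
One common way to cash out this forward/backward asymmetry computationally is predictive coding: forward connections carry error signals that drive higher units, and backward connections carry predictions that shape lower-level responses. A minimal sketch of that reading (a caricature, not Friston's full scheme; the sizes, weights, and step size are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
W_fwd = rng.normal(scale=0.1, size=(8, 16))   # lower -> higher: driving
W_bwd = W_fwd.T                               # higher -> lower: predictive

sensory = rng.normal(size=16)                 # input at the lower level
x_high = np.zeros(8)                          # higher-level representation

print(round(float(np.linalg.norm(sensory - W_bwd @ x_high)), 3))  # initial residual
for _ in range(200):
    prediction = W_bwd @ x_high               # backward: top-down prediction
    error = sensory - prediction              # residual at the lower level
    x_high += 0.5 * (W_fwd @ error)           # forward: error drives higher units
print(round(float(np.linalg.norm(sensory - W_bwd @ x_high)), 3))  # residual shrinks
```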

 

“We have laid out the neurobiological and psychophysical motivation for the theoretical treatment [involving hierarchical dynamic models of the brain and dynamic expectation maximization methods that use free and fixed-form approximations to the posterior or conditional density, which now] calls for an enormous amount of empirical verification and hypothesis testing, not least to disambiguate among alternative theories and architectures. We have shown that the brain has evolved the necessary anatomical and physiological equipment to implement this inversion, given sensory data.” –K. Friston [24]

 

“Understanding the computational and information processing roles of cortical circuitry is one of the outstanding problems in neuroscience. There still remains a gap between our understanding of learning and inference in hierarchical Bayesian models and our understanding of how it is implemented in cortical circuits. In a recent review, Hegde and Felleman pointed out that the ‘Bayesian framework is not yet a neural model. [The Bayesian] framework currently helps explain the computations that underlie various brain functions, but not how the brain implements these computations.’ We propose to fill this gap by deriving a computational model for cortical circuits based on the mathematics of Bayesian belief propagation in the context of a particular Bayesian framework called Hierarchical Temporal Memory (HTM).” –D. George and J. Hawkins [25]
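
The elementary operation George and Hawkins propose to map onto cortical circuits is Bayesian belief propagation; at its smallest, a bottom-up evidence message combined with a top-down prior. A two-node sketch of that elementary step (the probability tables are invented, and a full HTM involves much more than this):

```python
import numpy as np

# Hidden cause -> observation, the smallest belief-propagation problem.
prior = np.array([0.7, 0.3])               # P(cause): top-down expectation
likelihood = np.array([[0.9, 0.1],         # P(obs | cause = 0)
                       [0.2, 0.8]])        # P(obs | cause = 1)

obs = 1                                    # the evidence actually observed
message_up = likelihood[:, obs]            # bottom-up message from the data
belief = prior * message_up                # combine with the top-down prior
belief /= belief.sum()
print(np.round(belief, 3))                 # posterior belief over the cause
```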

 

“Halfway through my thesis project, the American cognitive scientist Marvin Minsky published his famous treatise ‘The Society of Mind.’ Eagerly anticipating that the leading figure in the field of artificial intelligence had found the ultimate answer, I dove into the book, only to discover that what I sought could not be found there either. Minsky did not seem to care much about real brains, just about the higher-order operational processes that may take place inside them.” –M. Nicolelis [26]

 

“Global brain dynamics define only one component of the brain’s own point of view. Embedded in these internal dynamics are a heap of memories, accumulated throughout the animal’s previous life experience. This mnemonic information also contributes to the spatiotemporal collision of incoming peripheral signals and internal dynamic states…the animal’s mnemonic existence is felt even before the incoming sensory signal hits the forebrain, by dictating the generation of an anticipatory signal across the cortex and likely most of the forebrain. This electrical distortion, which likely includes components of the spatiotemporal motor signals created by the animal as it explores its environment, may account for the fact that neuronal activity is modulated across all layers of the rat S1 cortex, as well as the somatosensory thalamic nuclei, several hundred milliseconds prior to the moment when the animal touches any object with its whiskers. Under the influence of this “expectation” signal, which can either increase or decrease the firing rate of single neurons across a population, the internal brain state is pre-adjusted, creating the brain’s initial model of the external world.” –M. Nicolelis [26]

 

“Not only is there a maximum limit of firing that an ensemble of neurons can reach, but the global ensemble firing rate tends to stay constant, hovering around a mean, due to a variety of compensatory mechanisms that create a stable equilibrium. If single or multiple cortical neurons increase their firing rate instantaneously, an equivalent mirror image reduction in firing is soon produced by other members of the neural ensemble so that the overall energy budget of the brain stays constant over the long run…The mechanisms that maintain internal brain homeostasis, particularly energy consumption, may dictate the limits of complex information processing by the brain.” –M. Nicolelis [26]
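
The compensatory picture Nicolelis describes can be sketched as multiplicative global scaling toward a set point, in the spirit of synaptic scaling models (this mechanism and all numbers below are illustrative assumptions, not taken from [26]):

```python
import numpy as np

rng = np.random.default_rng(5)
rates = rng.uniform(5, 15, size=100)       # ensemble firing rates (Hz)
set_point = rates.mean()                   # homeostatic target

rates[:10] *= 3.0                          # a subset suddenly fires harder
for _ in range(20):                        # global compensatory scaling
    rates *= 1 + 0.1 * (set_point - rates.mean()) / set_point
print(round(rates.mean(), 2), round(set_point, 2))  # mean pulled back to target
```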

 

“No one precise spatial location in the brain – at least in the subdivision called the diencephalon – from which the data were sampled defined the brain’s global internal state of dynamics. Ultimately, the wake-sleep cycle can be retrieved from any of the diencephalic structures. And by combining multiple brain structures together, the resolution of the final image increases significantly, appearing almost like a hologram, much as the distinguished neurosurgeon and neurophysiologist Karl Pribram, a student of Karl Lashley, imagined it a few decades ago.” –M. Nicolelis [26]

 

“Over the past half century, multiple laboratories have found evidence that the transitions between different states in the wake-sleep cycle are determined by the interplay of a series of modulator neurotransmitters. These chemicals, which include acetylcholine, noradrenaline, serotonin, dopamine, and, most likely, GABA, are produced by clusters of neurons located in a variety of subcortical structures. By means of their widespread axonal projections, these neurons deliver the modulator neurotransmitters throughout the brain. Most studies have so far focused on the role of these chemicals in determining a particular state of sleep or wakefulness, but I believe it is the collective action of these modulatory structures that triggers the shift from one brain state to another, leading to an equivalent change in animal behavior…In this way, [brain state-space maps are] a condensed depiction of global brain dynamics and how these dynamics are determined by the energy available to the brain. From moment to moment, a subject’s global brain dynamics varies as it responds to the neuromodulatory influences that ‘push’ the generation of continuous electrical activity among billions of interconnected neurons. The brain, however, can only move from one stable dynamic state to another equally stable state, assuming that the cortical circuits can together reach the necessary energy threshold to do so. This happens when the forebrain as a whole attains a high level of coherent and synchronous neuronal activity.” –M. Nicolelis [26]

 

“More proof that the brain does not respect the borders created by cortical localizationalists comes from repeated independent observations of cross-modal processing in primary sensory fields – in stark contrast to classical hierarchical doctrine which states that cross-modal processing should only take place in so-called higher-order associative areas in the cortex. In the mid-1990s, instances of cross-modal processing in the visual cortex began to be reported among patients suffering from definitive visual deficits (congenital or acquired blindness) or people submitted to temporary visual deprivation during experiments…positron emission tomography (PET) demonstrated that both the primary and secondary visual cortex were strongly activated in people who, after becoming blind in early life, had become proficient Braille readers, when they performed tasks that required fine tactile discrimination…transcranial magnetic stimulation (TMS) to “disrupt” activity in the V1 cortex while the blind person was presented with Braille letters or embossed Roman numerals to read [resulted] in the subjects committing significant discrimination errors – even though the task involved tactile information. In subjects with no visual impairment, TMS targeted at the V1 led to problems with visual recognition of the letters, but had no effect on the ability to discriminate between different tactile information. This suggested that the enhanced ability by blind individuals in tactile discrimination tasks, such as Braille reading, may emerge because the visual cortex is recruited to help out – what is called ‘cross-modal recruitment.’” –M. Nicolelis [26]

 

“…the cytoarchitectonic divisions of the brain perpetuated by Brodmann and generations of localizationalists, have claimed that rigid anatomical and functional borders exist within the cortex…such a functional model of the brain, devoid of time and internal dynamic brain states, served us well for a century. Lately it has become a major hindrance for progress in our thinking about how the cortex processes information…” –M. Nicolelis [26]

 

“Today we know that speech production depends heavily on the concurrent interaction of a multitude of cortical and subcortical brain regions. The reason cerebral strokes, like the one documented by Broca, produce aphasia is likely that they destroy, in addition to the gray matter, huge portions of underlying white matter, which contains dense packs of the nerve fibers that connect this huge network of areas with the frontal lobe. Such a massive destruction of key communication cables amounts to a catastrophic functional disconnect of this speech production network…Broca’s ghost and its never-ending haunting of distributionists can be put to rest.” –M. Nicolelis [26]

 

“The combined evidence for cross-modal processing and the effects of internal brain states mounts a fatal challenge to the idea that the cortex is rigidly divided into functionally specialized areas and that distinct cortical regions are purely unimodal.” –M. Nicolelis [26]

 

“I propose that the brain is more akin to a medium in which neuronal space and time fuse into a physiological space-time continuum, which can be recruited in a variety of ways to perform all the tasks assigned to it. Depending on the status of the peripheral sensory organs, the task demands, and the brain-state context in which behaviors have to be produced, this physiological space-time manifold can be dynamically twisted, bent, and shaped in an optimal processing configuration that, at any given moment, endows us with our best neuronal shot to achieve goal-oriented behaviors. This notion of a cortical neuronal space-time continuum is totally compatible with the existence of ripples of probabilistic regional functional specialization. Yet, in this new conception such ripples are neither absolute nor immutable for the duration of one’s life. Instead, they can shift quickly, according to the task at hand…Essentially, the cortex should cease to be treated as a hierarchical mosaic of discrete, segregated, highly specialized, and virtually autonomous cortical areas.” –M. Nicolelis [26]

 

“Unlike previous incarnations – notably Karl Lashley’s equipotentiality theory – my concept of a neuronal space-time continuum has no qualms in accepting that there is some degree of cortical specialization, dictated mainly by the general strategy through which the cortex and thalamocortical connections were laid down during early postnatal development. But development is not destiny, and populations of neurons can be recruited as needed once the initial cortical layout is set down. Those ontogenetic specializations, like a featured soloist, sit atop a powerful symphony of multimodal and dynamic cortical interactions that dictate how a brain works throughout its unique existence.” –M. Nicolelis [26]

 

“After twenty-five years of seeing, listening, and recording brain [dynamics], waves of cortical spikes do not appear to stop at, or care about, the aesthetically pleasing borders of old-fashioned cytoarchitectonics. Instead, they simply pass through them, as if those borders were mere fantasies created in someone else’s brain.” –M. Nicolelis [26]

 

“In the next two decades, brain-machine interfaces (BMIs), built by connecting large chunks of our brains through a bidirectional link, may be able to restore aspects of humanity to those who have succumbed to devastating neurological diseases. Possibly within a decade or two, BMIs will likely begin to restore neurological function to the millions of people who can no longer hear, see, touch, grasp, walk or talk by themselves.” –M. Nicolelis [26]

 

“The BMI-controlled exoskeleton will require a new generation of high-density microelectrode cubes that can be safely implanted in the human brain and provide reliable, long-term simultaneous recordings of the electrical activity of tens of thousands of neurons, distributed across multiple brain locations.” –M. Nicolelis [26]

 

“Based on our lab experiments with BMIs, I expect that after a few weeks of interaction, the patient’s brain will completely incorporate, via a process of experience-dependent plasticity, the entire exoskeleton as a true extension of the person’s body image. At that point, the patient will be able to use the BMI-controlled exoskeleton to move freely and autonomously around the world.” –M. Nicolelis [26]

 

“I fully expect that BMI research will help to elucidate how the neuronal space-time continuum forms and operates, in a tight and cohesive way, throughout the course of our lives. To some degree this issue pertains to the famous binding problem, a conundrum that has been haunting neuroscientists for quite some time now. By simply changing the frame of reference from incoming stimuli, generated by the outside world, to the vantage of the brain’s own point of view, the binding problem might disappear altogether, since in a “relativistic” brain, there is no need to bind to anything, because no incoming stimulus was broken into discrete sensory bits of information to begin with. In a relativistic brain there is simply a single dynamic model of the world that is continuously refreshed by the constant collisions between the brain’s internal dynamics and the matching and nonmatching information sensed by the body’s periphery.” –M. Nicolelis [26]

 

“[Resolving] the binding problem may offer a potentially viable truce for the intellectual war waged between the localizationalist and distributionist camps of cortical physiology…strict, discrete localization of functions in the cortex, as well as pure unimodal cortical representations, dominate the early development stages of the cortex, most likely because this is when the brain’s connectivity is consolidated and the central nervous system gingerly crafts its internal models of reality. That gradual building up of the simulator and its models may account for the relatively long developmental periods of human childhood and adolescence. Indeed, this may explain why it takes several years for children to become capable of merging multimodal information describing the same object, such as a sound associated with their native language and the image of a corresponding letter or number.” –M. Nicolelis [26]

 

“Given the importance of many of the assumptions that enter into graph descriptions and analyses, we need to gain a better understanding of the nature of brain connectivity. It turns out that there are many ways to define, measure, and represent connectivity in the nervous system. Thus, our next question must be this: What exactly are brain networks?...Comprehensive data on brain connectivity (the ‘connectome’) is essential to constrain such a model. If appropriately configured, a detailed ‘forward model’ of the human brain would allow predictions about patterns of endogenous brain dynamics, about the responsiveness of the ‘brain system’ to various exogenous stimuli, and about pathological changes in brain dynamics following damage or disease.” –O. Sporns [27]

 

“I distinguish three types of connectivity: structural connectivity of physical coupling, functional connectivity of statistical dependencies in neural dynamics, and effective connectivity of causal influences…Perhaps the most fundamental distinctions [are] between structural [anatomical] connectivity as a "wiring diagram" of physical links, functional connectivity as a web of ‘dynamic interactions,’ and effective connectivity which encompasses the network of directed interactions between neural elements, and attempts to go beyond structural and functional connectivity by identifying patterns of causal influence among neural elements. Structural networks are constructed from measures of physical association – for example, the number of stained or reconstructed axonal fibers that link two nodes in an anatomical partition. Functional networks are usually derived from symmetrical measures of statistical dependence such as cross-correlation, coherence, or mutual information. Effective networks can be defined on the basis of estimates for pairwise causal or directed interactions, obtained from time series analysis or from coefficients of models designed to infer causal patterns.” –O. Sporns [27]
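
The first two of Sporns's three network types can be illustrated in a few lines: simulate node dynamics on a known structural (physical) network, then recover a functional network as the matrix of symmetric statistical dependencies, here plain cross-correlation. The graph, coupling strength, and dynamics below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, T = 5, 1000
A = np.array([[0, 1, 0, 0, 0],             # structural connectivity:
              [1, 0, 1, 0, 0],             # a five-node chain of
              [0, 1, 0, 1, 0],             # physical couplings
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)

x = rng.normal(size=(n_nodes, T))          # noisy node time series
for t in range(1, T):                      # simple coupled dynamics on A
    x[:, t] += 0.4 * A @ x[:, t - 1]

FC = np.corrcoef(x)                        # functional connectivity matrix
print(np.round(FC, 2))                     # dependence strongest along physical links
```

Effective connectivity, the third type, would go a step further and fit a directed causal model to the same time series rather than reporting symmetric correlations.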

 

“Virtually all modern cytoarchitectonic and receptor-labeling studies report significant intersubject and interhemispheric variability. This variability requires probabilistic mapping techniques to construct reliable anatomical reference maps…Recent studies have clearly confirmed the architectural heterogeneity of human cerebral cortex, and the impending arrival of comprehensive gene expression maps for the human brain will add an important new dimension. Comprehensive cytoarchitectonic, receptor density, and gene expression brain maps will yield multivariate data on cell densities, laminar patterning, receptor types, and protein levels. The conjunction of these different measures allows inferences about functional differentiation that are more precise than those relying on a single structural attribute.” –O. Sporns [27]

 

“Brain networks combine a strong tendency toward functional homeostasis, the maintenance of function despite persistent variations in structure, with the capacity to express variations in behavior. Functional homeostasis limits the phenotypic expression of variable neuroanatomy and is likely the result of coordinative network processes…Structurally variable but functionally equivalent networks are an example of degeneracy, defined as the capacity of systems to perform similar functions despite differences in the way they are configured and connected. Degeneracy is widespread among biological systems and can be found in molecular, cellular, and large-scale networks. Price and Friston have noted that human brain networks display degeneracy since different sets of brain regions can support a given cognitive function. Cortical activation maps obtained from functional neuroimaging studies of individuals often show only partial overlap for a given cognitive task, suggesting that different individuals utilize different (degenerate) networks. The loss of a subset of all regions that are reliably activated in a given task may not disrupt task performance, indicating that individual regions may not be necessary or that recovery processes following brain injury can configure structurally different but functionally equivalent networks. These examples of degeneracy in cognitive networks are suggestive of the idea that mechanisms promoting functional homeostasis may also operate at the scale of the whole brain to ensure that structural variations or disturbances do not lead to uncontrolled divergence of functional outcomes.” –O. Sporns [27]

 

“The fact that fine details of cellular anatomy are ‘specific’ (causally determined) rather than truly ‘random’ does not necessarily entail that a full description of the nervous system in structural terms must be framed at the level of the full-scale cellular, or even subcellular, anatomy. Homeostatic and coordinative processes within the nervous system ensure that variability at molecular or cellular scales generally does not perturb processes unfolding on larger scales. The modularity of the brain's architecture, a recurrent theme, effectively insulates functionally bound subsystems from spreading perturbations due to small fluctuations in structure or dynamics. Yet, while it is important to ensure that the loss of a single spine or the overexpression of a protein in a small number of synaptic sites does not result in alterations of global patterns of neuronal communication and connectivity, it is equally important that the neuronal architecture maintain variability and heterogeneity. Individual neurons, even those belonging to the same class, must remain different from one another to continually create dynamic variability as a substrate for adaptive change.” –O. Sporns [27]

 

“Network approaches to neuroanatomy move us closer to resolving the long-standing debate between localizationist and distributionist accounts of brain function. The key step is to view local specialization as the result of patterned distributed interactions that confer different functional attributes to individual network elements. Since these interactions can be accessed with network mapping tools, they also allow a quantitative data-driven assessment of functionality and do not require assumptions about how brain regions participate in various cognitive processes.” –O. Sporns [27]

 

“In a 1993 commentary, Francis Crick and Ted Jones pointed to the lack of a connectional map of the human cortex, comparable to that compiled for the macaque monkey by David Van Essen and Dan Felleman, and they challenged the field, arguing that such a map was essential for human neuroscience. In their words, ‘it is intolerable that we do not have this information for the human brain. Without it there is little hope of understanding how our brains work except in the crudest way.’ Indeed, a comprehensive description of the structural network of the human brain is of fundamental importance in cognitive neuroscience. Together with Giulio Tononi and Rolf Kotter, I proposed the term ‘connectome’ for such a data set. We stated as our central motivating hypothesis ‘that the pattern of elements and connections as captured in the connectome places specific constraints on brain dynamics, and thus shapes the operations and processes of human cognition.’ The human brain's three-dimensional structure, its growth and development, individual variability, and the sheer number of components that it contains present challenges that far exceed those posed by the human genome. Since our proposal was first published, others have suggested that the primary focus of connectomics should be on individual neurons and their synaptic connections. To date, cellular techniques have not yet been applied to the comprehensive mapping of neural connectivity in large brains, and significant technical challenges regarding the reliability and sensitivity of these techniques remain to be addressed. Sebastian Seung suggested that it may not be necessary to acquire connectome data by ‘dense reconstruction’ of a single brain specimen. Rather, connectomic data could be assembled via a (much simpler) ‘sparse reconstruction’ approach – for example, by identifying and recording connected pairs of neurons.” –O. Sporns [27]

 

“Complete ultrastructural mapping of neural connectivity of entire nervous systems will require the development of a comprehensive methodological framework that parallelizes serial section electron microscopy (EM) imaging, volume assembly, and data analysis to allow large-scale high-throughput collection and testing of connectivity information. Future work will likely attempt the reconstruction of a single mouse cortical column, a task that will require the accurate mapping of synaptic connectivity on a scale that exceeds that of C. elegans by more than a million-fold. A unique feature of serial EM reconstruction is that it provides exquisite detail about the three-dimensional structure of neuronal and nonneuronal cells, which is important for understanding the biophysical properties of neural processes, spines, and synapses, as well as for neuron-glia interactions and models of brain tissue that take into account the spatial relations between cells…It is too early to tell which technique will ultimately provide the most feasible and reliable approach to mapping neural circuits at the cellular or subcellular level. Perhaps a combination of serial EM, single-cell, and Brainbow labeling will be needed to acquire useful data sets in ways that are both fast and accurate…While subcellular methods have provided tantalizing glimpses of neural wiring patterns, the complete mapping of, say, the full three-dimensional architecture of the approximately 80 million projection neurons in the mouse cortex still poses significant challenges in terms of resolution, tracing accuracy, and computational reconstruction. These challenges, while formidable, may well be overcome in the foreseeable future…While comprehensive maps of the cellular connectivity of a complex brain may still be years away, there are several established and proven empirical approaches for the construction of connectome data sets at the level of mesoscopic and macroscopic projections between cell groups and brain regions.” –O. Sporns [27]

 

“An important goal for the connectome is to deliver a description, that is, a compressed representation of the invariants of neural connectivity, the structural regularities of brain networks that are characteristic for a given neuronal cell type, circuit, or brain region in a given species…for subcellular maps of brain connectivity to achieve their full potential, sophisticated neuroinformatics tools and statistical approaches to neuroanatomy are essential. Once an integration of empirical circuit mapping and computational analysis is accomplished, it will provide us with an unprecedented view of cellular networks that will inform more realistic physiological and neurocomputational models.” –O. Sporns [27]

 

“Are structural brain networks single-scale or scale-free? Given the small size of most currently available connection data sets (in many cases comprising less than 100 nodes), the question is difficult to settle and may require the arrival of more highly resolved structural data sets. Because of the cost of adding connections in the brain, it seems unlikely that structural brain networks, including those at the large scale, can exhibit scale-free degree distributions across a wide range of degrees. Since all brain nodes, regardless of how they are defined, are spatially embedded, there must be strict upper limits on the number and density of connections that can be sustained at any given node, due to basic spatial and metabolic constraints. Other networks that are spatially embedded and where similar constraints on node degree apply, such as transportation networks, have been shown to exhibit exponential or exponentially truncated scale-free degree distributions. Even if structural brain networks will not turn out to be scale-free, the degree distributions analyzed so far all exhibit deviations from a simple Gaussian or exponential profile that is characteristic of random networks. For example, brain regions that maintain a large number of connections are generally more abundant than would be expected based on the assumption of random degree distributions.” –O. Sporns [27]
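
[SL Note: The degree-distribution question can at least be probed numerically. The sketch below computes the empirical degree survival function of a graph and compares linear fits of log P(K >= k) against k (an exponential tail) and against log k (a power-law tail); the larger R^2 indicates the better description of the tail. The random test graph and fitting procedure are my own illustrative choices, not from [27].]

import numpy as np

def degree_distribution(adj):
    """Degrees and empirical survival function P(K >= k) of an undirected graph."""
    deg = adj.sum(axis=0)
    ks = np.arange(1, deg.max() + 1)
    sf = np.array([(deg >= k).mean() for k in ks])
    return ks, sf

def tail_fits(ks, sf):
    """R^2 of a linear fit to log P(K >= k) against k (exponential tail) and
    against log k (power-law tail); the larger value fits the tail better."""
    mask = sf > 0
    y = np.log(sf[mask])
    def r2(x):
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        return 1.0 - resid.var() / y.var()
    return r2(ks[mask].astype(float)), r2(np.log(ks[mask]))

# Test case: an Erdos-Renyi random graph, whose binomial degree distribution
# has a rapidly decaying tail (no hubs), unlike a scale-free network.
rng = np.random.default_rng(1)
n, p = 500, 0.02
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1); adj = adj | adj.T
ks, sf = degree_distribution(adj.astype(int))
r2_exp, r2_pow = tail_fits(ks, sf)
print(f"R^2 exponential: {r2_exp:.3f}   R^2 power law: {r2_pow:.3f}")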

 

“The pronounced tendency for synapses to connect cells within local neighborhoods results not only in high clustering but also in an overabundance, relative to random architectures, of particular classes of structural motifs. The issue has implications for arguments about the evolutionary origin of network topology in general and the functional importance of specific enriched motif classes in particular. Why are motifs of potential interest in brain networks? Motifs represent different topological patterns of structural connections that link small subsets of nodes within a ‘local’ neighborhood (defined topologically, and not necessarily implying small metric distances between nodes). In principle, different motif classes could support different modes of information processing, and their distribution within a larger network could therefore be considered of adaptive value. Modeling studies have shown that the way in which small groups of units are structurally interconnected constrains their dynamic interactions. Different structural motifs facilitate specific classes of dynamic behavior – for example, periodic or chaotic behavior – or promote dynamic stability. Another way in which structural motifs contribute to neural function derives from the idea that more densely connected motifs contain a larger number of potential subcircuits (‘functional motifs’). A greater number of potential subcircuits allows greater diversity in the topology of functional and effective interactions that are expressed in the brain at any given time. Yet another functional aspect of motifs relates to synchronization. Different motif classes exhibit different capacities for synchronization in networks with conduction delays. The high proportion of dual dyad motifs in large-scale connectivity data sets has been linked to the capacity of such motifs to promote zero phase-lag synchrony across great spatial distances and hence long conduction delays in cortex. Taken together, these studies suggest that specific classes of neural motifs contribute to specific network functionalities. These studies appear to support the argument that certain motif classes may have been selected for in evolution because they confer adaptive value to the organism. However, nonrandom motif distributions may also have arisen as a result of selection pressure on other network components or processes – for example, the need to accommodate developmental constraints or to conserve wiring and metabolic energy. In that sense, nonrandom motif distributions may be secondary features of network architectures, reflecting their rules of construction rather than their adaptive value. A recent re-examination of motifs in cellular networks cast doubt on their interpretation within an adaptive framework and traced their emergence to network construction rules such as duplication mechanisms. Viewed from this perspective, motifs may be phenotypic characteristics that are by-products of true adaptations, or ‘spandrels’ of complex network design. Because of their mutual dependence and partial redundancy, it is probably premature to attribute adaptive advantages to each and every nonrandom network attribute. The fundamentally intertwined nature of many of the network attributes discussed in this chapter (motifs, modules, hubs) makes it difficult to disentangle selective contributions made by one but not another attribute. The detection of a statistically significant network feature does not automatically imply that the feature has adaptive value. Simple random models, often employed in graph theoretical studies of networks, provide statistical validation but often make little biological sense as they fail to take into account biological rules of growth, spatial embedding, or metabolism.” –O. Sporns [27]
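
[SL Note: A minimal illustration of a motif census, in the spirit of the structural-motif counts discussed above: every connected three-node subgraph of a directed network is reduced to a canonical form under node relabeling, so isomorphic wiring patterns are pooled into one motif class. The toy adjacency matrix is my own example, not data from [27].]

import numpy as np
from itertools import combinations, permutations

def weakly_connected3(sub):
    """True if a 3-node subgraph is connected once directions are ignored."""
    und = ((sub + sub.T) > 0).astype(int)
    np.fill_diagonal(und, 0)
    reach = und + und @ und           # walks of length 1 or 2 suffice on 3 nodes
    return reach[0, 1] > 0 and reach[0, 2] > 0

def motif_census(adj):
    """Count 3-node structural motif classes in a directed binary graph.
    Each connected triad is reduced to a canonical signature under node
    relabeling, pooling isomorphic patterns into one motif class."""
    census = {}
    for triad in combinations(range(adj.shape[0]), 3):
        sub = adj[np.ix_(triad, triad)]
        if not weakly_connected3(sub):
            continue
        sig = min(tuple(sub[np.ix_(p, p)].ravel())
                  for p in permutations(range(3)))
        census[sig] = census.get(sig, 0) + 1
    return census

# Toy network: a feedforward chain plus one reciprocally connected pair.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
for sig, count in motif_census(A).items():
    print(np.array(sig).reshape(3, 3), "x", count, "\n")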

 

“An interesting question concerns the cross-species comparison of network attributes – for example, those indicating the presence of a small-world network. (Small-world architectures in the brain are implemented as networks of modules and hubs, and these architectural features have clear relevance for the functional organization of the brain.) Has the ‘small-worldness’ of the mammalian cortex increased over evolutionary time, or does it covary with brain size? Cross-species comparisons of small-world attributes are made difficult by the use of incompatible anatomical partitioning schemes and by a general lack of structural data for many species. The small-world architecture of neuronal networks, at the scale of cellular and large-scale systems, provides a structural substrate for several important aspects of the functional organization of the brain. The architecture promotes efficiency and economy, as well as diverse and complex network dynamics. Each of these functional aspects is of critical importance for the organism and its evolutionary survival, and it is important that small-world networks can promote all of them simultaneously. A brain network that is economically wired but not capable of rapid and flexible integration of information would be highly suboptimal, as would be an architecture that supports great computational power but utilizes an inordinate amount of space or energy.” –O. Sporns [27]
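
[SL Note: Small-worldness is conventionally assessed from two quantities, the clustering coefficient C and the characteristic path length L, each referenced against comparable random graphs. A minimal sketch, assuming a ring lattice with a few random shortcuts as the test network; parameters are illustrative.]

import numpy as np

def clustering_coefficient(adj):
    """Mean clustering coefficient of an undirected binary graph."""
    coeffs = []
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2     # edges among i's neighbors
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def characteristic_path_length(adj):
    """Mean shortest path length, breadth-first search from every node."""
    n = adj.shape[0]
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1); dist[s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        reached = dist > 0                 # ignore unreachable pairs
        total += dist[reached].sum(); pairs += reached.sum()
    return total / pairs

# Ring lattice with random shortcuts; a small-world verdict would compare
# C and L against their values in degree-matched random graphs.
rng = np.random.default_rng(2)
n, k = 100, 3
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in range(1, k + 1):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1
for _ in range(20):
    i, j = rng.integers(n, size=2)
    if i != j:
        adj[i, j] = adj[j, i] = 1
print("C =", round(clustering_coefficient(adj), 3),
      " L =", round(characteristic_path_length(adj), 3))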

 

“One of the most robust features of corticocortical connectivity is the prevalence of short-range connections. This pattern prevails among individual cortical neurons as well as between segregated brain regions. Anatomical studies have demonstrated an exponential decrease of connection probability with increasing spatial separation between cortical neurons…[studies] suggest that cortical circuits are organized such that conduction delays are near-minimal and synapse numbers are near-maximal… Cortical folding contributes to conserving wiring length…the effects of folding far exceed wiring optimization… The mechanics of cortical folding may introduce variations in the way cortical tissue responds to or processes information…Intuitively, if wiring volume or length were the only factor according to which neural connectivity is optimized, then the existence and, in many cases, evolutionary elaboration of long-range projections between distant cortical regions is hard to explain…optimal wiring length alone cannot account for the observed wiring patterns – instead, the topology of structural brain connectivity appears to be shaped by several different factors, including wiring as well as path length. Thus, a cortical architecture with short path length (or high efficiency) may confer a selective advantage to the organism. A drive toward maintaining short path length may partly explain the appearance and expansion of long-range fiber pathways in evolution. One such pathway, the arcuate fasciculus, is a prominent fiber tract in the human brain and links cortical regions in the temporal and lateral frontal cortex involved in language. Rilling et al. (2008) compared the anatomy of this tract in postmortem brains of several primate species, imaged with diffusion MRI. The tract is significantly smaller and differently organized in the cortex of nonhuman primates compared to the cortex of humans. Rilling et al. suggested that the elaboration and modification of the arcuate fasciculus, together with the increased differentiation of connected cortical regions, represents a structural substrate for the evolution of human language. The selective evolutionary expansion of the arcuate fasciculus is interpreted as evidence against the notion that language arose as an incidental by-product of brain-size enlargement. Viewed from the perspective of network topology, selective pressure on maintaining functional integration and efficient information flow in a larger brain may also have contributed to the evolutionary expansion of the arcuate fasciculus. This expansion led to the emergence of a new structural network that became available for functional recruitment by communication and language.” –O. Sporns [27]
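
[SL Note: The wiring-cost versus path-length trade-off is easy to illustrate with a toy spatial network: nodes embedded in a unit square, connection probability decaying exponentially with distance (as the anatomical studies above suggest), and then a handful of long-range "fasciculus-like" shortcuts. The construction and all parameters are my own illustrative assumptions.]

import numpy as np

def shortest_paths(adj):
    """All-pairs shortest path lengths via Floyd-Warshall on a binary graph."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def mean_path_length(adj):
    sp = shortest_paths(adj)
    finite = np.isfinite(sp) & (sp > 0)     # ignore disconnected pairs
    return sp[finite].mean()

rng = np.random.default_rng(3)
n, lam = 120, 0.1
pos = rng.random((n, 2))                    # nodes embedded in the unit square
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = (rng.random((n, n)) < np.exp(-dist / lam)).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T    # undirected, no self-loops

print(f"local wiring only: cost={(dist * adj).sum() / 2:.1f}, "
      f"L={mean_path_length(adj):.2f}")

# Add a few long-range shortcuts between the most distant unconnected pairs:
# path length should drop sharply for a modest increase in wiring cost.
for _ in range(10):
    i, j = np.unravel_index(np.argmax(dist * (1 - adj)), (n, n))
    adj[i, j] = adj[j, i] = 1
print(f"with 10 shortcuts: cost={(dist * adj).sum() / 2:.1f}, "
      f"L={mean_path_length(adj):.2f}")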

 

“Small-world topology is closely associated with high global and local efficiency, often achieved with sparse connectivity at low connection cost. Neuronal synchrony is thought to play an important role in information flow and system-wide coordinative processes. The two main cellular components of mammalian cortex, excitatory principal cells and inhibitory interneurons, jointly account for much of the computational capacity of the network and its ability to form synchronized assemblies…this computational capacity is enhanced by the great morphological and physiological diversity of cortical interneurons. This diversity of network elements counteracts opposing demands on the size and connection density of the network, thus achieving a compromise between computational needs and wiring economy. Computational models show that long-range connections are crucial for producing network-wide synchronization, but their addition to the network increases the wiring cost. An efficiency function that trades off increases in synchronization with increases in wiring defines an optimal range within which global synchrony can be achieved with the addition of a modest number of long-range connections. Within this optimal range, the network exhibits a small-world architecture characterized by high clustering and short path length.” –O. Sporns [27]
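
[SL Note: A Kuramoto-oscillator sketch of the synchronization/wiring trade-off described above. Oscillators are coupled through the adjacency matrix; the order parameter r measures global synchrony, and long-range shortcuts are added in batches so their effect can be read off directly. The model and parameters are my own illustration, not Sporns's.]

import numpy as np

def kuramoto_order(adj, steps=2000, dt=0.01, K=4.0, seed=0):
    """Simulate Kuramoto phase oscillators coupled through `adj` and return
    the final order parameter r in [0, 1]; r near 1 means global synchrony."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    omega = rng.normal(0.0, 0.5, n)          # heterogeneous natural frequencies
    deg = np.maximum(adj.sum(1), 1)
    for _ in range(steps):
        coupling = (adj * np.sin(theta[None, :] - theta[:, None])).sum(1) / deg
        theta = theta + dt * (omega + K * coupling)
    return float(np.abs(np.exp(1j * theta).mean()))

# Ring lattice (local wiring only), then add long-range shortcuts in batches.
n, k = 100, 3
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in range(1, k + 1):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1

rng = np.random.default_rng(4)
print("ring only: r =", round(kuramoto_order(adj), 3))
for batch in range(1, 4):
    for _ in range(30):                      # each shortcut adds wiring cost
        i, j = rng.integers(n, size=2)
        if i != j:
            adj[i, j] = adj[j, i] = 1
    print(f"+{30 * batch} shortcuts: r = {kuramoto_order(adj):.3f}")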

 

“Robustness and evolvability are supported by the modular organization of biological systems, found everywhere from gene and protein networks to complex processes of embryonic development. Modularity promotes robustness by isolating the effects of local mutations or perturbations and thus allowing modules to evolve somewhat independently. Networks of dependencies between system elements reduce the dimensionality of the global phenotypic space and effectively uncouple clusters of highly interacting elements from each other. Modularity itself should therefore offer an evolutionary advantage and thus affect evolvability. The mechanisms by which the modularity of biological systems may have arisen are a matter of much debate. Modularity may have evolved along two routes, by integration of smaller elements into larger clusters or by parcellation of larger systems into segregated smaller ones. The dissociability (or ‘near decomposability’) of biological systems extends to the brain's small-world architecture. Whether the modular organization of the brain has supported its evolvability is unknown and would depend in part on whether phenotypic characteristics of individual modules, or regions within modules, are shown to be under the control of locally expressed genes.” –O. Sporns [27]
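
[SL Note: Modularity is usually quantified with Newman's Q, which compares within-module edge density against a degree-matched random expectation. A minimal sketch on a two-module toy graph; the graph and partition are my own example.]

import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a partition of an undirected binary graph:
    Q = (1/2m) * sum_ij [A_ij - k_i * k_j / 2m] * delta(c_i, c_j).
    High Q means dense within-module and sparse between-module wiring."""
    k = adj.sum(1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

# Two fully connected five-node modules joined by a single bridge edge.
block = np.ones((5, 5), dtype=int) - np.eye(5, dtype=int)
adj = np.zeros((10, 10), dtype=int)
adj[:5, :5] = adj[5:, 5:] = block
adj[4, 5] = adj[5, 4] = 1
labels = np.array([0] * 5 + [1] * 5)
print("Q =", round(modularity(adj, labels), 3))   # well above zero for this split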

 

“In addition to allowing efficient processing and conferring a degree of robustness in evolution, brain network modularity has a deep impact on the relation of network structure to network dynamics, a topic we will more thoroughly explore in coming chapters. Among these dynamic effects of modularity is a tendency toward increased dynamic stability.” –O. Sporns [27]

 

“Available data on scaling relations between neuron number and density, brain size, and relative proportions of gray and white matter support the notion that brains maintain absolute connectivity as their sizes change. As mammalian brain size increases over four orders of magnitude from mouse to elephant, neuronal density decreases by a factor of 10, which indicates that the increase in brain size is associated with an increase in the total number of neurons. Neocortical gray matter and white matter exhibit an allometric relationship but do not scale with an exponent close to 2 as would be expected if proportional connectivity were maintained. Instead, white matter only increases with an exponent of ≈4/3, much closer to the expected value for absolute connectivity. Zhang and Sejnowski (2000) have argued that this empirical power law can be explained as a necessary consequence of the basic uniformity of the neocortex and the need to conserve wiring volume… Evolutionary changes in the absolute size of the brain, including the neocortex, thus result in progressively less dense connectivity and increased modularity. Stated differently, sparse connectivity and modularity are inevitable outcomes of increases in brain size. Brain architecture cannot sustain boundless increases in size, as long conduction delays soon begin to offset any computational gains achieved by greater numbers of neurons. Larger brains are also naturally driven toward greater functional specialization as it becomes necessary to limit most connectivity to local communities while ensuring their global functional integration.” –O. Sporns [27]
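
[SL Note: The allometric exponent is estimated as the slope of a log-log regression of white-matter against gray-matter volume. The data below are synthetic, generated under the ~4/3 power law purely to show the procedure; real comparative volumes would replace them.]

import numpy as np

# Synthetic illustration only: gray-matter volumes spanning several orders of
# magnitude, with white matter generated under the empirical ~4/3 power law
# plus lognormal noise.
rng = np.random.default_rng(5)
gray = np.logspace(2, 6, 25)                       # arbitrary volume units
white = gray ** (4.0 / 3.0) * np.exp(rng.normal(0.0, 0.1, gray.size))

# The scaling exponent is the slope of the log-log regression.
slope, _ = np.polyfit(np.log(gray), np.log(white), 1)
print(f"estimated exponent: {slope:.2f}  (~1.33 expected for absolute "
      "connectivity; ~2 would indicate proportional connectivity)")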

 

“Brain structure, including the topology of brain networks, is part of an organism's phenotype…wiring patterns are partly controlled by physical forces such as axonal tension that leads to the prominent folding pattern of the cerebral cortex. This raises the question of whether some of what we see in the wiring patterns of structural brain networks is the result of physical forces rather than the outcome of natural selection. The realization that not every observable phenotypic trait is the result of adaptation has led to sharp disagreements among evolutionary theorists. This ongoing controversy suggests that any characterization of complex brain networks as ‘optimally adapted’ or ‘maximally efficient’ should be viewed with an abundance of caution. Optimal design is incompatible with the fact that evolutionary mechanisms cannot anticipate functional outcomes before they are realized as part of a living form and then become subject to variation and selection. It is therefore problematic to argue that observations about the structural economy or functional efficiency of extant brain networks are the outcome of a process of optimal design. This mode of explaining brain network topology in terms of a final cause (efficiency, optimality) is reminiscent of teleology, an idea that has had a difficult time in the history of biology.” –O. Sporns [27]

 

“Currently existing animal forms occupy only part of a large phenotypic space of possible forms, most of which have not and will not be realized. Extending this argument, currently existing nervous systems only occupy a small subspace within the much larger space of all possible, physically realizable, phenotypic arrangements of cells and connections. Given the vast number of combinatorial possibilities, it seems likely that there are regions of phenotypic space with brain connectivity that is more economical and more efficient than the connectivity of all extant species, including humans. These regions may have been missed by historical accident, or they may be unreachable because these brains cannot be built with the available toolkit of developmental biology – we just cannot ‘get there from here.’ Developmental processes are crucial for determining which regions of phenotypic space can be accessed given the genetic makeup of an organism and its lineage.” –O. Sporns [27]

 

“Much of the interest in theoretical neuroscience has focused on stimulus-driven or task-related computation, and considerably less attention has been given to the brain as a dynamic, spontaneously active, and recurrently connected system.” –O. Sporns [27]

 

“Even cursory examination of structural brain connectivity reveals that the basic plan is incompatible with a model based on predominantly feedforward processing within a uniquely specified serial hierarchy. Whether considering individual neurons or entire brain regions, one finds that the vast majority of the structural connections that are made and received among network elements cannot be definitively associated with either input or output. Rather, they connect nodes in complex and often recurrent patterns (Lorente de No's ‘synaptic chains’). Even in regions of the brain such as primary visual cortex that are classified as ‘sensory,’ most synapses received by pyramidal neurons arrive from other cortical neurons and only a small percentage (5 percent to 20 percent) can be attributed to sensory input. Cortical areas that are farther removed from direct sensory input are coupled to one another via numerous mono- and polysynaptic reciprocal pathways. This prevalence of recurrent anatomical connections suggests that models which focus exclusively on feedforward processing in a silent brain are likely to capture only one aspect of the anatomical and physiological reality. Recurrent or reentrant processes make an important contribution to the shaping of brain responses and to the creation of coordinated global states. This coordination is essential for the efficient integration of multiple sources of information and the generation of coherent behavioral responses. In addition to recurrent processing induced by external perturbations, anatomical recurrence also facilitates the emergence of endogenous, spontaneous dynamics. These dynamics are more accurately captured as series of transitions between marginally stable attractors, as sequences of dynamic transients rather than stable states.” –O. Sporns [27]

 

“The observation and modeling of endogenous or spontaneous brain activity provide a unique window on patterns of self-organized brain dynamics – an intrinsic mode of neural processing that may have a central role in cognition.” –O. Sporns [27]

 

“In nearly all instances where it has been empirically observed, spontaneous neuronal firing exhibits characteristic spatiotemporal structure. Spontaneous neural activity therefore is not stochastic ‘noise’ but rather is organized into precise patterns. For example, numerous studies have shown that populations of cortical neurons coordinate their spontaneous activity, presumably via their anatomical interconnections, and exhibit characteristic correlation patterns. Neurons in mouse visual cortex are found to be spontaneously active and show synchronization as well as repeating patterns of sequential activation within distinct cellular networks. Pharmacological blocking of excitatory neurotransmission abolishes network synchronization, while some neurons maintain their ability to engage in spontaneous firing. This suggests that spontaneous cortical activity is shaped by two components, the intrinsic electrical properties of ‘autonomous’ neurons and the spreading and synchronization of neural activity via excitatory connections. The important role of recurrent connectivity in shaping spontaneous as well as evoked cortical responses has since been confirmed in additional studies. For example, it has been found that thalamic input triggered patterns of cortical response that were strikingly similar to those seen during spontaneous cortical activity, suggesting that the role of sensory input is to ‘awaken’ cortex rather than impose specific firing patterns. This observation has far-reaching implications for models of cortical information processing.” –O. Sporns [27]
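
[SL Note: A rate-model caricature of the two components described above: private noise as the "autonomous" drive of each unit, plus recurrent excitation over sparse random connections. "Blocking" the coupling should leave units active but largely decorrelated. Everything here (weights, gains, noise level) is an illustrative assumption, not a model from [27].]

import numpy as np

def mean_pairwise_corr(coupling=True, n=50, steps=4000, seed=6):
    """Toy rate model of spontaneous cortical activity. Each unit receives
    private noise plus, when `coupling` is True, recurrent excitation over
    sparse random connections. Returns the mean pairwise correlation."""
    rng = np.random.default_rng(seed)
    W = (rng.random((n, n)) < 0.2).astype(float)
    np.fill_diagonal(W, 0.0)
    W *= 0.4 / np.abs(np.linalg.eigvals(W)).max()   # keep the dynamics stable
    x = np.zeros(n)
    trace = np.empty((steps, n))
    for t in range(steps):
        drive = W @ np.tanh(x) if coupling else 0.0
        x = 0.5 * x + drive + 0.3 * rng.standard_normal(n)  # leaky integration
        trace[t] = x
    c = np.corrcoef(trace.T)
    return c[np.triu_indices(n, 1)].mean()

print("recurrent excitation intact :", round(mean_pairwise_corr(True), 3))
print("recurrent excitation blocked:", round(mean_pairwise_corr(False), 3))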

 

“What all [the] observations have in common is that they reveal cortex as spontaneously active, with ongoing fluctuations that exhibit characteristic spatiotemporal patterns shaped by recurrent structural connectivity. The complex dynamics and rich patterning of spontaneous network activity at the cellular scale is a remarkable example of how anatomy and cellular physiology can combine to generate a set of dynamic network states in the absence of external input or stimulus-evoked cognitive processing. Sensory inputs ‘awaken’ or modulate intrinsic cortical dynamics rather than instruct central brain activity or transfer specific information that is then processed in a feedforward manner. Many open questions remain. The effect of extrinsic inputs on intrinsic network states is still incompletely understood, and several current studies suggest a nonlinear interaction, in particular in relation to UP or DOWN states, rather than linear superposition. So far, most of the dynamic structure of ongoing neural activity has been demonstrated within local patches of cortex – how much additional structure exists between cells separated by greater distances or located in different cortical regions is still unknown. The anatomical and physiological factors that govern the slow temporal dynamics of coordinated transitions between UP and DOWN states in cortical neurons require further study. The topology of cellular cortical networks remains largely uncharted as network analysis techniques have yet to be widely applied in this experimental domain. How UP/DOWN states relate to fluctuations of neural activity measures in EEG/MEG or fMRI is yet to be determined. Finally, the possible relationship of spontaneous cortical activity with sequences of cognitive or mental states of the organism urgently awaits further empirical investigation.” –O. Sporns [27]

 

“If structural and functional connectivity are indeed related, we might expect to see correspondences between their network topology and architecture. Modularity and hubs are consistently found within the large-scale organization of mammalian cortical anatomy. Does the topology of functional networks derived from observed brain dynamics mirror the topology of the underlying anatomy? Over the past decade, numerous studies of functional brain connectivity have indeed demonstrated that functional interactions within large-scale structural networks exhibit characteristic patterns that resemble those seen in the anatomy.” –O. Sporns [27]

 

“Does endogenous network activity have a functional role in the brain? Do these dynamic patterns contribute to cognitive and behavioral responses, or are they nothing but "physiological noise" without function? Despite the long history of spontaneous neural activity in electrophysiology, tracing back to the 1920s, the cognitive role of such activity remains very much a matter of debate and controversy. The functional meaning of the brain's default mode has been questioned. Some authors have pointed to nonneuronal components in resting brain fluctuations. Others have criticized the significance of endogenous brain activity, a point that stems from the idea still prevalent within cognitive science that most of human cognition is about computing purposeful responses to specific processing demands posed by the environment. At the time of this writing, the neuronal origin of default mode or resting brain activity appears firmly established, and the reappraisal of the role of intrinsic brain activity in perception and cognition has ushered in a paradigm shift in brain imaging.” –O. Sporns [27]

 

“William James' skepticism regarding the relation of cognition to the anatomy of the human brain may strike many of us as old-fashioned. After all, modern neuroscience continues to yield a plethora of empirical data that reveal the neural basis of cognition in ever greater detail, and the "physiology of the future" must surely have arrived by now. And yet, the relationship between brain and cognition is still only poorly understood. Great progress notwithstanding, neuroscience still cannot answer the "big questions" about mind and intelligence. Consequently, most cognitive scientists continue to hold the position that intelligence is fundamentally the work of symbolic processing, carried out in rule-based computational architectures whose function can be formally described in ways that are entirely independent of their physical realization. If cognition is largely symbolic in nature, then its neural substrate is little more than an inconsequential detail, revealing nothing that is of essence about the mind. Naturally, there is much controversy on the subject. The idea that mental life can be explained as a set of computational processes has undeniable power and appeal. Yet, the nature of these processes must in some way depend on the biological substrate of brain and body and on their development and natural history. There have been many false starts in the attempt to link brain and cognition. One such failure is neuroreductionism, a view that fully substitutes all mental phenomena by neural mechanisms, summarized in the catchphrase "You are nothing but a pack of neurons," or, put more eloquently, "‘You’, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules" (Crick, 1994). The problematic nature of this statement lies not in the materialist stance that rightfully puts mental states on a physical basis but rather in the phrase "no more than," which implies that the elementary properties of cells and molecules can explain all there is to know about mind and cognition. Reductionism can be spectacularly successful when it traces complex phenomena to their root cause, and yet it consistently falls short as a theoretical framework for the operation of complex systems because it cannot explain their emergent and collective properties. [SL Note: I don’t agree with this view. Reductionism as Edelman and Sporns have defined it is actually the addition of complication, but in such a way as to move farther from representing the complex dynamics of biological systems (e.g., quantum mechanical or other as yet unknown physical approaches advocated by Penrose, Kurzweil, et al., or the intractable models proposed by connectionists). The advantage in looking for all that emerges from the collective, dynamic classical behavior of biological systems is that it may lead to more tractable solutions than those proposed by so-called reductionists, and include such properties as group selection in constraining learning and reasoning and degeneracy in providing inherent fault tolerance. Why is this view ignored, and why is it not the real ‘reductionist’ approach? If my first essay (HERE) espouses any one viewpoint, it is this one.]” –O. Sporns [27]

 

“Marcel Mesulam proposed that the physical substrate of cognition is a set of distinct large-scale neurocognitive networks that support different domains of cognitive and behavioral function. He conceptualized brain-behavior relationships as both localized and distributed, mapping complex cognition and behavior to a "multifocal neural system" rather than a circumscribed set of specialized anatomical regions. He noted the absence of simple one-to-one correspondences between anatomical regions and cognitive functions and instead argued that specific domains of cognition or behavior are associated with networks of regions, each of which individually may support a broad range of functions. Mesulam envisioned sensory processing to unfold along a "core synaptic hierarchy" consisting of primary sensory, upstream unimodal, downstream unimodal, heteromodal, paralimbic, and limbic zones of the cerebral cortex. The last three subdivisions together constitute transmodal areas that bind signals across all levels and form integrated and distributed representations. Crosscutting this hierarchical scheme, Mesulam distinguished five large-scale neurocognitive networks, each concerned with functions in a specific cognitive domain: spatial awareness, language, explicit memory/emotion, face/object recognition, and working memory/executive function. These networks do not operate in isolation; instead they engage in complex interactions partly coordinated by transmodal areas.

Steven Bressler put the notion of distinct neurocognitive networks in a more dynamic context when he defined a complex function of the brain as "a system of interrelated processes directed toward the performance of a particular task, that is implemented neurally by a complementary system, or network, of functionally related cortical areas". According to this view, the structural networks of the cerebral cortex, or the entire brain, serve as a substrate for the system-wide dynamic coordination of distributed neural resources. An implication of this definition is that different complex functions are accomplished by transient assemblies of network elements in varying conditions of input or task set. In other words, different processing demands and task domains are associated with the dynamic reconfiguration of functional or effective brain networks. The same set of network elements can participate in multiple cognitive functions by rapid reconfigurations of network links or functional connections.

The multifunctional nature of the brain's network nodes leads to the idea that functions do not reside in individual brain regions but are accomplished by network interactions that rapidly reconfigure, resulting in dynamic changes of neural context (cf. McIntosh). Regional activation is an insufficient indicator of the involvement of a given brain area in a task, since the same pattern of regional activations can be brought about by multiple distinct patterns of dynamic relationships. Randy McIntosh suggested that the functional contribution of a brain region is more clearly defined by the neural context within which it is embedded. This neural context is reconfigured as stimulus and task conditions vary, and it is ultimately constrained by the underlying structural network. Comparison of regional activation patterns in a variety of cognitive paradigms strongly suggests that a given brain region can take on more than one functional role depending on the pattern of interactions with other regions in the brain. McIntosh hypothesized that a special class of network nodes is instrumental in fast and dynamic reconfigurations of large-scale networks – for example, during task switching. These so-called "catalysts" facilitate the transition between large-scale functional patterns associated with cognitive processing. Catalysts may be identifiable on the basis of their embedding in structural or functional networks.

Network theories of cognition place an emphasis on cooperative processes that are shaped by anatomical connectivity. The mapping between neurons and cognition relies less on what individual nodes can do and more on the topology of their connectivity. Rather than explain cognition through intrinsic computational capacities of localized regions or serial processing within precisely specified or learned connections, network approaches to cognition aim for defining relationships between mental states and dynamic neural patterns of spontaneous activity or evoked responses. One of the most important features of these large-scale system dynamics is the coexistence of opposing tendencies toward functional segregation and integration.” –O. Sporns [27]

 

“Segregation and integration are two major organizational principles of the cerebral cortex and are invoked in almost all cognitive domains. This dichotomy results from the need to reconcile the existence of discrete anatomical units and regional specialization with the phenomenological unity of mental states and behavior. For example, the construction of a perceptually coherent visual image requires both segregation and integration. It requires the activation of cells with specialized receptive field properties, as well as the "unification" of multiple such signals distributed around the brain. This unification or "binding together" of object attributes has to be carried out quickly and reliably and on a virtually infinite set of objects that form part of a cluttered and dynamic visual scene. This so-called "binding problem" represents just one example of the general need to rapidly and efficiently integrate specialized and distributed information.

Evidence for anatomical and functional segregation comes from multiple levels in the brain, ranging from specialized neurons to neuronal populations and cortical areas. For example, maps of cortical regions have provided increasingly refined network diagrams of multiple anatomically and functionally distinct areas of the primate visual cortex. These specialized and segregated brain regions contain neurons that selectively respond to specific input features (such as orientation, spatial frequency, or color) or conjunctions of features (such as objects or faces). Segregation can be defined in a purely statistical context as the tendency of different neurons to capture different regularities present in their inputs. The concepts of functional localization and segregation are therefore somewhat distinct from one another. Segregation implies that neural responses are statistically distinct from one another and thus represent specialized information, but it does not imply that segregated neural populations or brain regions become functionally encapsulated or autonomously carry out distinct mental faculties. Furthermore, segregation is a multiscale phenomenon, found not only among cortical areas but also among local populations of neurons or single cells. Structural connectivity supports functional segregation. For example, some intraregional anatomical connections are arranged in patches or clusters that link populations with similar responses, thus preserving segregation.

Most complex cognitive processes require the functional integration of widely distributed resources for coherent behavioral responses and mental states. There are at least two ways by which neuronal architectures can achieve functional integration in the brain, convergence and phase synchrony. Integration by convergence creates more specialized neurons or brain regions by conjunction of inputs from other less specialized neurons. Convergence can thus generate neurons whose activity encodes high-level attributes of their respective input space, increasing the functional segregation and specialization of the architecture. There is abundant evidence that the convergence of neural connectivity within hierarchically arranged regions can yield increasingly specialized neural responses, including neurons that show selective modulations of firing rate to highly complex sensory stimuli. It should be noted that these localized responses depend on widely distributed network processes, including feedforward and feedback influences.

Network interactions endow even simple "feature detectors," for example, cells in primary visual cortex, with extremely rich response properties that are particularly evident when these responses are recorded during natural vision. These complex response properties reflect contextual influences from outside of the cell's classical receptive field that subtly modulate its neural activity. Thus, network interactions contribute to complex and localized neuronal response properties encountered throughout the brain.

Integration by convergence is also found within large-scale neurocognitive networks. Mesulam suggested that a special set of "transmodal nodes" plays a crucial role in functional integration. These regions bind together multiple signals from unimodal areas and create multimodal representations. Graphically, they serve as articulation points between networks supporting different cognitive domains. A somewhat different idea was proposed by Antonio Damasio, starting from the premise that the integration of multiple aspects of external and internal reality depends on the phase-locked coactivation of neural patterns in distinct and spatially remote areas of cortex. This integration is supported by "convergence zones" that can trigger and synchronize distributed neural patterns through feedback projections but are not themselves the locus of integration or encoders of integrated mental content. Convergence zones are thought to occur throughout the forebrain, and their distinguishing feature is their mode of connectivity that supports binding and integration. In this sense, convergence zones are reminiscent of hubs placed throughout the neurocognitive skeleton, whose coordinating activity ensures distributed functional integration but that do not represent the endpoint of integration in a serial processing architecture. Damasio's model effectively combines aspects of convergence and distributed interactions, and it is supported by a broad range of physiological studies.

A rich set of models suggests that functional integration can be achieved even without convergence, through dynamic interactions, for example, resulting in phase locking or synchronization between distant cell populations. This mechanism depends on reciprocal structural connections linking neurons across segregated brain regions. This alternative model has been most fully explored in the context of the binding problem in vision. The visual binding problem arises because the different attributes of visual objects are analyzed in a large number of segregated brain regions and yet must be perceptually integrated.

The integrative role of phase synchrony in perception and cognition has been explored in a large number of computational models. Temporal correlations between distributed neural signals (functional or effective connectivity) can express relations that are essential for neural encoding of objects, figure-ground segregation, and perceptual grouping. Anatomically based computational models demonstrated that fast synchronization and cooperative interactions within and among segregated areas of the visual cortex can effectively solve the binding problem and enable coherent behavioral responses (Tononi et al., 1992). While the role of phase synchrony in visual perception continues to be a subject of much debate, network oscillations are now considered to be a common and prominent feature of neuronal activity with putative functional roles that range from representing relational information to regulating patterns of information flow and supporting information retrieval.

In summary, the coexistence of segregation and integration is indispensable for the proper functioning of large-scale neurocognitive networks. All coherent perceptual and cognitive states require the functional integration of very large numbers of neurons within the distributed system of the cerebral cortex. It is likely that both mechanisms for integration, convergence and synchrony, make important contributions. The capacity of the network to sustain high levels of both segregation and integration is crucial for its efficiency in cognition and behavior, and in an information-theoretic context it forms the origin of brain complexity.” –O. Sporns [27]

 

“An important concept in the architecture of neurocognitive networks is that of a processing hierarchy, an arrangement of neural units and brain regions where information flows from lower (sensory) to higher (multimodal and associative) levels and is gradually elaborated from simple to more complex responses. Many cognitive accounts of brain function are built on the notion that sensory information is sequentially processed on several different levels, mostly in a feedforward manner. According to these accounts, sensory inputs trigger sequences of discrete representations, constructed from neurons with increasingly complex response properties. Physiological recordings of individual neurons in the central visual system initially supported the idea that vision was carried out in a mostly serial hierarchy. However, the prevalence of reciprocal anatomical connections throughout the cerebral cortex soon cast doubt on the strictly serial nature of hierarchical processing and triggered efforts to extract stages of the cortical hierarchy from data on interregional anatomical connectivity (Felleman and Van Essen, 1991). Based mostly on laminar termination patterns of axonal pathways, Felleman and Van Essen proposed a hierarchical scheme for the macaque visual cortex consisting of around ten separate levels linked by feedforward, lateral, and feedback connections. The scheme reconciled information on hundreds of interregional pathways and included connections that spanned single or multiple levels in either direction. Similar hierarchical schemes could be constructed for other sensory systems in the macaque monkey and the cat. Complementing the hierarchical arrangement of areas, Van Essen and colleagues described segregated streams that were arranged in parallel and relayed different types of visual information, most notably the dorsal and ventral streams of visual cortex.

Building on the data set assembled by Felleman and Van Essen, others searched for an optimal hierarchical arrangement that contained minimal violations of the set of anatomical constraints imposed by laminar termination patterns. A large number of hierarchical orderings were found that contained an equally minimal number of constraint violations, suggesting that a unique optimal solution for the visual hierarchy did not exist. Consistent hierarchical ordering emerged mostly at lower levels of the architecture, with primary and secondary visual cortex always placed at the very bottom, while arrangements of higher visual areas exhibited much greater ambiguity. Recently, a more refined automated optimization approach which used a continuous metric for the assignment of hierarchical levels resolved some of the remaining inconsistencies and confirmed many of the features of the visual hierarchy as originally proposed. Thus, anatomical data support the idea of a hierarchical ordering of visual regions, not in the form of a strict serial sequence but with some overlap in the definition of hierarchical levels.
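
[SL Note: The "equally minimal constraint violations" result is easy to reproduce in miniature. The sketch below scores candidate hierarchical orderings against a set of pairwise "a sits below b" constraints and lists the tied optima. The area names and constraints are hypothetical placeholders, not the actual Felleman and Van Essen data.]

from itertools import permutations

# Hypothetical laminar-based constraints: each pair (a, b) asserts that area a
# should sit below area b in the hierarchy (a feedforward projection).
areas = ["V1", "V2", "V4", "MT", "TEO"]
constraints = [("V1", "V2"), ("V1", "MT"), ("V2", "V4"),
               ("V2", "MT"), ("V4", "TEO"), ("MT", "TEO")]

def violations(order):
    """Number of constraints broken by a candidate hierarchical ordering."""
    rank = {a: i for i, a in enumerate(order)}
    return sum(rank[lo] >= rank[hi] for lo, hi in constraints)

best = min(violations(p) for p in permutations(areas))
optima = [p for p in permutations(areas) if violations(p) == best]
print(f"{len(optima)} ordering(s) tie at {best} violation(s):")
for p in optima:
    print("  " + " < ".join(p))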

These anatomical studies do not take into account physiological effects or functional interactions. In fact, the relation of the anatomical hierarchy to visual function is far from simple. Some physiological properties of visual areas accurately reflect their position in the anatomical hierarchy, such as receptive field sizes, complexity of response tuning, or onset latency of response. However, when one is probing visual responses in different areas with a uniform set of tests – for example, for shape selectivity – areas placed at distinct levels display overlapping tuning characteristics that violate the serial nature of the hierarchy. The notion of serial hierarchies and fully segregated functional streams is further undermined by mounting empirical evidence for cross- and multisensory processing even in "lower" and thus presumably unisensory cortical regions (Ghazanfar and Schroeder, 2006). For example, neurons in what is generally considered unimodal visual cortex often have both visual and auditory receptive fields. Standard models of the cortical hierarchy predict that such multisensory response properties appear only at later stages of processing, as a result of multimodal convergence. However, multisensory influences are pervasive at all levels and form an integral part of both simple and complex sensory processing. Recurrent feedback from "higher" to "lower" visual areas, thalamocortical interactions, and multisensory integration during natural vision all contribute to a loosening of the strictly serial hierarchical order.

Feedforward and feedback connections have different physiological and computational roles. Forward connections drive neural activity at short latencies, while feedback connections mediate a broad range of modulatory synaptic effects. The distinct dynamic effects of feedback can be quantified with modeling and time series analysis tools applied to electrophysiological or neuroimaging data. Dynamic causal modeling shows that long latency stimulus-evoked potentials are due to recurrent dynamics mediated by feedback connections. Granger causality analysis of fMRI data sets reveals top-down control signals sent from frontal and parietal cortex to visual cortex during an attention-demanding spatial task. The specific contributions of feedforward and feedback connections in stimulus- and task-evoked neural dynamics can be further assessed with models that extract effective connectivity.
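
[SL Note: For concreteness, a minimal sketch of pairwise Granger causality, the time-series tool mentioned above: the past of y "Granger-causes" x if it reduces the residual variance of predicting x beyond what x's own past achieves. The toy system, AR coefficients, and model order are illustrative assumptions of mine.]

import numpy as np

def var_residual(target, predictors):
    """Residual variance after least-squares regression of target on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

def granger(x, y, lag=2):
    """Log ratio of residual variances: does the past of y improve prediction
    of x beyond x's own past? Values clearly above zero suggest a directed
    y -> x influence in the Granger sense."""
    T = len(x)
    past = lambda z: np.column_stack([z[lag - 1 - k: T - 1 - k] for k in range(lag)])
    target = x[lag:]
    return np.log(var_residual(target, past(x)) /
                  var_residual(target, np.hstack([past(x), past(y)])))

# Toy system in which y drives x with one step of delay, but not vice versa.
rng = np.random.default_rng(7)
T = 5000
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()
print(f"y -> x: {granger(x, y):.3f}  (expected clearly positive)")
print(f"x -> y: {granger(y, x):.3f}  (expected near zero)")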

In the visual system, several authors have suggested that top-down (feedback) connections may provide predictions about bottom-up sensory inputs and thus support visual recognition and categorization. The role of expectation in the visual process has also been explored. Hierarchical models are central to a theoretical framework proposed by Karl Friston (Friston, 2005b; 2010). A major tenet of the theory is that the main computational problem for the sensory brain is the inference of the causes that underlie its inputs. A central role for inference in cortical processing makes predictions about the arrangement of cortical connectivity. An architecture supporting the generation of dynamic predictions and causal inference should consist of hierarchical levels that are reciprocally interconnected, with both driving and modulatory connections. Prediction and inference must occur on multiple time scales since most natural environments exhibit rich temporal structure. Kiebel et al. (2008) have proposed that the brain represents causal changes unfolding at different time scales within different levels of the cortical hierarchy, with fast environmental processes primarily involving lower levels. Empirical studies have indeed provided evidence for a cortical hierarchy of temporal receptive windows in the human brain.

These models and observations all suggest that recurrent processing plays an important role in hierarchical accounts of the brain and that it is compatible with hierarchical ordering of the anatomical organization. The prominence of recurrent connectivity also implies that each hierarchical level may have only limited functional autonomy and that feedforward and feedback projections are always concurrently engaged. Gerald Edelman proposed that the actions of feedforward and feedback connections should be considered as part of a unified dynamic process, called reentry, that recursively links neural populations within the thalamocortical system. Models have demonstrated that reentry can support a broad range of functions, from conflict resolution and the construction of new response properties to intra- and interregional synchronization, feature binding, and perceptual grouping. Reentrant dynamics select and unify distributed resources while at the same time relaying contextual influences that modulate local responses as appropriate under a given set of environmental conditions. A reentrant system operates less as a hierarchy and more as a heterarchy, where super- and subordinate levels are indistinct, most interactions are circular, and control is decentralized. [SL Note: A system displaying this kind of reentrant structure is not completely decentralized, especially if coupled to a value-control neuromodulatory system, as is likely the case.]” –O. Sporns [27]

 

“The sheer volume and complexity of brain imaging data demand the use of computational and modeling approaches to test hypotheses about the nature and neural origin of observed patterns of brain dynamics. At least two different types of approaches to modeling large-scale human brain data sets can be distinguished. One approach attempts to create large computational models that allow the user to explain and predict empirical neural response patterns. Another approach is to use modeling techniques to infer causes of observed neural responses and thus test specific hypotheses about their neural origin. Both approaches have made significant contributions to our understanding of neurocognitive networks.

The construction of large-scale models constrained by anatomy and physiology mainly aims at capturing empirically observed neural activations and time series. This "large-scale neural modeling" allows the simulation of neural responses at multiple time scales and across multiple levels of spatial resolution. These types of models typically involve the simulation of several interconnected brain regions, and their elementary neural units may be spiking neurons or larger neural populations (neural fields or masses). Models can be stimulated in ways that replicate experimental conditions or tasks, or their spontaneous activity can be sampled and analyzed. Their neural time series can be fully recorded and analyzed, and the modeling environment allows for manipulations such as lesions, anatomical rewiring, or changes to local biophysical properties of neurons and connections that would be difficult if not impossible to carry out empirically. Large-scale neural models can even be interfaced with robotic hardware to simulate the interactions between neural states, environment, and behavior. Computational models of neurocognitive networks differ in their implementation and design, but they have a common goal of revealing neural processes in complex brain networks responding to changing environmental and task demands. Each modeling approach faces significant challenges. For example, even the most comprehensive "synthetic" large-scale neural models inevitably contain only a fraction of the details present in a complete nervous system or organism. Their design thus requires careful selection of relevant anatomical and physiological parameters. In fact, models that replicate the structure and dynamics of every neuron and synapse in a complex nervous system, if feasible at all, may well turn out to be as incomprehensible and unmanageable as the real brain. Modeling necessarily involves a reduction of the complexity of the real system to reveal principles of organization. Important constraints for such reduced models will likely be provided by data-driven models of causal neural dynamics.” –O. Sporns [27]
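
[SL Note: A toy instance of the "large-scale neural modeling" approach described above, assuming one Wilson-Cowan-style excitatory/inhibitory pair per region, coupled through a made-up structural matrix, with simulated functional connectivity read out afterward. All parameters are illustrative, not drawn from any published model in [27].]

import numpy as np

def simulate_regions(C, steps=5000, dt=0.05, noise=0.02, seed=8):
    """Minimal Wilson-Cowan-style sketch: one excitatory (E) and one inhibitory
    (I) population per region, interregional coupling through E along the
    structural matrix C. Returns the excitatory time series (steps x n)."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    E, I = rng.random(n) * 0.1, rng.random(n) * 0.1
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    out = np.empty((steps, n))
    for t in range(steps):
        inp_E = 12 * E - 10 * I + C @ E + 1.0 + noise * rng.standard_normal(n)
        inp_I = 10 * E - 2 * I
        E = E + dt * (-E + sigmoid(inp_E))
        I = I + dt * (-I + sigmoid(inp_I))
        out[t] = E
    return out

# Made-up structural matrix: two modules of three regions plus one bridge.
C = np.zeros((6, 6))
C[:3, :3] = C[3:, 3:] = 0.5
np.fill_diagonal(C, 0.0)
C[2, 3] = C[3, 2] = 0.5
ts = simulate_regions(C)
fc = np.corrcoef(ts[1000:].T)    # simulated "functional connectivity"
print(np.round(fc, 2))           # within-module correlations are expected to
                                 # exceed between-module ones (not guaranteed)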

Outlining several themes on the network basis of cognition: “First, cognition has an anatomical substrate. All cognitive processes occur within anatomical networks, and the topology of these networks imposes powerful constraints on cognitive architectures. The small-world attributes of large-scale structural and functional networks, as well as their hierarchical and modular arrangement, naturally promote functional segregation and integration across the brain. Much of cognitive processing can be characterized in terms of dynamic integration of distributed (segregated) resources. Second, integration involves dynamic coordination (synchrony, coherence, linear and nonlinear coupling) as well as convergence. Recurrent connectivity enables system-wide patterns of functional connectivity, while highly central network nodes play specialized roles in coordinating information flow. These hub nodes are invoked in the context of association, transmodal processing, or dynamic convergence. Third, stimuli and cognitive tasks act as perturbations of existing network dynamics. Patterns of functional connectivity due to spontaneous neural activity are reconfigured in response to changes in sensory input or environmental demands.

Viewed from a network perspective, cognition is nothing more (and nothing less) than a special kind of pattern formation, the interplay of functional segregation and integration and the continual emergence of dynamic structures that are molded by connectivity and subtly modified by external input and internal state. The shape of cognition, the nature of the information that can be brought together and transformed, is determined by the architecture of brain networks. The flow of cognition is a result of transient and multiscale neural dynamics, of sequences of dynamic events that unfold across time. The variety of cognition, the seemingly endless diversity of mental states and subjective experiences, reflects the diversity and differentiation made possible by the complexity of the brain.

The network perspective differs radically from serial, representational, and symbolic accounts of cognition. Perhaps network thinking will eventually allow us to move beyond neural reductionism and cognitive functionalism and formulate a theoretical framework for cognition that is firmly grounded in the biology of the brain.” –O. Sporns [27]
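[SL Note: The small-world attributes and hub nodes invoked above correspond to standard graph measures. The sketch below, using the networkx library on a fabricated toy graph, computes clustering and characteristic path length (the two small-world ingredients) and flags candidate hubs by betweenness centrality.]

```python
import networkx as nx

# Toy illustration of the measures invoked above. The graph is fabricated:
# a Watts-Strogatz small-world graph, not an empirical brain network.
G = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=1)

clustering = nx.average_clustering(G)
path_length = nx.average_shortest_path_length(G)
print(f"clustering C = {clustering:.3f}, path length L = {path_length:.3f}")

# Compare against a random graph of the same size and density: a
# small-world graph keeps C high while L stays near the random value.
R = nx.gnm_random_graph(n=60, m=G.number_of_edges(), seed=1)
if nx.is_connected(R):
    print(f"random: C = {nx.average_clustering(R):.3f}, "
          f"L = {nx.average_shortest_path_length(R):.3f}")

# Betweenness centrality: nodes sitting on many shortest paths are
# candidate "hubs" for coordinating information flow.
bc = nx.betweenness_centrality(G)
hubs = sorted(bc, key=bc.get, reverse=True)[:5]
print("candidate hubs:", hubs)
```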

 

“In molecular systems biology, researchers are beginning to establish links between patterns of failure in biological networks (e.g., protein-protein interaction networks or genetic regulatory networks), on the one hand, and neurodegenerative disorders and various forms of cancer on the other hand. The deletion of hub proteins is more disruptive than the deletion of peripheral proteins. The topological embedding of proteins may thus be at least partly predictive of the functional impact of their inactivation, deletion, or mutation. These considerations have, in some cases, led to clinical applications: for example, protein subnetworks extracted from proteomics databases are more reliable and more accurate markers of metastatic tumors, compared to individual proteins. The quantitative analysis of failure modes in biological networks may thus become an important ingredient in the molecular characterization, diagnosis, and treatment of a broad range of human diseases, including numerous forms of cancer.

Compared to the explosive growth of network analysis methods in systems biomedicine, the application of network approaches to brain disease or brain injury is still in its infancy.” –O. Sporns [27]

 

“In contrast to engineered systems, biological systems rely on network mechanisms for robustness to extrinsic and intrinsic perturbations. One way to visualize robustness is to imagine a system in a stable state (an attractor) perturbed by a stochastic input or an internal fluctuation. A robust system will return to its original attractor or, if the perturbation is sufficiently large, transition to a new attractor. Network mechanisms may make the system more robust by limiting the effects of potentially disruptive perturbations and by preserving the attractor in the face of structural damage. Of the many mechanisms that support robustness in biological systems, the mechanisms of modularity and degeneracy are particularly relevant in a neural context.

Modularity limits the spread of, and helps to contain, the potentially disruptive effects of noisy perturbations. Structural and functional modules are key architectural ingredients in networks of highly evolved nervous systems. Modules are also ubiquitous in many other biological networks, such as networks of cellular regulatory elements or metabolic pathways. The importance of modularity in robustness extends to evolutionary and developmental processes.

Degeneracy is the capacity of a system to perform an identical function with structurally different sets of elements (Tononi et al., 1999; Edelman and Gally, 2001). Thus, a degenerate system can deliver constant performance or output even when some of its structural elements are altered, compromised, or disconnected. Unlike redundancy, degeneracy does not require duplication of system components. Degeneracy is ubiquitous in complex networks with sparse and recurrent structural connectivity. For example, communication patterns in such networks can occur along many alternative paths of equivalent length, a property that protects the network from becoming disconnected if nodes or edges are disrupted.

Jointly, modularity and degeneracy make brain networks functionally robust, by ensuring that the networks are stable to small structural perturbations. In addition, these concepts may underlie the remarkable capacity of the brain to withstand larger perturbations in the course of injury or disease. Clinical observations of patients suggest that individual brains have different degrees of "reserve" to counter degradations in their structural and functional networks. One theory of the concept of reserve suggests that passive reserve should be distinguished from active compensation. Passive reserve invokes an intrinsic capacity of the brain to withstand, up to an extent, the effects of structural damage. Individual brains may differ in their reserve capacity, due to differences in size or wiring pattern, with "high-reserve individuals" displaying more resilience against damage. In contrast to passive reserve, active compensation involves the capacity to engage in many different processing modes and to distribute functional networks to new locations in the brain if their structural substrates are compromised. Active compensation is closely related to the earlier discussed notion of degeneracy. Both passive reserve and active compensation are likely associated with efficient small-world network topologies and high dynamic diversity. It is an intriguing hypothesis that higher variability and degeneracy may predict greater robustness to injury or disease.” –O. Sporns [27]

 

“In many cases, the remarkable plasticity of the nervous system allows for substantial long-term improvement and sometimes complete restoration of functional deficits. These recovery processes represent a major challenge to network theories of the brain as they are the result of a complex interplay of physiological and behavioral processes and possibly deploy "brain reserve" to increase network resilience. Despite these complex structural and functional substrates, lesions of specific brain regions are often associated with specific cognitive and behavioral disturbances, and lesions of some areas tend to have more widespread effects than others…Studies have found that given the clustered, modular architecture of the mammalian cortex, loss of intercluster edges caused more severe disruptions while loss of intracluster edges had much less of an effect. Thus, the modular small-world architecture of the mammalian cortex showed a vulnerability pattern similar to that of a scale-free network such as the World Wide Web, with relative resilience to lesions of intracluster edges and relative vulnerability to lesions of intercluster edges, which are comparatively few in number. A subsequent study of node lesions in cat and macaque cortex confirmed that the pattern of structural damage of brain networks resembles that of scale-free networks, likely as a result of their modular architecture…Neural dynamic network simulation models have demonstrated that lesions of highly connected and highly central hub nodes produced the largest nonlocal lesion effects and that the extent of these lesion effects was largely determined by the modularity or community structure of the network. Lesions of connector hubs had the largest effects on functional connectivity and information flow as measured by patterns of interregional transfer entropy. Connector hubs had effects that extended beyond their immediate neighborhood and affected regions to which they were not directly connected. In contrast, lesions of provincial hubs (hubs whose central role was limited to a single module) had effects on other regions within the module, but not beyond. Lesions of peripheral nodes had little effect on information flow elsewhere in the network…One study (Achard, 2006) involved the placement of localized lesions around selected central locations defined by a standard brain coordinate. Around this central point, a fixed number of nodes (ROIs) and their attached edges were removed from the structural matrix, and the spontaneous dynamics of the remaining brain were recorded and compared to the dynamic pattern of the intact brain. The functional impact of localized lesions was then quantified by determining the difference between the spontaneous functional connectivity of the intact and lesioned brain. Sequential node deletion revealed that the human brain structural network was resilient to random node deletions and deletion of high-degree nodes, but much less resilient to deletion of high-centrality nodes. Localized lesion analysis showed that the centrality of the removed nodes was highly predictive of the functional impact of the lesion. Among the most disruptive were lesions of structures along the cortical midline, including the anterior and posterior cingulate cortex, as well as those in the vicinity of the temporoparietal junction. Lesions of areas in primary sensory and motor cortex had relatively little impact on patterns of functional connectivity.

The general picture that emerges from these computational models is that the functional impact of lesions can be partially predicted from their structural embedding in the intact brain. Lesion of highly central parts of the network (network hubs) produces larger and more widely distributed dynamic effects. Furthermore, lesions of some particularly disruptive areas in our model are known to produce profound disturbances of behavior, cognition, and consciousness in vivo. Models such as these must be further refined to include lesions of white matter pathways and neuroplasticity-mediated recovery. Since these models cannot currently be tested for specific behavioral or cognitive deficits, the assessment of lesion impact is based on the assumption that spontaneous network activity is a diagnostic marker of global dynamic differences. This idea has to be further explored and validated in empirical studies of brain injury and damage. Nonetheless, all these models reveal distributed structural and dynamical effects of localized structural lesions.” –O. Sporns [27]
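[SL Note: The computational lesion analyses described above can be approximated in a few lines. The sketch below deletes nodes from a toy surrogate network, either at random or in order of betweenness centrality, and tracks global efficiency as a crude stand-in for functional impact; it illustrates the method, not any cited study.]

```python
import random
import networkx as nx

# Sequential-node-deletion "lesion" sketch: remove nodes randomly or by
# betweenness centrality (hubs first) and track global efficiency.
# The graph is a toy surrogate, not an empirical brain network.

def efficiency_under_lesion(G, order, n_lesions=10):
    H = G.copy()
    effs = [nx.global_efficiency(H)]
    for node in order[:n_lesions]:
        H.remove_node(node)
        effs.append(nx.global_efficiency(H))
    return effs

G = nx.connected_watts_strogatz_graph(n=80, k=6, p=0.1, seed=2)

bc = nx.betweenness_centrality(G)
targeted = sorted(bc, key=bc.get, reverse=True)   # high-centrality first
random_order = list(G.nodes())
random.Random(2).shuffle(random_order)

print("targeted:", [f"{e:.2f}" for e in efficiency_under_lesion(G, targeted)])
print("random:  ", [f"{e:.2f}" for e in efficiency_under_lesion(G, random_order)])
# Efficiency typically degrades faster under targeted hub lesions,
# mirroring the reported vulnerability of high-centrality nodes.
```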

 

“The hypothesis that degenerative brain disease involves the disruption of structural and functional connectivity may not be limited to Alzheimer’s disease, but may include a number of dementia syndromes distinguished by specific clinical profiles. Causes for the selective vulnerability of different large-scale brain networks remain to be determined…Tononi and Edelman (2000) advanced the idea that schizophrenia results from a disruption of reentrant interactions responsible for the functional integration of the activities of distributed brain areas that give rise to conscious experience. This proposal framed schizophrenia as a "disease of reentry" (Edelman, 1989), the result of a disturbance of cortical mechanisms of integration that underlie conscious mental processes. Tononi and Edelman used computer simulations of cortical integration to identify numerous physiological factors, including altered patterns of synaptic plasticity, that may lead to functional disconnection…If schizophrenia is associated with profound disruptions of large-scale functional interactions within the thalamocortical system, then patients with the disease should exhibit altered brain network topologies. The degree to which the topology was disrupted was found to be correlated with the duration of the disorder, suggesting a progressive time course for these network changes. Small-world attributes of brain nodes in sections of the frontal, parietal, and temporal lobes exhibited significant differences, indicating that the disruption of large-scale networks in schizophrenia shows a regionally specific pattern.” –O. Sporns [27]

 

“Throughout his life, Alan Turing was fascinated by two major problems: the problem of mechanical computation leading to the construction of intelligent machines and the problem of biological growth, the "taking shape" or morphogenesis of biological matter. The theory referred to in this letter to the neurobiologist J. Z. Young was later published under the title "The Chemical Basis of Morphogenesis" and laid the foundation for mathematical models of biological pattern formation (Turing, 1952). Turing's theory of morphogenesis demonstrated how an initially homogeneous substrate of chemicals, consisting in the simplest case of just two compounds, an activator and an inhibitor, could "self-organize" into complex spatial patterns. Pattern formation was driven by two processes, the autocatalytic production of the activator and the differential diffusion of activator and inhibitor, resulting in the emergence of distinct spatial regions where one or the other compound prevailed. Turing developed his theory without ever having performed a single biological experiment. Given Turing's lifelong interest in the logical operations of the brain and in biological growth, it may have been only a matter of time before he turned to a neural theory of circuit formation or network growth. Alas, a victim of prejudice and persecution that led to his premature death in 1954, Turing was not to see the coming revolution in modern biology with its stunning advances in the embryology and development of body and brain. [SL Note: It sounds like Turing was preoccupied with the major issue of plasticity, and what it might mean for intelligent machines…].” –O. Sporns [27]
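[SL Note: Turing's activator-inhibitor scheme is easy to reproduce numerically. The sketch below integrates a one-dimensional Gierer-Meinhardt system, a standard descendant of Turing's reaction-diffusion proposal: the activator is autocatalytic, the inhibitor diffuses much faster, and a nearly homogeneous initial state self-organizes into a spatial pattern. Parameters are conventional illustrative choices.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 200, 20000, 0.01
Da, Dh = 0.02, 1.0                        # differential diffusion, Dh >> Da
a = 2.0 + 0.01 * rng.standard_normal(n)   # near-homogeneous start
h = 2.0 + 0.01 * rng.standard_normal(n)   # (homogeneous fixed point: a = h = 2)

def laplacian(u):
    # discrete diffusion on a ring of "tissue" (periodic boundary)
    return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

for _ in range(steps):
    # activator: autocatalytic production a^2/h, linear decay
    a = a + dt * (Da * laplacian(a) + a * a / h - a)
    # inhibitor: produced by the activator, faster decay and diffusion
    h = h + dt * (Dh * laplacian(h) + a * a - 2.0 * h)

# The initially flat activator profile now shows alternating peaks and
# troughs: a Turing pattern that emerged purely by self-organization.
peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)))
print("activator peaks formed:", int(peaks))
```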

 

“Turing's work on morphogenesis demonstrated the power of self-organizing developmental processes in shaping biological organisms. Self-organization is found throughout the natural world, including in such diverse phenomena as the pigmentation patterns of seashells, cloud formation, sand ripples on a shallow sea floor, the symmetry of snowflakes, the branching patterns of rivers, or Jupiter's great red spot. Self-organization, the formation of patterns without any overt prespecification, also plays an important role in neural development. Morphogenetic mechanisms combine features of Turing's original proposal of reaction-diffusion systems with more recently discovered principles of gene regulation and transcriptional control. Gradients of morphogens can exert concentration-dependent effects that determine expression levels of proteins at the cellular level and thus influence elementary developmental processes such as cell migration, differentiation, and adhesion. Computational models of neural development have addressed processes operating at different stages, from the formation of the neural tube to neurite outgrowth, the formation of topographic maps, and the refinement and remodeling of synaptic connections. The combination of growth processes operating during embryonic development and a multitude of mechanisms of neuronal plasticity that continue throughout the lifetime of the organism shapes the topology of brain networks. It is impossible to fully understand or interpret the structure of brain networks without considering their growth and development.” –O. Sporns [27]

 

“A number of models have been proposed for the growth of the Internet or certain social networks. These more abstract models of network evolution can explain some of the statistical properties of real-world networks. Yet, their straightforward application to the brain is problematic because in most of these models growth processes do not explicitly depend on network dynamics or on the network's spatial embedding. In brain networks, however, structural and functional connectivity are highly interrelated. Not only does the topology of structural connections shape neuronal activity but structural networks are also subject to change as a result of dynamic patterns of functional connectivity. These ongoing changes in the strength or persistence of structural links underlie developmental patterns of functional connectivity observed in the human brain from childhood to senescence.” –O. Sporns [27]
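[SL Note: The contrast drawn above can be illustrated directly. The toy sketch below grows a network by classic preferential attachment and by a variant in which attachment is penalized by spatial distance, a stand-in for wiring cost; both rules are assumptions for illustration only.]

```python
import math
import random

rng = random.Random(0)

def grow(n, m, spatial_penalty):
    # nodes occupy random positions in a 2-D "tissue" embedding
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    degree = [0] * n
    edges = set()
    for new in range(m + 1, n):            # nodes join one at a time
        weights = []
        for old in range(new):
            w = degree[old] + 1            # preferential attachment
            if spatial_penalty:
                d = math.dist(pos[new], pos[old])
                w *= math.exp(-d / 0.1)    # assumed wiring-cost penalty
            weights.append(w)
        for old in rng.choices(range(new), weights=weights, k=m):
            if (old, new) not in edges:
                edges.add((old, new))
                degree[old] += 1
                degree[new] += 1
    mean_len = sum(math.dist(pos[a], pos[b]) for a, b in edges) / len(edges)
    return max(degree), round(mean_len, 3)

print("pure preferential attachment (max degree, mean edge length):",
      grow(500, 3, False))
print("with spatial wiring cost     (max degree, mean edge length):",
      grow(500, 3, True))
# The spatially penalized network keeps connections short (cheap wiring)
# at the price of weaker hubs, illustrating the trade-off noted above.
```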

 

“The occurrence of phase transitions in network growth is significant because the sudden appearance of new structural network properties may have consequences for the network's dynamic behavior. Phase transitions have been suggested as important steps in the spontaneous emergence of collectively autocatalytic sets of molecules in the origin of life. Their potential role in neural development is still entirely unexplored…Random growth and preferential attachment models do not offer a plausible mechanism for the growth of brain networks, largely because they fail to take into account neural activity and the spatial embedding of the brain. Spatial embedding of networks has important implications for their topology and growth because the addition of new nodes or new connections consumes limited resources such as space and metabolic energy…There is abundant evidence that the structure of brain networks is also shaped by ongoing and evoked neural activity…In cortical networks, structural and functional connectivity mutually influence each other on multiple time scales. On fast as well as slower time scales, structural connections shape the topology of functional networks. Conversely, functional connectivity can also mold synaptic patterns via a number of activity-dependent mechanisms. Thus, structure shapes neural activity, and activity in turn shapes structure. This mutual or "symbiotic" relationship is important in guiding the development of structural and functional brain networks. The mutual interdependency of network topology and dynamics in the brain is an example of what Gross and Blasius (2008) have referred to as "adaptive coevolutionary networks". In these networks, dynamic processes unfolding on a relatively fast time scale shape the topology of the network on a slower time scale. These changes in topology in turn alter the dynamics of the system. Many real-world networks incorporate these interdependencies, which are essential ingredients in their growth and development. For example, a traffic or communication network may experience congestion, a form of dynamic failure, which triggers efforts to construct new links to ease traffic flow. The brain is a particularly striking example of a network where fast dynamic processes continually shape and are shaped by the topology of structural connections…The many points of close apposition between dendrites and axons have been called potential synapses, and their number far exceeds the number of actually realized synaptic junctions. Potential synapses thus theoretically allow for a great number of new structural patterns that "lie in waiting" and could be rapidly configured by synaptic rewiring. Wen et al. (2009) have suggested that the complex branching patterns of neuronal dendrites are arranged such that the local repertoire of potential synapses is maximized while keeping the cost (length, volume) of dendrites low. As cellular mapping and reconstruction techniques begin to deliver cellular network topologies, even closer linkages between cell morphology and connectivity will likely be discovered.” –O. Sporns [27]
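[SL Note: A minimal "adaptive coevolutionary network" in the sense of Gross and Blasius can be sketched as follows: fast node dynamics run on the current weights, while the weights change slowly in an activity-dependent, Hebbian-like way, so that structure shapes dynamics and dynamics reshape structure. All rules and rates below are illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
W = 0.1 * rng.random((n, n))        # initial random coupling (structure)
np.fill_diagonal(W, 0.0)
x = rng.standard_normal(n)          # node activity (fast variable)

eta, decay = 1e-4, 5e-5             # slow plasticity rates (assumed)

for _ in range(5000):
    # fast time scale: leaky dynamics driven by weighted neighbors + noise
    x = 0.9 * x + np.tanh(W @ x) + 0.1 * rng.standard_normal(n)
    # slow time scale: strengthen co-active pairs, let all weights decay
    W += eta * np.outer(x, x) - decay * W
    np.clip(W, 0.0, 1.0, out=W)
    np.fill_diagonal(W, 0.0)

# Co-fluctuating nodes end up strongly wired: the final weights reflect
# the history of the very dynamics they helped generate.
print("mean weight:", round(W.mean(), 4), "max weight:", round(W.max(), 4))
```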

 

“Computational models, particularly connectionist approaches, have yielded many important insights into mechanisms of learning and development. However, they do not address how network growth and plasticity shape the emergence of neural dynamics supporting cognition. Most connectionist models focus on algorithms for learning that are implemented in simple multilayer neural networks. The arrangement of nodes and edges in connectionist models bears little resemblance to the complex network architecture of the brain. Furthermore, most connectionist models do not possess rich nonlinear dynamics, and their development mainly involves adjustments of synaptic weights without including physical growth or network evolution. To be fair, connectionist models focus on different problem domains, and they address a different set of questions about development. Their success lies in their ability to capture patterns of change in human behavior, and this success is a continuing challenge to more anatomically and biophysically based models of brain networks. Ultimately, we need realistic models that can show us how we can get from neural dynamics and network evolution all the way to behavior and cognition. The connectivity patterns and growth mechanisms of such models will be informed by neuroimaging studies of the human brain across different age groups, including mapping of structural and functional connectivity. Comprehensive structural network analyses across several developmental stages of the human brain are still lacking. Developmental changes in the myelination of long-range fiber pathways create additional challenges for diffusion imaging and tractography. Diffusion imaging has been employed for noninvasive measurements of white matter maturation indexed by fractional anisotropy and mean diffusivity in 1- to 4-month-old infants. The development of specific thalamocortical tracts in the cat brain was investigated with high-resolution DSI. The method allowed for the delineation and three-dimensional imaging of several tracts including corticothalamic and corticocortical pathways across several months of postnatal development. Whole-brain structural networks of the developing human brain have not yet been comprehensively mapped.” –O. Sporns [27]

 

“Spontaneous neural activity can be recorded from human infants soon after birth. Five different resting-state networks were identified in the brains of preterm infants scanned at term-equivalent age, with fMRI data acquired during periods of sleep (Fransson et al., 2007). Resting-state networks appeared predominantly to link homologous areas in the two hemispheres, and the adult pattern of the default mode network, particularly the linkage between its anterior and posterior components, was not found. Fransson et al. suggest that the absence of a fully connected default mode network may reflect the relative immaturity of the infant brain's structural organization. Gao et al. performed resting-state fMRI recordings in healthy pediatric subjects between 2 weeks and 2 years of age. In very young infants, the default mode network was not yet fully represented, and additional components became linked at about 1 year of age. By 2 years of age, all major components of the default mode network appeared to be functionally connected. Throughout this early developmental period, a region comprising the posterior cingulate/precuneus and retrosplenial cortex occupied the most central position and was most strongly linked to other regions within the emerging default network. While Gao et al. (2009) found evidence for a relatively early emergence of the default mode network, results by Fair et al. (2008) argue for a slower developmental time course. Fair et al. (2008) found that the default mode network exhibits significant differences in children (ages 7-9 years) compared to young adults (ages 21-31 years). While interhemispheric connections between homotopic cortical regions were found to be strong in children, other linkages were significantly weaker than those in the adult network. Default regions were only sparsely connected, with most connections spanning the brain in the anterior-posterior direction entirely absent. Most functional connections were significantly weaker in children than in adults, an effect that could be due to weaker coupling or to coupling that is more variable across time. The latter explanation is less likely since a separate study using the same subjects and examining other, task-related brain networks showed both increases and decreases in the strength of functional connectivity across time.” –O. Sporns [27]

 

“Endogenous brain activity results in resting-state functional networks that exhibit characteristic differences between children, young adults, and elderly adults. The topology of functional networks changes throughout development, adulthood, and aging. The first major developmental changes involve the emergence of robust and globally linked resting-state networks by a process that coordinates functional specialization with integration. In children, short-range functional interactions dominate while longer-range interactions appear later in adolescence. Late adulthood and aging are accompanied by a breakup of larger modules into smaller ones that are less well delineated and exhibit greater cross-linkage. Overall, the growing and the aging brain go through gradual rebalancing of functional relationships while preserving large-scale features such as small-world connectivity. Developmental trends in functional segregation and integration are strong candidates for potential neural substrates of cognitive change across the human life span…The topology of structural and functional brain networks changes profoundly during neural development, adulthood, and senescence, due to a multitude of mechanisms for growth and plasticity, operating on different cellular substrates and time scales. These processes also account for the resilience of brain networks against structural alteration and damage, topics considered in detail in the preceding chapter. As we learn more about the dynamics of connectivity, mounting evidence indicates that most connectional changes are not the outcome of environmental "imprinting," the permanent transfer of useful associations or linkages into the brain's wiring pattern. Instead, the picture is one of self-organization, the complex interplay between the formation of organized topology and ongoing neural dynamics. Nervous systems do not converge onto a final stable pattern of optimal functionality; rather, their connectivity continues to be in flux throughout life. As Turing noted in his paper on morphogenesis, "Most of an organism, most of the time, is developing from one pattern into another, rather than from homogeneity into a pattern"…Models of neural development are beginning to provide insight into the fundamental cellular and synaptic mechanisms that drive connectional change. Disturbances of these mechanisms are potential candidates in a variety of neurodevelopmental disorders, and they highlight the delicate balance that must be maintained for properly organized connectivity to emerge. One of the most puzzling aspects of structural brain networks at the cellular scale is their extraordinary propensity for rewiring and remodeling in the presence or absence of neural activity (Minerbi et al., 2009). Some studies suggest that individual synapses can change shape and even come into and go out of existence on time scales far shorter than those of some forms of memory. Many elements of the brain's "wiring pattern," or structural connectivity graph, at the scale of cells and synapses, appear to be highly dynamic, and the relative instability of individual synapses casts doubt on their reliability as sites of long-term modification and memory. If these observations are further extended and found to generalize across much of the brain, then processes that ensure some degree of stability or "functional homeostasis" at the level of the entire system will become of central importance.” –O. Sporns [27]

“Processes of self-organization also appear to underlie the highly variable yet robust nature of brain dynamics. It turns out that mechanisms of plasticity may play an important role in maintaining the networks of the brain in a dynamic regime that ensures high sensitivity to inputs, high information capacity, and high complexity.” –O. Sporns [27]

 

“Francisco Varela advanced a set of ideas that squarely aimed at characterizing mental states on the basis of physical events occurring in brain networks (Varela, 1995). He envisioned brain dynamics as the ongoing operation of a "transient coherency-generating process" that unified dispersed neural activity through synchronous relationships. The transience of the process is essential because it allows for "a continual emergence," an ongoing dynamic flow in a cognitive-mental space. According to Varela's theory, coherent patterns are assembled and dissolved depending upon changing conditions of input or task demand, and their configurations corresponded to sequences of mental states experienced at each moment in time. The brain's ability to self-organize and undergo transient state dynamics is crucial for its capability to simultaneously satisfy momentary demands posed by the environment and integrate these exogenous signals with the endogenous activity of brain and body. Integrated brain activity forms the neural basis for the unity of mind and experience.” –O. Sporns [27]

 

“In nearly all cases, the connection topology of the network plays an important role in the emergence of global or collective dynamic states…Heterogeneous coupling and multiscale dynamics are also ubiquitous features of the brain. Brain connectivity is organized on a hierarchy of scales from local circuits of neurons to modules of functional brain systems. Distinct dynamic processes on local and global scales generate multiple levels of segregation and integration and give rise to spatially differentiated patterns of coherence. Neural dynamics at each scale is determined not only by processes at the same scale but also by the dynamics at smaller and larger scales. For example, the dynamics of a large neural population depend on the interactions among individual neurons unfolding at a smaller scale, as well as on the collective behavior of large-scale brain systems, and even on brain-body-environment interactions. Multiscale brain dynamics can be modeled through mean-field approaches that bridge neural microstructure and macroscale dynamics, for example, neural mass models. Mean-field models mainly address dynamic effects at one scale by averaging over the dynamics of components at smaller scales. However, truly multiscale or scale-free dynamics requires the consideration of a nested hierarchy of linear and nonlinear dependencies.” –O. Sporns [27]

 

“Heterogeneous networks have shown hierarchical synchronization with high-degree nodes becoming most strongly synchronized and forming a dynamic "network core" that most closely reflected the global behavior and functional connectivity of the system…A recurrent theme in studies of collective behavior in complex networks, from epidemic to brain models, is its dependence on the network's multiscale architecture, its nested levels of clustered communities. The functional significance of the hierarchical nature of the brain's structural and functional connectivity is still largely unexplored. Computational studies suggest that nested hierarchies promote structured and diverse dynamics. An additional level of diversity results from activity-dependent plasticity and structural alterations at the level of cells and synapses. Dynamic processes are not only shaped by network topology but also actively participate in the carving of structural connection patterns during development and in the continual adjustment of synaptic weights. Hence, dynamic diversity is likely accompanied by an as yet unknown level of diversity in synaptic patterns.” –O. Sporns [27]
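[SL Note: Hierarchical synchronization of this kind is commonly demonstrated with Kuramoto phase oscillators coupled along the edges of a heterogeneous graph. The sketch below builds a scale-free toy network and checks whether high-degree nodes lock more strongly to the mean field, forming the dynamic "core"; all parameters are illustrative.]

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
G = nx.barabasi_albert_graph(n=100, m=2, seed=3)   # heterogeneous degrees
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

K, dt, steps = 0.5, 0.05, 4000
theta = rng.uniform(0, 2 * np.pi, 100)     # oscillator phases
omega = rng.normal(0.0, 0.2, 100)          # heterogeneous frequencies
lock = np.zeros(100)
samples = 0

for t in range(steps):
    # Kuramoto: dtheta_i = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    phase_diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + K * (A * np.sin(phase_diff)).sum(axis=1))
    if t > steps // 2:                     # measure after transients
        psi = np.angle(np.exp(1j * theta).mean())   # mean-field phase
        lock += np.cos(theta - psi)        # locking of each node to the core
        samples += 1

lock /= samples
hubs = deg >= np.percentile(deg, 90)
print("mean locking, hubs     :", round(lock[hubs].mean(), 3))
print("mean locking, non-hubs :", round(lock[~hubs].mean(), 3))
```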

 

“Neuronal activity unfolds on multiple time scales from fast synaptic processes in the millisecond range to dynamic states that can persist for several seconds to long-lasting changes in neural interactions due to plasticity. Over time, neural activity and behavior display variability, which can be due to a variety of sources. Some of these sources are considered "noise," because they give rise to random fluctuations and do not form "part of a signal", for example, the stochastic openings and closings of ion channels or subthreshold fluctuations in cellular membrane potentials. Much of neuronal variability, however, is not due to noise in molecular or cellular components but is the result of the deterministic behavior of the brain as a coupled system. This variability makes significant contributions to neuronal signals and is ultimately expressed in variable cognitive states and behavioral performance. For example, the ongoing fluctuations of endogenous neural activity that are characteristic of the brain's resting state have been shown to account for a significant part of the trial-to-trial variability of behavioral responses, and variable dynamics on multiple time scales is readily seen in large-scale computational models. What is the extent of this variability, how can we characterize it, and what are the network mechanisms by which it is generated?

Brain dynamics are inherently variable and "labile," consisting of sequences of transient spatiotemporal patterns that mediate perception and cognition. These sequences of transients are a hallmark of dynamics that are neither entirely stable nor completely unstable and instead may be called metastable. Metastable dynamics unfolds on an attractor that forms a complex manifold with many mutually joined "pockets" that slow or entrap the system's trajectory and thus create its intermittent, quasi-stable temporal behavior. Such a manifold may be visualized as a surface with numerous shallow indentations or wells. An object that moves along the surface will fall into a well, where it becomes trapped for a while before jumping out again. The wells represent metastable states that are occupied for some time, but not permanently, and that may be visited repeatedly. The transitions between these metastable states occur in intervals that are typically much longer than the elementary time constants of any of the system's components. In simulated neural systems, metastability is associated with sparse extrinsic connections that link modules or clusters, and its dynamics can be characterized by the entropy of its spectral density.” –O. Sporns [27]
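[SL Note: The "manifold with shallow wells" picture has a minimal physical analogue: an overdamped particle in a multi-well potential driven by noise. In the sketch below, the particle dwells in one well (a metastable state), occasionally hops to a neighboring well, and dwell times far exceed the system's elementary relaxation time; the potential and noise level are illustrative choices.]

```python
import numpy as np

rng = np.random.default_rng(4)
dt, steps, noise = 0.01, 200_000, 0.9
x = 0.0                                   # state on a "washboard" landscape

def force(x):
    # -dV/dx for V(x) = -cos(2*pi*x): wells at integer x, barriers between
    return -2.0 * np.pi * np.sin(2.0 * np.pi * x)

dwells, well, dwell = [], 0, 0
for _ in range(steps):
    x += dt * force(x) + np.sqrt(dt) * noise * rng.standard_normal()
    dwell += 1
    if abs(x - well) > 0.75:              # escaped to a neighboring well
        dwells.append(dwell)
        well, dwell = round(x), 0

print("transitions:", len(dwells))
print("mean dwell time:", round(np.mean(dwells) * dt, 1),
      "vs. intra-well relaxation time ~", round(1.0 / (4 * np.pi**2), 3))
```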

 

“Wolfgang Maass proposed the idea of "liquid-state computing" as a model of neural processing based on trajectories in state space instead of fixed points (Maass et al., 2002). The term "liquid state" expresses an analogy with the physical properties of an excitable medium, such as a liquid, that can be transiently perturbed, with each perturbation leaving a characteristic dynamic trace. Liquid-state machines rely on the dynamics of transient perturbations for real-time computing, unlike standard attractor networks, which compute in sequences of discrete states. Applied to the brain, the model is based on the intrinsic and high-dimensional transient dynamics generated by heterogeneous recurrent neural circuits from which information about past and present inputs can be extracted. The model can be implemented in a generic neural microcircuit.” –O. Sporns [27]
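[SL Note: Maass's liquid state machine is formulated with spiking microcircuits; the sketch below uses the closely related rate-based echo state network instead, which shares the key idea: a fixed random recurrent "reservoir" carries transient traces of past input, and only a memoryless linear readout is trained. The task (recovering a delayed copy of the input) and all sizes are illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(5)
n_res, T, delay = 200, 2000, 5

# Fixed random reservoir, scaled to spectral radius < 1 ("echo" property)
W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res)

u = rng.uniform(-1, 1, T)                 # input stream
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])      # transient "liquid" state
    states[t] = x

# Train a linear readout (ridge regression) to recover u delayed 5 steps:
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ w_out
print("readout correlation:", round(np.corrcoef(pred, y)[0, 1], 3))
# The memory required by the task lives entirely in the reservoir's
# transient dynamics; the readout itself is memoryless.
```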

 

“Relative phase between local oscillators represents an important collective variable that can characterize the dynamics of both brain and behavior. Collective variables that govern the coordinated behavior of interacting components are important ingredients in coordination dynamics. Their creation effectively involves a reduction in the dimensionality of the system. Integration of system components through coordination eliminates most available degrees of freedom and leaves only a few that are relevant and needed to describe the system's structure and time evolution. Thus, coordination greatly "simplifies" complexity. Several connectional features of brain networks promote dimension reduction. One example is the existence of modules, which blend together functional contributions from within their membership and compete with those of other modules. The first process is more consistent with phase synchrony or coherence while the second results in phase dispersion or scattering. Modules define the major axes of a low-dimensional dynamic space or manifold, which is traversed by the dynamic trajectory of the system in continual cycles of transience and stability.” –O. Sporns [27]
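[SL Note: The canonical collective variable built from relative phase is the Kuramoto order parameter, which compresses N oscillator phases into a single amplitude and mean phase, a literal dimension reduction:]

```latex
r\,e^{i\psi} \;=\; \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j}, \qquad 0 \le r \le 1
```

Here r approaching 1 corresponds to phase synchrony or coherence, and r approaching 0 to phase dispersion or scattering, the two processes contrasted above.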

 

“Can dynamic diversity explain the flexibility of cognition? An important piece of evidence that supports this idea comes from an analysis of brain signal variability across developmental stages from children to adults. McIntosh et al. (2008) found increased variability in the brains of adults as compared to children and a negative correlation of this brain variability with the variability of behavioral performance. As brain and behavior became more mature, greater stability in behavioral performance was accompanied by an increase in the variability of neural activity. In this study, greater functional variability and a larger repertoire of metastable functional states were found to be associated with more mature cognitive capacities.” –O. Sporns [27]

 

“Rapid transitions in global patterns of functional correlations have been observed in numerous electrophysiological studies, in both task-evoked and spontaneous neural activity. Spontaneous fluctuations in amplitude and coherence exhibit "heavy-tail" or power-law distributions, scale invariance across multiple frequencies, and transient long-range correlations. One of the first reports of scale-free phenomena in cortical potentials found long-range correlations and power-law scaling of amplitude fluctuations in MEG and EEG recordings. Walter Freeman reported the occurrence of episodes of intermittent phase locking at scales from millimeters to that of an entire cortical hemisphere in spontaneous human EEG recordings. Gong et al. (2003) recorded episodes of scale-invariant intermittent phase synchrony in human EEG. Stam and de Bruin (2004) detected scale-free distributions of global synchronization time in recordings of spontaneous EEG across several frequencies, ranging from the fast gamma to the slow delta band. Multiple studies have demonstrated the alternating occurrence of periods of phase shift or reset and phase synchrony or stability, with reset occurring typically within less than 100 milliseconds and periods of stability ranging from 100 milliseconds to seconds. Perturbations of ongoing fluctuations modify their scale-free characteristics by reducing the power-law exponent and diminishing long-range temporal correlations.

What is the origin of power-law fluctuations in cortical potentials? Several authors have attributed power laws in cortical dynamics to the existence of a self-organized critical state. Per Bak and colleagues suggested that complex behavior of dynamic and spatially extended systems can emerge spontaneously, as a result of the system's own self-organization. The system's complex behavior is characterized by scale-invariant fractal properties in its spatial patterning as well as scale-free distributions of dynamic events. Bak called this regime "self-organized criticality" (SOC) because systems naturally evolve toward this state and exhibit a fine balance of robust interactions and sensitivity to perturbations. A classic example is a pile of sand. As more and more grains of sand are added to the pile, it will grow and its slope will at first increase, until it reaches a critical value. At this "angle of repose" the addition of more grains of sand cannot increase the angle further. Instead, avalanches will occur that continually restore the critical angle. These avalanches come in all sizes, and their distribution follows a power law. When it occupies this type of dynamical regime, the system is said to display "critical" behavior. Importantly, in the case of the sandpile, the system reaches this critical state on its own, without the need for external tuning or parameter adjustment. The system maintains this critical behavior indefinitely by continually restoring the balance between its internal structure and external perturbations. A critical state is reached when a system evolving from an ordered into a disordered state approaches the "edge of chaos." Chris Langton (1990) studied cellular automata in ordered, critical, and chaotic dynamic regimes, and concluded that the computational capacity of these automata was greatest at the border between order and chaos, in a critical state. Models of interaction networks of cellular proteins also exhibited self-organized criticality, as did a number of neuronal network models (see below). The diversity of these models raised the intriguing possibility that SOC could explain the spontaneous emergence and stability of complex modes of organization in a wide variety of systems. All of the modeled systems exhibiting SOC shared certain attributes, for example, the existence of scale-free distributions of dynamic events (often called avalanches in keeping with the example of the sandpile), the presence of a phase transition taking the system from an ordered to a disordered regime, and the spontaneous and robust evolution toward criticality without the need to adjust or fine-tune system parameters.” –O. Sporns [27]
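[SL Note: The sandpile model is simple enough to simulate directly. The sketch below implements the Bak-Tang-Wiesenfeld rule: grains are dropped at random sites, any site holding four or more grains topples and sheds one grain to each neighbor (grains fall off the edges), and toppling cascades are recorded as avalanches whose size distribution becomes heavy-tailed once the pile reaches its critical state.]

```python
import numpy as np

rng = np.random.default_rng(6)
L = 30
grid = np.zeros((L, L), dtype=int)
sizes = []

for drop in range(20_000):
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1                        # drop one grain
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)  # sites ready to topple
        if len(unstable) == 0:
            break
        for i2, j2 in unstable:
            grid[i2, j2] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i2 + di, j2 + dj
                if 0 <= ni < L and 0 <= nj < L:   # else grain falls off edge
                    grid[ni, nj] += 1
        # sites pushed to >= 4 are caught on the next pass
    if size > 0:
        sizes.append(size)

sizes = np.array(sizes)
# Crude check of heavy-tailed scaling: fraction of avalanches per log bin
for lo, hi in [(1, 2), (2, 4), (4, 8), (8, 16), (16, 32), (32, 64)]:
    print(f"size {lo:>2}-{hi:<2}: {np.mean((sizes >= lo) & (sizes < hi)):.4f}")
```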

 

“The critical dynamic regime has many properties that are highly desirable for neural information-processing systems. Modeling of branching processes demonstrated that criticality is associated with maximal information transfer and thus with high efficiency of neuronal information processing. The critical regime also sustains a maximal number of metastable dynamical states. The parallel existence of many attractor states maximizes the network's capacity to store information. In addition, the critical regime allows neural systems to respond with optimal sensitivity and dynamic range to exogenous perturbations. "Liquid-state" recurrent neural networks can perform complex computations only at or near the critical boundary separating ordered and chaotic dynamics. Furthermore, power-law distributions of size and duration of neuronal avalanches are indicative of long-range correlations across all spatial scales in the system. The critical state thus ensures that the system can access a very wide and diverse state space or functional repertoire.

If the critical state is indeed privileged in regard to information processing and dynamic diversity, then how might neural systems reach this state and how might they tune themselves in a self-organized manner to maintain it? Simple models of network growth result in a convergence of the network topology toward characteristic critical values. Other modeling studies suggest that neural plasticity may play an important role in generating and maintaining the critical state. For example, a spiking neural network model with dynamic synapses was found to exhibit robustly self-organized critical behavior. A form of plasticity that is sensitive to the relative timing of presynaptic and postsynaptic responses, called spike-timing-dependent plasticity, can mold the connectivity structure of a globally connected neuronal network into a scale-free small-world network that resides in a critical dynamic regime. Even after the critical state is attained, spontaneous activity results in fluctuations in synaptic weights while global distributions of connection weights remain stable. Hsu and Beggs (2006) designed a neural model that converged on a critical dynamic regime through synaptic plasticity. Plastic changes accrued as a result of a homeostatic mechanism that preserved firing rate, resulting in network behavior that converged onto criticality and was stable against perturbations. Siri et al. (2008) investigated the effect of Hebbian plasticity on the capacity of a random recurrent neural network to learn and retrieve specific patterns. Plasticity results in profound changes of network behavior, leading the network from chaotic to fixed-point dynamics through a series of bifurcations. The sensitivity of the network to input is greatest while it occupies a critical regime "at the edge of chaos." The authors suggest that additional mechanisms of homeostatic plasticity may serve to stabilize the system within this functionally important state.

The relationship between self-organized criticality (SOC) in neuronal networks and their structural connection topology remains largely unresolved. A variety of growth, rewiring, or plasticity rules can give rise to SOC behavior, but it is unclear if SOC can occur regardless of connection topology, or whether some network architectures selectively promote its occurrence.” –O. Sporns [27]
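[SL Note: The branching-process result mentioned above is easy to reproduce. In the sketch below, each active unit activates a Poisson-distributed number of descendants with mean sigma, the branching parameter; subcritical activity (sigma < 1) dies out quickly, supercritical activity (sigma > 1) explodes, and the critical point sigma = 1 yields the widest, power-law-like spread of avalanche sizes.]

```python
import numpy as np

rng = np.random.default_rng(7)

def avalanche_size(sigma, cap=100_000):
    # one avalanche: start with a single active unit and propagate
    active, total = 1, 1
    while active > 0 and total < cap:
        active = rng.poisson(sigma * active)   # next generation of activity
        total += active
    return total

for sigma in (0.8, 1.0, 1.2):
    sizes = np.array([avalanche_size(sigma) for _ in range(2000)])
    print(f"sigma={sigma}: median size={int(np.median(sizes))}, "
          f"max size={sizes.max()}")
```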

 

“Most readers would probably agree with the statement that the brain is extraordinarily complex. However, there is considerably less agreement as to how complexity can be defined or measured in the brain or elsewhere. So far, it has proven difficult to identify a common theoretical foundation for the many manifestations of complexity in systems as diverse as societies, cells, or brains, and the existence of a general theory of complexity is still in question. Nevertheless, it is undeniable that many complex systems have certain common characteristics, one of which is a mode of organization that is reminiscent of "hierarchical modularity". As Herbert Simon noted in 1962, many complex systems are hierarchically organized and composed of interrelated subsystems, which themselves may have hierarchical structure (Simon, 1962), defined by nested clusters of strong or dense interactions. Importantly, interactions within subsystems are stronger than interactions among subsystems, thus rendering the system "nearly decomposable" into independent components. In such nearly decomposable systems, "the short-run behavior of each of the component subsystems is approximately independent of the short-run behavior of the other components," and "in the long run the behavior of any one of the components depends in only an aggregate way on the behavior of the other components" (Simon, 1962). Simon pointed out that in complex systems "the whole is more than the sum of the parts" such that "given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole". All complex systems contain numerous components that engage in organized interactions and give rise to "emergent" phenomena. These phenomena cannot be reduced to properties of the components. Reductionist approaches have only limited success when applied to complex biological systems. For example, a recent review on cellular networks states that "the reductionist approach has successfully identified most of the components and many interactions but, unfortunately, offers no convincing concepts and methods to comprehend how system properties emerge" (Sauer et al., 2007). The authors continue to propose that "[ . . . ] the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously, and by rigorous data integration with mathematical models," the research program of the emerging discipline of systems biology (Kitano, 2002). The highly interconnected, hierarchical, and dynamic nature of biological systems poses a significant experimental and theoretical challenge, one that is not adequately addressed by the reductionist paradigm. However, what exactly is complexity, and how can it help us to better understand the structure and function of nervous systems? Complexity describes systems that are composed of a large number and a great variety of components. In addition, complexity refers to a mode of organized interaction, a functional coherence that transcends the intrinsic capacities of each individual component. The pervasiveness of complexity in the brain raises the question of whether a better understanding of complexity offers important clues about the nature of brain function and whether it can inform us about how nervous systems are structurally and functionally organized.” –O. Sporns [27]

 

“I argue that the union or coexistence of segregation and integration expressed in the multiscale dynamics of brain networks is the origin of neural complexity. I attempt to define it more formally on the basis of how information is distributed and organized in the brain. While this definition of complexity depends mostly on statistical aspects of neural interactions, one may ask if there are specific patterns or motifs in structural connectivity that favor or enable the emergence of highly complex dynamic patterns. I identify some candidates for such structural patterns and compare them to our current knowledge of how brains are anatomically organized. I examine the relationship of neural complexity to consciousness and explore the potential evolutionary origins of complexity in the brain… For a system composed of such elements to be capable of complex behavior, the behavior of individual components must partly depend on that of other elements in the system, that is, the system must be "nearly decomposable" with "weak links" between components that can serve as the basis for system-wide coordination and emergence. The brain is a good example of a system that consists of components nested across multiple hierarchical levels, including neurocognitive networks, individual brain regions, specialized neural populations, and single neurons… Interactions between components integrate or bind together their individual activities into an organized whole. They create dependencies between components, and they also affect the components' individual actions and behaviors. Interactions are often shaped by structured communication paths or connections. These connections can vary in their sparseness and strength, and their specific pattern has an important role in determining the collective behavior of the system. Different network topologies can give rise to qualitatively different global system states. In the brain, interactions are relayed by structural connections, and they can be further modulated intrinsically by diffuse chemical signals or extrinsically by statistical relationships in environmental stimuli… The interactions of components in a nearly, but not fully, decomposable system generate phenomena that cannot be reduced to or predicted from the properties of the individual components considered in isolation. Sequences of amino acids in peptide chains give rise to three-dimensional folding patterns of proteins that determine their functional properties. Predation and competition among species control their survival and reproduction within ecosystems. Geographic dispersion, specialization of skills, and social stratification of individual humans shape their societal and economic organization. These emergent phenomena cannot be fully explained by dissecting the system into components, nor can their full functionality be revealed by an examination of isolated components or interactions alone. In many cases, different levels of scale interact. Local coupling shapes emergent global states of the system, which in turn can modify the internal structure of components or reconfigure their interactions…While there is general agreement that complex systems contain numerous components whose structured interactions generate emergent phenomena, their empirical observation poses many challenges. Systematic observation of complex systems requires that the system be sensibly partitioned into components and interactions whose states can be tracked over time.
Defining components and interactions, or nodes and edges in the language of complex networks, requires a number of choices about relevant spatial and temporal scales, resolution, or empirical recording methods, all of which can influence the nature of the reconstructed dynamics. This subtle but important point is often neglected. Unlike idealized systems such as cellular automata or spin glasses, where the elementary components and their interactions are exactly defined, most real-world systems contain components that blend into each other, form nested hierarchies, come into or go out of existence, and engage in dynamics on multiple time scales. In such systems, choices about how components are defined and observed must be carefully justified, because they can impact the computation and interpretation of network or complexity measures.” –O. Sporns [27]

 

“Despite broad agreement on some of the defining features of complexity, there is currently no general way to measure or estimate the complexity of an empirically observed system. Numerous complexity measures have been defined, usually within the context of a specific application or problem domain. This heterogeneity reflects the nascent state of the field of complexity theory, as well as real differences in the way complexity is conceptualized in physical, biological, or social systems. Measures of complexity define a common metric that allows different systems or different instantiations of the same system at different points in time to be compared to one another. Such comparisons make sense only for systems that are structurally or functionally related. For example, a comparison of the complexity of two nervous systems in different states of endogenous or evoked activity may reveal meaningful differences in their dynamic organization while it makes little sense to quantitatively compare the complexity of a cell phone network with that of a network of interacting proteins.” –O. Sporns [27]

 

“There are two main categories of complexity measures. Measures in one category quantify complexity as the difficulty of describing or building a given system. Within this category, measures of complexity based on description length generally quantify the degree of randomness, and while they have had significant applications in physics and computation, they are less interesting in a biological and neural context. One of these measures, algorithmic information content, defines complexity as the amount of information contained in a string of symbols. This information can be measured by the length of the shortest computer program that generates the string. Symbol strings that are regular or periodic can be computed by short programs and thus contain little information (low complexity) while random strings can only be generated by a program that is as long as the string itself and are, thus, maximally complex. Other measures of complexity such as logical depth or thermodynamic depth are related to algorithmic information content in that they become maximal for systems that are "hard to build" or whose future state is difficult to predict. Thus, these measures evaluate the length or cost of a system's generative process rather than its actual dynamics or its responses to perturbations.

A second category of complexity measures captures the degree to which a system is organized or the "amount of interesting structure" it contains, and these measures are highly relevant in the context of biological and neural systems. Several different measures exist within this category, and most of them have in common that they place complexity somewhere in between order and disorder. In other words, complex systems combine some degree of randomness and disorganized behavior with some degree of order and regularity. Complexity is high when order and disorder coexist, and low when one or the other prevails. How do order and disorder manifest themselves in a neural context? One way to create a neural system that is highly disordered is to isolate its components from one another so that each of them acts independently. In such a system, all components only express their own response preferences and are maximally specialized (segregated). A neural system that is highly ordered might be one where all components are strongly coupled to one another to the point where the system becomes fully synchronized and integrated. In this case, the interactions have overwhelmed any local specialization and the system acts as if it were composed of only one independent component. Clearly, neither of these extreme cases of order and disorder corresponds to the type of organization seen in any real nervous system. Instead, a mixture of order and disorder, of randomness and regularity, segregation and integration, prevails in brain structure and function.

Order and disorder are closely related to the concepts of information and entropy, and it is therefore no surprise that many measures of complexity that quantify the degree of organization, regardless of where they are applied, use information as their basic building block. A foundational measure of information theory is entropy, whose origins trace back to thermodynamics. In Boltzmann's formulation, entropy links the macro state of a system (e.g., its temperature) to a probability distribution of its microstates (e.g., the kinetic energy of gas molecules). In the context of Shannon's information theory (Shannon, 1948), the entropy of a system is high if it occupies many states in its available state space with equal probability. In that case, an observation of the state of the system provides a high amount of information because the outcome of the observation is highly uncertain. If the system visits only very few states, then its entropy is low and its observation delivers little information.

Several measures of complexity as organization have been proposed, including effective complexity and physical complexity. Effective complexity measures the minimum description length of a system's regularities and attempts to distinguish features of the system that are regular or random. As such, it is a formal measure of the system's information content resulting from its intrinsic regularities, but it cannot easily be obtained from empirical observations. Physical complexity specifically addresses the complexity of biological systems. Chris Adami has argued that the complexity of a biological organism must depend crucially on the environment within which it functions. Therefore, the physical complexity of a biological system can be understood as the mutual information between gene sequences (genomes) and the ecological environment within which they are expressed. The physical complexity of a given genetic sequence is the amount of information it encodes about the environment to which the organism is adapting. Physical complexity of an organism therefore depends on the ecological context within which the organism has evolved. An application of this measure of complexity to the nervous system has not yet been attempted, but it might involve an estimation of how much structured information about an environment is captured in regularities of brain structure or function.

While all of these measures of complexity highlight interesting aspects of various physical and biological systems, none seem particularly well suited for quantifying the amount of organization or complexity encountered in neural systems. What are the key markers of complexity in the brain? Network analyses have consistently pointed to the importance of segregation and integration in the structural and functional organization of the brain. Structurally, segregation and integration are enabled by the small-world modular architecture of brain networks. Functionally, the interplay of specialized and integrated information is what enables the variety and flexibility of cognition. Segregation and integration are essential organizational features of brain networks. As I argue, they can be quantified with information-theoretic approaches, and the coexistence of segregation and integration is a major determinant of neural complexity.” –O. Sporns [27]
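 

Both notions in the passage above can be made concrete in a few lines of code. In the sketch below (my illustration; zlib's compressed size is only a computable stand-in for the formally uncomputable shortest-program length), a periodic byte string compresses to almost nothing while a random one does not, and Shannon entropy separates ordered from disordered state distributions.

```python
# A minimal sketch, assuming compressed size as a practical proxy for
# algorithmic information content (the true measure is uncomputable).
import random
import zlib
from collections import Counter
from math import log2

def shannon_entropy(states):
    """Shannon entropy (bits) of the empirical distribution of observed states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def description_length(s: bytes) -> int:
    """Compressed size in bytes: a crude proxy for the shortest generating program."""
    return len(zlib.compress(s, 9))

regular = b"01" * 500                                       # periodic: a short program suffices
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(1000))   # essentially incompressible

print(description_length(regular))   # tiny: low algorithmic information
print(description_length(noise))     # near 1000: maximal information
print(shannon_entropy("AAAAAAAB"))   # low entropy: few states, unequal use
print(shannon_entropy("ABCDABCD"))   # 2 bits: four states, equal probability
```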

 

“Segregation and integration in the dynamic patterns of functional and effective brain connectivity can be defined in terms of statistical dependencies between distinct neural units forming nodes in a network. Statistical dependencies can be expressed as information, and functional and effective connectivity essentially quantify how information is distributed, shared, and integrated within a network. A first step is to look at pairwise interactions and characterize them in terms of information and entropy. The information shared by two elements, their mutual information, expresses their statistical dependence, that is, how much information the observation of one element can provide about the state of the other element. It is defined as the difference between the sum of the two individual entropies and the joint entropy. If no statistical dependence exists between the two elements, then observing the state of one element provides no information about the state of the other, and the mutual information is zero. Unlike correlation, which is a linear measure of association between variables, mutual information captures linear and nonlinear relationships. Importantly, mutual information does not describe causal effects or directed dependencies between variables.

A multivariate extension of mutual information, the integration of the system, measures the total amount of statistical dependence among an arbitrarily large number of elements (Tononi et al., 1994). Integration is mathematically defined as the difference between the sum of the entropies of the individual units and their joint entropy. Like mutual information, integration always takes on positive values or is equal to zero. Zero integration is obtained for a system whose elements behave independently. In such a system, knowledge of the state of any of its elements provides no information about the states of any of the other elements, and the joint entropy of the system is therefore exactly equal to the sum of the individual entropies. If there is any degree of statistical dependence between any of the elements, then the joint entropy of the system will be smaller than the sum of all individual entropies, resulting in a positive value for integration.

This formalism for integration signals why we are interested in applying it to functional brain networks. Dynamic coupling is usually defined as a statistical dependence (linear or nonlinear), and a measure of integration should be able to quantify such dependencies between arbitrary numbers of neural units. Furthermore, integration seems well suited to serve as a building block for assessing the balance between segregation (statistical independence) and integration (statistical dependence). The modular and hierarchical nature of brain networks requires a formalism that is sensitive to segregation and integration at multiple scales. To that end, we consider the integration of subsets of elements of a given system across all scales, ranging from subsets of sizes 2, 3, and so on up to the size of the full system. Statistical dependencies that reside at one or several spatial scales can thus be captured in a single measure, which we called neural complexity (Tononi et al., 1994; 1998). The hierarchical nature of neural complexity is inherently well suited for a system such as the brain, which exhibits modularity at several different levels. Neural complexity captures the amount of structure or organization present within the system across all spatial scales. It takes on low values for systems whose elements behave independently from one another. These systems are characterized by high segregation (each element is informationally encapsulated) but very low integration (absence of dynamic coupling). Neural complexity also takes on low values for systems whose elements are fully coupled. These systems contain very little segregation (all elements are behaving identically) but are highly integrated because of strong global coupling. Only systems that combine segregation and integration generate high complexity.

A closer analysis reveals that the measure can be identically formulated in terms of the distribution of mutual information across all bipartitions of the system, where "bipartitions" refers to a way of dividing the system into two complementary parts. Expressed in these terms, the neural complexity of a system is high when, on average, the mutual information between any subset of the system and its complement is high. High mutual information between many possible subsets of a system indicates a diverse set of statistical dependencies between the different portions of an integrated system. Thus, complexity emerges when rich and dynamic contextual influences prevail, and complexity is low when such influences are either completely absent (as in systems that engage in random activity) or completely homogeneous (as in systems that are highly regular).

Extensions of neural complexity that take into account the external inputs and outputs of a system have been proposed (Tononi et al., 1996; 1999). To capture the effects of inputs on dynamics, we considered that one of the effects of an external perturbation consists of changing the pattern of intrinsic correlations. Stimuli that are discordant with the intrinsic dynamic organization of the system will have little effect, because they do not "integrate well" with the system's spontaneous activity. Other stimuli may enhance a distinct set of statistical relationships within the system. In the former case, the intrinsic complexity of the system, as defined by its internal statistical structure, remains unchanged, while in the latter case, it is selectively increased. A statistical measure called matching complexity (Tononi et al., 1996) quantifies the effect of a stimulus on the distribution of segregated and integrated information in a complex network. The measure explicitly evaluates the informational gain resulting from a sensory perturbation of an endogenously active network. Network outputs are considered in the context of degeneracy, a key ingredient of network robustness.” –O. Sporns [27]
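 

For Gaussian variables these quantities have closed forms in terms of a covariance matrix, which makes the passage easy to turn into working code. The sketch below is my construction (the function names and toy covariance matrices are illustrative, not drawn from Sporns): entropy from a covariance, mutual information and integration as entropy differences, and neural complexity as subset-complement mutual information averaged within each subset size and summed across sizes.

```python
# A minimal sketch, assuming Gaussian variables so that entropies follow
# from a covariance matrix (as in the linear systems of Tononi et al., 1994).
import itertools
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance matrix cov."""
    n = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** n) * np.linalg.det(cov))

def integration(cov):
    """Sum of individual entropies minus the joint entropy; zero iff independent."""
    individual = sum(gaussian_entropy(cov[np.ix_([i], [i])])
                     for i in range(cov.shape[0]))
    return individual - gaussian_entropy(cov)

def mutual_information(cov, subset):
    """MI between a subset of elements and its complement."""
    comp = [i for i in range(cov.shape[0]) if i not in subset]
    return (gaussian_entropy(cov[np.ix_(subset, subset)])
            + gaussian_entropy(cov[np.ix_(comp, comp)])
            - gaussian_entropy(cov))

def neural_complexity(cov):
    """Average subset-complement MI, summed over subset sizes up to n/2."""
    n = cov.shape[0]
    return sum(np.mean([mutual_information(cov, list(s))
                        for s in itertools.combinations(range(n), k)])
               for k in range(1, n // 2 + 1))

indep = np.eye(4)                              # fully segregated elements
modules = np.array([[1.0, 0.6, 0.2, 0.0],      # two coupled pairs joined
                    [0.6, 1.0, 0.0, 0.0],      # by one weak bridge
                    [0.2, 0.0, 1.0, 0.6],
                    [0.0, 0.0, 0.6, 1.0]])
print(integration(indep), integration(modules))          # ~0 vs positive
print(neural_complexity(indep), neural_complexity(modules))  # ~0 vs > 0
```

For the identity covariance every term vanishes, matching the passage's observation that fully independent elements yield zero integration; the modular-plus-bridge covariance produces positive complexity because segregation and integration coexist.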

 

“A computational approach similar to an evolutionary algorithm allows the systematic exploration of the relationship between structural topology and dynamics…Effectively, the procedure searches for systems that optimally satisfy the cost function within a high-dimensional parameter space… A series of computational studies explored the link between connection topology and neural dynamics for a simple variant of linear systems. Consistently, networks optimized for high entropy, integration, and complexity displayed characteristic network topologies. Only networks that are optimized for high complexity show patterns that resemble those observed in real cortical connection matrices (Sporns et al., 2000a; 2000b; Sporns and Tononi, 2002). Specifically, such networks exhibit an abundance of reciprocal (reentrant) connections and a strong tendency to form modules interlinked by hub nodes. The rise in complexity during network optimization is paralleled by the appearance of high clustering and short path lengths, arranged in a modular small-world architecture (Sporns et al., 2000a; Sporns and Tononi, 2002). The resulting connection topologies can be wired efficiently when the network nodes are embedded in three-dimensional space. Hierarchical modularity and self-similar "fractal" connection patterns also promote high complexity (Sporns, 2006), a result that further supports the idea that hierarchical networks are associated with complex critical dynamics.” –O. Sporns [27]
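 

The optimization studies described above can be caricatured with a greedy loop. The sketch below is a toy version, not the published procedure: it reuses neural_complexity from the previous sketch and assumes the steady-state linear-system relation x = Cx + r, which for unit-variance independent noise gives covariance QQᵀ with Q = (I − C)⁻¹.

```python
# A toy greedy search (my sketch): random single-connection toggles are
# kept whenever they increase neural_complexity (defined above).
import numpy as np

def covariance_from_connections(C):
    """Covariance of the steady state x = Cx + r with unit-variance noise r."""
    Q = np.linalg.inv(np.eye(C.shape[0]) - C)
    return Q @ Q.T

rng = np.random.default_rng(1)
n, w = 8, 0.08                              # nodes; fixed connection weight
C = (rng.random((n, n)) < 0.2) * w          # sparse random directed network
np.fill_diagonal(C, 0.0)

best = neural_complexity(covariance_from_connections(C))
for _ in range(300):
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    trial = C.copy()
    trial[i, j] = 0.0 if trial[i, j] else w  # toggle one connection
    score = neural_complexity(covariance_from_connections(trial))
    if score > best:                         # keep only improvements
        C, best = trial, score
```

Runs like this, per the passage, push connection matrices toward clustered, modular, reciprocally connected topologies, though a toy search at this scale only gestures at that result.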

 

“A science of the brain that does not account for subjective experience and conscious mental states is incomplete. Consciousness, long the domain of philosophers and psychologists, has finally become a legitimate topic of neuroscientific discourse and investigation. The search for "neural correlates of consciousness" has delivered a plethora of observations about the neural basis of the phenomenon (Crick and Koch, 1998b; Rees et al., 2002; Tononi and Koch, 2008). We know that certain brain regions, notably the cerebral cortex, are more important for consciousness than others and that the presence of neural activity alone is insufficient to create it since we lose consciousness every time we sleep. Yet, no amount of empirical data alone can answer fundamental questions about why and how certain physical processes occurring in neural tissue can generate subjective experience. As Giulio Tononi has argued, empirical studies must be complemented by a theoretical framework (Tononi, 2008).

William James famously referred to consciousness as a continuous process or stream: "Consciousness [...] does not appear to itself chopped up in bits. Such words as 'chain' or 'train' do not describe it fitly as it presents itself in the first instance. It is nothing jointed; it flows. A 'river' or a 'stream' are the metaphors by which it is most naturally described" (James, 1890). The phenomenology of consciousness highlights several of its key properties: the integration of the many facets of subjective experience into a unified mental state, the high level of differentiation of each of these states seemingly drawn from an inexhaustible repertoire of possible mental configurations, and the dynamic flow of highly integrated and differentiable states on a fast time scale. Tononi and Edelman (1998) have argued that these dynamic and integrative aspects of consciousness require a unified neural process, specifically reentrant interactions between distributed regions of the thalamocortical system. The dynamic reciprocal coupling of neural activity provides the neural substrate for rapid and flexible integration, while at the same time maintaining differentiated neural states drawn from a large repertoire. The coexistence of high integration and high differentiation can be formally expressed using measures of statistical information, for example, the measure of neural complexity defined earlier. High complexity in a neural system is attained if the system allows for a large number of differentiable states and at the same time achieves their functional integration by creating statistical dependencies that bind together its various individual components. Dynamically bound neural elements that evolve through a rich state space form a functional cluster or "dynamic core" (Tononi and Edelman, 1998; Edelman and Tononi, 2000). The boundaries of the core define the extent of the neural substrate encompassing a particular conscious experience. Neural elements outside of the core cannot contribute to it as they are not functionally integrated.

An essential aspect of the dynamic core is that it must be able to select, based on its intrinsic interactions, its own causal flow, the series of transitions between states within a large repertoire of possibilities. A core capable of selecting from among only a handful of states does not generate consciousness. The core must possess high complexity, that is, the interactions of its elements must create high amounts of information. As discussed earlier, a major (but not the only) factor promoting high complexity is the arrangement of structural connections that shape the statistics of neural dynamics. However, a single instance of a structural network can transition from high to low complexity and from high to low consciousness, as in the transition from waking to sleep, deep anesthesia, or epilepsy. These transitions can be caused by over- or under-activity of individual brain regions or the actions of neuromodulatory systems.

Giulio Tononi developed an extended theoretical framework for addressing the two main problems of consciousness, dealing with the quantity or level of consciousness expressed in a given system and with its quality or content (Tononi, 2004). The central proposal of the theory is that consciousness corresponds to the capacity of a system to integrate information. This capacity is determined by the coexistence of differentiation (a large number of possible states forming a dynamic repertoire) and integration (accounting for the unity of experience). The capacity for information integration can be measured as the amount of causally effective information that can be integrated across a minimal bipartition, called Phi (Tononi and Sporns, 2003). The value of Phi depends in large part on the topology of the system's structural connectivity. A system that can be divided into two completely separate modules would, as a whole, have zero capacity to integrate information. Conversely, a system with high effective information across all its bipartitions will have high Phi. What kinds of structural connection patterns emerge if networks are optimized for high Phi? Such optimization yields networks composed of a heterogeneous arrangement of structural connections, in which each network element has a unique connectional fingerprint (indicative of functional specialization or segregation) and is highly interactive with all other elements in the network (high functional integration). Tononi's information integration theory predicts that consciousness depends solely on the capacity of a physical system to integrate information and that it is independent of other properties that are often associated with consciousness, such as language, emotion, a sense of self, or immersion in an environment. However, the theory recognizes that in order for neural circuits capable of high Phi to arise, a physical system may have to go through individual development and learn about regularities in its sensory inputs through experience-dependent plasticity and embodiment.

Information integration as captured by Phi relies on a measure of effective information which, unlike mutual information, reflects causal interactions. Causal interactivity can also be estimated from actual neural dynamics, for example, with Granger causality or transfer entropy. Anil Seth has suggested a measure called causal density, which is computed as the fraction of significant causal interactions among all possible ones (Seth, 2005; 2008). The measure can capture both functional segregation and integration since it is sensitive to the level of global coordination within a system (the degree to which its elements can affect each other) as well as its dynamic heterogeneity. Since it considers temporal precedence cues to compute the strengths of causal (directed) influences, causal density can detect interactions that are "smeared over time" and not necessarily instantaneous. The relationship of causal density and network topology is still relatively unexplored. An initial study indicates that high values for causal density may be associated with small-world networks (Shanahan, 2008).

The idea that a "dynamic core" of causally interacting neural elements is associated with consciousness is also reflected in several related theoretical proposals (Shanahan, 2010). For example, Bernard Baars global work space theory (Baars, 2005) posits that consciousness depends on the existence of a central resource (the global work space) that enables the distribution of signals among specialized processors that by themselves are functionally independent from each other and informationally encapsulated (cognitive modules). Mental content is determined by which of these modules gain access to the global work space. Within the global work space, sequences of serially organized integrated states occur and correspond to sequences of conscious mental events. A dynamic approach related to global work space theory has pointed to potential neural substrates, for example, a "neuronal global work space" where sensory stimuli can trigger global and large-scale patterns of integrated neural activity (Dehaene et al, 2003; 2006). A sensory stimulus gains access to consciousness if it succeeds in activating a set of central work space neurons, thought to be preferentially localized to the prefrontal and cingulate cortex. The strength of a sensory stimulus, as well as the presence of "top-down" attentional modulation, contributes to its conscious perception.” –O. Sporns [27]

 

“Consciousness emerges from complex brain networks as the outcome of a special kind of neural dynamics. Whether consciousness is an adaptation and has been selected for during evolution remains an open question, particularly when we consider this issue in the context of the biological evolution of brain networks. It is possible, then, that consciousness arose as a result of evolving patterns of neural connections that were shaped by competing needs for economy in design, for efficiency in neural processing, and for diverse and complex neural dynamics. Consciousness, as we currently find it in the natural world, requires a physical substrate (a network) to generate and integrate information, but it may not depend on the specific biological substrate of the brain. Can consciousness be created artificially or at least emulated in systems that use neither neurons nor synapses? Is it possible to create machine consciousness, perhaps capable of reaching levels that cannot be attained by biological organisms? If consciousness does indeed emerge as a collective property of a complex network, then these questions must be answered affirmatively. Machine consciousness may be within our reach (Koch and Tononi, 2008).” –O. Sporns [27]

 

“Does complexity itself evolve? Does evolution drive organisms and their nervous systems toward greater complexity? Does a progressive increase in complexity, should it actually occur, signify purpose and necessity behind the evolutionary process? These are charged questions that have been answered in different ways by different authors, and not always entirely on the basis of empirical facts. There is little doubt that the complexity of living forms, their morphology and behavior, has on average increased over time, but is this increase the manifestation of purpose and direction in evolution or the result of an accidental history? An eloquent proponent of the latter view, Stephen Jay Gould, attributed the observed trend toward an increase in biological complexity to the existence of a lower limit, below which viable organisms cannot exist, combined with an increase in variation (Gould, 1996). According to Gould, the combination of these two factors makes systems diverge away from the lower limit, thus leading to an average increase in complexity. Others have taken the opposite view, attributing observed trends toward greater complexity to increased adaptation and natural selection (e.g., Bonner, 1988; Dawkins, 1996; Adami, 2002).

Even when leaving aside the teleological aspects of the questions posed above, we are still left with the difficult problem of explaining how something as complex as the mammalian or human brain evolved from the much simpler nervous systems of creatures alive hundreds of millions of years ago. I tried to shed light on the evolution of complex brain networks, and I concluded that not all properties of such networks are adaptations but that some architectural features likely have simpler explanations such as physical growth processes and allometric scaling. The answer to the question of how complexity has evolved may not be revealed entirely by the neural substrate itself but may also depend on the interactions between organism and environment. I surveyed the many sources of diverse and variable neural dynamics and discussed the potential benefits of this dynamic diversity for creating a large repertoire of internal states and a rich capacity to react to external perturbations. Hence, the dynamic diversity of nervous systems makes a fundamental contribution to the organism's adaptive success. The observed trend toward an increase in the complexity of a nervous system, expressed in its structural and functional connectivity, may partly be the result of an increase in the complexity of the organism's environment, which is composed of a mixture of statistical regularities and randomness. Neural complexity confers an adaptive advantage because it enables a greater range of response and a greater capacity for generating and integrating information about the external world as accessed through the senses.

The link between brain and environment becomes even more intricate when one considers that the environment of an organism cannot be objectively defined in terms of physical properties alone. In the words of Richard Lewontin, "The organism and the environment are not actually separately determined. [...] The environment is not an autonomous process but a reflection of the biology of the species" (Lewontin, 1983). The biological form, its morphology and behavior, creates its own environment by virtue of its complement of sensors and effectors and by actively shaping the statistics of its sensory input. Abstract models of brain/environment interactions support the idea that neural complexity reaches higher values when the statistical structure of environmental stimuli contains a mixture of order and disorder, that is, high complexity. The informational gain produced within a complex network resulting from a sensory input can be quantified with matching complexity, a measure of the transient gain in network complexity due to a perturbation by a stimulus (Tononi et al., 1996). Optimizing matching complexity in simple networks strongly favors increased complexity of spontaneous neural activity (Sporns et al., 2000a, 2000b). Repeated encounters with structured inputs reorganize intrinsic connections in a way that endogenously recapitulates salient stimulus features. More complex stimulation thus naturally leads to more complex intrinsic dynamics…These results are consistent with the idea that increases in neural complexity may be driven by a greater repertoire of behavioral actions and sensory stimuli. The rudimentary nature of behavior and environment in these simple models raises questions about the generality of these conclusions. A more complete exploration of the origin of neural complexity requires the use of sophisticated software platforms capable of simulating more realistic forms of artificial evolution.” –O. Sporns [27]
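 

Given the neural_complexity function sketched earlier, the description of matching complexity above suggests a crude one-line reading (my simplification; the published measure of Tononi et al., 1996 is defined through additional terms comparing intrinsic and stimulus-driven statistical structure): the gain in complexity when a stimulus reshapes the network's covariance.

```python
# A crude reading of "transient gain in network complexity due to a
# perturbation by a stimulus"; reuses neural_complexity from the earlier
# sketch. Positive when the stimulus "integrates well" with ongoing
# dynamics, near zero when it is discordant.
def matching_gain(cov_spontaneous, cov_evoked):
    return neural_complexity(cov_evoked) - neural_complexity(cov_spontaneous)
```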

 

“Direct optimization for complexity generated organisms whose complexity far exceeded both driven and passive conditions, but their behavior evolved in a direction that would be maladaptive if natural selection were to prevail. These simulation results obtained within the computational ecology of “Polyworld” suggest that neural complexity will emerge in the course of natural selection if it is of evolutionary advantage, but it is not optimized in any simple-minded sense of the word. Instead, once the neural complexity of a population of agents is sufficient to support their continued survival, it remains stable until further evolutionary change takes place. Further increases in complexity then depend on increases in the complexity of the environment, resulting in an expansion of the world's ecospace.” –O. Sporns [27]

 

“Cognition is generally thought to involve neural activity and its continual propagation and transformation within the brain – patterns of neural activity causing other patterns of neural activity through networked interactions that underlie information processing. However, neural patterns can cause other neural patterns also by way of bodily actions and movements, for example, those that select and structure sensory inputs. Hence, functional brain networks are powerfully reconfigured as a result of sensory events in the real world that are the outcome of brain activity manifested as environmental change. The networks of the brain extend outwards, to the sensors and effectors of the body and into the physical world.” –O. Sporns [27]

 

“When we think of brain networks, we think of neurons that are connected to other neurons and of the patterned flow of neural activity among cell populations and brain regions that underlies neural information processing. However, structural connections are not the only means by which neurons can causally affect the activity of other neurons. Another way in which neural states can cause other neural states is through the environment, as a result of bodily movement that causes changes in sensory inputs. Historically, this point formed a key rationale for the cybernetic approach to brain function. Norbert Wiener noted that cybernetics must take into account the "circular processes, emerging from the nervous system into the muscles, and re-entering the nervous system through the sense organs" (Wiener, 1948) and thus cannot view the brain as "self-contained." W. Ross Ashby emphasized that organism and environment must be treated as a single system, and that "the dividing line […] becomes partly conceptual, and to that extent arbitrary" (Ashby, 1960). Humberto Maturana and Francisco Varela extended the idea in a different direction, describing the brain as a "closed system in which neuronal activity always leads to neuronal activity," either through a network of interacting neurons or through linkages between sensors and effectors that extend through the environment (Maturana and Varela, 1980). By acting on the environment, the brain generates perturbations that lead to new inputs and transitions between network states. Environmental interactions thus further expand the available repertoire of functional brain networks…Nervous systems function, develop, and evolve while connected to the body's sensors and effectors. Sensors relay information about exogenous signals that perturb network states of the brain, and network activity triggers effectors, resulting in bodily movement and the repositioning of sensors. Hence, the body forms a dynamic interface between brain and environment, enabling neural activity to generate actions that in turn lead to new sensory inputs. As a result of this interaction, patterns of functional connectivity in the brain are shaped not only by internal dynamics and processing but also by sensorimotor activity that occurs as a result of brain-body-environment interactions.” –O. Sporns [27]

 

“Simon's "ant on the beach" and Braitenberg's synthetic autonomous agent "vehicles" illustrate the inseparability of brain, body, and environment. Complex behavior is the result of their interaction, not the end product or readout of centralized control. The coupling between brain, body, and environment has become a cornerstone of the theoretical framework of "embodied cognition." The rise of embodied cognition acts as a counterweight to more traditional approaches to artificial intelligence (AI), first developed in the 1950s, that emphasized symbolic representations and computation. According to embodied cognition, cognitive function is not based on symbolic computation but rather is shaped by the structure of our bodies, the morphology of muscles and bones, hands and arms, eyes and brains. Most theories of embodied cognition incorporate the notion that coherent, coordinated, or intelligent behavior results from the dynamic interactions between brain, body, and environment. Cognition does not occur all in the head – instead it stretches beyond the boundaries of the nervous system. Andy Clark has made a compelling argument that the minds of highly evolved cognitive agents extend into their environments and include tools, symbols, and other artifacts that serve as external substrates for representing, structuring, and performing mental operations (Clark, 2008). If this view of cognition as extending into body and world is correct, then cognition is not "brain bound" but depends on a web of interactions involving both neural and nonneural elements. The networks of the brain fundamentally build on this extended web that binds together perception and action and that grounds internal neural states in the external physical world.” –O. Sporns [27]

 

“The failure of traditional AI to solve unconstrained real-world problems spurred the development of new approaches to robotics that explicitly addressed interactions between a robot, its control architecture, and a dynamic environment. Turning away from the prevailing paradigm of centralized control, Rodney Brooks argued that "coherent intelligence can emerge from independent subprocesses interacting in the world" (Brooks, 1991). Hence, the design of intelligent systems requires working with "complete agents," fully embodied systems that are autonomous in their actions and are situated and embedded in an environment. Brooks envisioned a modular rather than serial organization for the internal control architecture, in which each module has access to sensory input and motor output, and where coordinated behavior emerges from the interaction of these modules mediated by both brain and body, situated in the real world. Variations of decentralized control have been successfully implemented in robot models of various types of movement and locomotion (walking, running, crawling, etc.), manipulation of objects, and recognition and categorization, as well as models of imitation and social interaction. Many of the robot models employed in this work were directly inspired by specific biological systems, for example, cricket phonotaxis, landmark-based homing behavior of ants and bees, insect walking, and amphibious movements of the salamander. Other models attempted to emulate complex cognitive abilities. One such model involved the construction of a humanoid robot equipped with sensors and effectors for real-world sensorimotor activity, and a modular control system for vision and sound, balance and posture, recognition and motor control. What all these models and robots have in common is that they act autonomously in the real world. Building such systems is extraordinarily revealing about the relations between neural control and bodily action, the role of material properties and movements of sensors in delivering useful information, and the dependency of cognitive processes on sensorimotor interactions.

Rolf Pfeifer and colleagues formulated a set of principles that underlie the operation of complete embodied agents (e.g., Pfeifer and Bongard, 2007). All such agents share a number of properties. They are subject to physical laws that govern the function of their control architectures as well as their bodies. They act on their environment and, through their actions, generate sensory inputs. Their brains and bodies form a single dynamical system with attractor states that are configured partly through interactions with the environment. Finally, their body joins in the task of information processing by performing functions that otherwise would have to be performed by the brain. Pfeifer refers to this last property as "morphological computation." Consider locomotion or walking. A robot built according to traditional AI applies complex control algorithms to maintain posture and stability. As a result, its movements appear sluggish, stiff, and unbiological, and its algorithms are slow to adapt to changes in terrain, surface properties, physical load, or energy supply. In contrast, animals exploit not only neural control but also the physical and material properties of their bodies to achieve stable and adaptive motion. The compliant "hardware" of arms and legs, their sensor-rich muscles, tendons, and joints, participate in the dynamics of movement and promote stability and flexibility. This aspect of morphological computation can also be exploited by mechanical agents or robots that incorporate elastic joints, flexible materials, and a control architecture that models body and brain as an integrated dynamical system. To achieve flexible control, such a system naturally exploits the processing capacities of brain networks (and thus of brain morphology) as well as the material properties of the body and its coupling to the physical world.

As Rolf Pfeifer has argued, intelligence is not only a function of neural processing or, more generally, of a set of clever control algorithms. Rather, intelligence is distributed throughout brain and body. This view has important consequences not only for the efficient design of intelligent machines but also for biological questions such as the evolutionary origin of intelligence. Intelligence depends not only on the architecture of the brain, but on the architecture of brain and body; brain and body evolve together. Embodiment greatly expands the space of possibilities by which evolution can achieve an increased capacity of organisms to process information, by partly offloading computation to the morphology and material properties of the organism's body. Recall that morphological considerations, not of the body but of the brain itself, were a major focus of an earlier chapter. It was noted that the three-dimensional structure of the brain and the spatiotemporal continuity of physical processes occurring during development, from axonal outgrowth to tension-based folding of the brain's surface, play important roles in shaping the organization of structural brain networks. Here, I merely extend this idea to include the rest of the body and its behavior. Evolutionary changes to the development and morphology of an organism's body, for example, the placement or capabilities of its sensory surfaces, the articulation or muscular control of its motor appendages, or its weight or size, necessitate concomitant changes in the nervous system.

Not only is the physical structure of the brain inseparable from that of the body and its sensorimotor repertoire, but its dynamics and functional networks are also continually modulated by interactions with the environment.” –O. Sporns [27]

 

“I have argued for the considerable power of applying network science and network thinking to neural systems. From the dynamics of social groups to the behavior of single cognitive agents, from the structural and functional connectivity of their neural systems to the morphology and metabolism of individual neurons, and the interactions of their component biomolecules – to modify a popular phrase, it's networks all the way down. Mapping these networks, their extensive sets of elements and interactions, and recording their complex and multiscale dynamics are key steps toward a more complete understanding of how the brain functions as an integrated system, steps toward network neuroscience.” –O. Sporns [27]

 

References and Endnotes:

 

[5] “A Brief Tour of Human Consciousness,” V.S. Ramachandran, Pi Press, 2004.

“The Quest for Consciousness: A Neurobiological Approach,” C. Koch, Roberts & Co., 2004.

“In Search of Memory: The Emergence of a New Science of Mind,” E.R. Kandel, W.W. Norton & Co., 2007.

“A Universe of Consciousness: How Matter Becomes Imagination,” G.M. Edelman and G. Tononi, Perseus Books, 2000.

“Synaptic Self: How Our Brains Become Who We Are,” J. LeDoux, Penguin, 2003.

“From Axons to Identity: Neurological Explorations of the Nature of the Self,” T.E. Feinberg, Norton & Co., 2009.

 

[21] “Temporal binding and the neural correlates of sensory awareness,” A.K. Engel and W. Singer, Trends in Cognitive Sciences, vol. 5, 2001.

 

[22] “Connectivity and complexity: the relationship between neuroanatomy and brain dynamics,” O. Sporns, G. Tononi, G.M. Edelman, Neural Networks, vol. 13, 2000.

 

[23] “A Theory of Cortical Responses,” K. Friston, Phil. Trans. R. Soc. B, vol. 360, 2005.

 

[24] “Hierarchical Models in the Brain,” K. Friston, PLoS Computational Biology, vol. 4, 2008.

 

[25] “Towards a Mathematical Theory of Cortical Micro-circuits,” D. George and J. Hawkins, PLoS Computational Biology, vol. 5, 2009.

 

[26] “Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines – and How It Will Change Our Lives,” Miguel Nicolelis, Henry Holt & Co., 2011.

 

[27] “Networks of the Brain,” Olaf Sporns, MIT Press, 2011.
