EidolonSpeak.com ~ Artificial Intelligence



The State of AI, Part 3: Brain Simulations and Neuromorphic Engineering


 

The state of artificial intelligence, cognitive systems and consumer AI. Part III

 

Author: Susanne Lomatch

 

There have been many projects over the years seeking to simulate the neural structure of the biological brain (or, to a lesser extent, a simpler biological nervous system or neural network), either on a traditional computing platform or implemented in some form (analog, digital, or mixed analog/digital) in VLSI circuitry with hardware and/or software interfaces. “Computational neuroscience” generally applies to the former, and “neuromorphic engineering” to the latter. Brain simulation and modeling efforts can be broken down into several levels: molecular/cellular scale, neural circuit/network scale, application-specific/system scale, generic algorithmic scale, and, at the highest level of abstraction, a theoretical scale – see Fig. 1.

(Note to the reader: well-written basic reviews of the human brain/neocortex and of general neuroscience can be found in references [10,15,16] and the references cited therein.)

 

Links to specific reviews in Part 3, located in this document:

      Blue Brain and DARPA SyNAPSE

      Numenta Hierarchical Temporal Memory (and other algorithmic or higher-scale approaches)

      References and Endnotes

 

(Disclaimer: The reviews in this article are meant to inform and entertain, and contain a healthy dose of critical appraisal. I encourage readers who find factual errors, or who have alternative intelligent appraisals and opinions, to contact me (contact link below). I will include any substantial feedback on the dialogue site area dedicated to AI (Click HERE for link to the AI dialogue area). I also welcome constructive and friendly comments, suggestions and dialogue.)

 

Blue Brain and DARPA SyNAPSE

 

Classification: Brain Simulation

 

The Blue Brain project is a relatively recent (2005-present) effort by the Swiss EPFL to simulate the neurobiological structure of the mammalian brain down to the molecular level on a supercomputer, using a neuron/neural network simulator that implements a biologically realistic model for neurons. (IBM supplied a Blue Gene flavor of supercomputer, hence the inclusion of “Blue” in the title.) Blue Brain is unique in that it seeks to simulate the brain architecture at the molecular/cellular level, noting the precise 3D location of all synapses, channels, dendritic branches and various types of neurons. The project’s first accomplishment in 2006 was the simulation of the rat cortical column: “this neuronal network, the size of a pinhead, recurs repeatedly in the cortex; a rat’s brain has about 100,000 columns, with on the order of 10,000 neurons/10^8 synapses each.” (This equates to some 10^9 neurons/10^13 synapses.) The Blue Brain (recently renamed “Human Brain”) project’s goal is to move the simulation toward the level of a human cortex, with roughly 2 million columns, 100,000 neurons each (~10^11 neurons/10^15 synapses) [1].
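
To make the quoted scales concrete, here is a minimal back-of-the-envelope check in Python. It is only a sketch: the per-column and column-count figures are the rounded values quoted above, not precise anatomical data.

```python
# Back-of-the-envelope scale check using the rounded figures quoted above.
NEURONS_PER_RAT_COLUMN = 10_000        # ~10^4 neurons per rat cortical column
SYNAPSES_PER_RAT_COLUMN = 10**8        # ~10^8 synapses per column
RAT_COLUMNS = 100_000                  # ~10^5 columns in a rat brain

rat_neurons = RAT_COLUMNS * NEURONS_PER_RAT_COLUMN    # ~10^9 neurons
rat_synapses = RAT_COLUMNS * SYNAPSES_PER_RAT_COLUMN  # ~10^13 synapses

HUMAN_COLUMNS = 2_000_000              # ~2 x 10^6 columns in a human cortex
NEURONS_PER_HUMAN_COLUMN = 100_000     # ~10^5 neurons per column

human_neurons = HUMAN_COLUMNS * NEURONS_PER_HUMAN_COLUMN  # ~2 x 10^11 neurons

print(f"rat:   {rat_neurons:.0e} neurons, {rat_synapses:.0e} synapses")
print(f"human: {human_neurons:.0e} neurons")
```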

 

 

IBM Almaden, sponsored in part by the DARPA SyNAPSE project (see below), has developed a massively parallel cortical simulator (“C2”) on a Blue Gene supercomputer at the scale of >10^9 neurons/10^13 synapses, using neuron circuit models with both experimentally measured gray-matter thalamocortical connectivity and probabilistic connectivity, “exceeding the scale of a cat cortex [2].” This simulator follows other efforts to study large-scale “spiking neural network (SNN)” models, a paradigm for the neural dynamics in the cerebral cortex. These efforts increase the level of realism in a neural circuit simulation by incorporating timing, neuronal membrane potential thresholds and the encoding/decoding of firing spike trains. (It should be noted that the C2 cortical simulator is significantly more simplified than the EPFL/Human Brain simulation efforts, see Fig. 1 and “Cat Fight Brews Over Cat Brain.”)
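
To illustrate what “spiking” adds over a simple rate-based unit – timing, a membrane potential, a threshold, and a spike train – here is a minimal leaky integrate-and-fire neuron in Python. This is a generic textbook sketch, not the actual C2 neuron model; all parameter values are illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a generic textbook sketch of
# the spiking-neuron idea, NOT the C2 simulator's neuron model.
dt, tau = 1e-3, 20e-3                 # time step, membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3   # potentials (V)
r_m = 1e7                             # membrane resistance (ohms), illustrative

v = v_rest
spike_times = []
rng = np.random.default_rng(0)
for step in range(1000):              # 1 s of simulated time
    i_in = rng.uniform(0.0, 3e-9)     # random input current (A)
    dv = (-(v - v_rest) + r_m * i_in) / tau   # leaky integration
    v += dv * dt
    if v >= v_thresh:                 # threshold crossing -> emit a spike
        spike_times.append(step * dt)
        v = v_reset                   # reset membrane potential after firing

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```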

 

The IBM team has since gone on to map a network diagram of the Macaque monkey brain, gaining “unprecedented insight into how information travels and is processed across the brain” and the identification of a “tightly integrated core that might be at the heart of higher cognition and even consciousness…and may be a key to the age-old question of how the mind arises from the brain [3].”

 

(Reportedly [4], the Allen Institute for Brain Science is leading a significant effort to model and map the human brain, specifically a “high through-put, large scale cortical coding project” that may eclipse the efforts of EPFL and IBM. Readers may find the Allen human brain atlas to be a useful tool for visualization and data mining. Christof Koch’s comments at the end of [4] are quite sobering in terms of what might realistically be accomplished for precision, real-time imaging of the human neural architecture at the individual-neuron level. The Human Connectome project is a dual stepping stone toward these efforts, seeking to map the connectome starting at the functional-regional level while concurrently developing novel imaging techniques that may provide increased spatiotemporal dynamic resolution (likely the more effective imaging techniques will be invasive). The Open Connectome project already provides open data sources from research efforts that have mapped partial high-spatial-resolution connectomes of the primary visual cortex of the mouse, the complete connectome of C. elegans, and some partial maps of human functional regions.)

 

The DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program aims to “develop electronic neuromorphic machine technology that scales to biological levels…coordinating aggressive technology development activities in the following areas: hardware, architecture, simulation, and environment.” In its initial phases, IBM was selected as a lead contractor to perform simulations and mapping studies and to develop neurosynaptic computing chips. In August 2011, IBM reported the performance of two prototype designs that include an all-digital “neurosynaptic core” of integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons). The first prototype contained 256 neurons/262,144 programmable synapses and the second 256 neurons/65,536 learning synapses, with a claim of successfully demonstrating neural-net applications with navigation, machine vision, pattern recognition, associative memory and classification capability. Processing speed of the chips is around 10 Hz [5,6].

 

Though the DARPA program sought to demonstrate the first all-digital neuromorphic-neurosynaptic chips, these chips follow earlier notable analog and mixed analog/digital implementations in the U.S. and Japan. The Stanford Neurogrid project produced a hardware board containing sixteen “neurocores,” each with 256x256 silicon neurons on a ~0.5” square chip; an off-chip RAM (at the tree's root) and an on-chip RAM (in each neurocore) softwire horizontal and vertical cortical connections, respectively. The board simulates one million neurons and six billion synaptic connections in real time, consuming less than 2W of power [7] – “making IBM’s Blue Gene performance affordable on a Dell-cluster budget.”

 

The prominent differences between the Neurogrid neurocore chip and the SyNAPSE neurosynaptic core chip are evolutionary: first, going from a mixed analog/digital implementation to an all-digital one that avoids the use of a bulky capacitor to implement neuronal voltage profiles, and second, an embedded crossbar array, which allows for synaptic fanout without resorting to off-chip memory that can create an I–O bottleneck. By bypassing this critical bottleneck, the IBM team claims “it is now possible to build a large on-chip network of neurosynaptic cores, creating an ultra-low power neural fabric that can support a wide array of real-time applications that are one-to-one with software [5].” (Actually, it turns out that a few of the people who developed Neurogrid at Stanford now work for IBM and are lead investigators of one of the SyNAPSE chip implementations.)

 

The IBM-led SyNAPSE team has been awarded Phase 2 of the project, with the goal to “create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage.” From other sources: “the follow-on phases of the project will create a technology that functions like the brain of a cat, which comprises 10^8 neurons and 10^12 synapses.” A rough-scale graphic of the program goal of SyNAPSE can be found HERE. SyNAPSE also aims to develop adjunct technologies that might be integrated into the final program demo, namely high-neuronal-density laminar circuits that mimic layered cortical sheets, high-speed busses, and high-integration-density memristors as synaptic elements (“MoNETA”).

 

Let me inject at this point that though the accomplishments of Blue Brain and SyNAPSE are laudable, I think the goals are ill-conceived: I seriously doubt that human-scale brain intelligence and cognitive behavior (whole human brain simulations) can be achieved either through cellular- or network-level simulations on von Neumann machines (conventional supercomputers), or through direct hardware implementations in silicon/CMOS or conventional semiconducting technology. Some simple scaling arguments support this assertion.

 

A. Scaling problems with respect to simulations on von Neumann-type supercomputers

The biological human brain performs at an enormous efficiency, an estimated 500-50,000 x 10^6 MegaFLOPS/Watt @ 20 Watts total consumption in ~2 Liters of space (this equates to an equivalent computing power of 10-1000 PetaFLOPS*). By contrast, IBM’s Watson performs its specialized expert task of beating human Jeopardy! competitors at a huge deficit: some 400 MegaFLOPS/Watt @ 200 KiloWatts consumption in an air-conditioned room (80 TeraFLOPS). The world’s currently fastest supercomputer, the Fujitsu K (Kei), operates at ~825 MegaFLOPS/Watt @ ~10 MegaWatts of power (~10,000 suburban homes), requires a specialized water-cooling system to minimize failure rate and power consumption, and likely fills a larger room than Watson. Though its current computing speed of ~10.5 PetaFLOPS approaches the estimated equivalent computing power of the human brain, this does not mean that a realistic simulation of the human brain, including some 10^11 neurons (massive parallelism) and 10^15 synapses (massive connectivity), can be accomplished on it, especially not in real time. If I assume that a scale approaching a realistic simulation runs a highly optimistic ~2-3 orders of magnitude slower than real time, then a 10+ ExaFLOPS supercomputer is required. Obviously, computational details (temporal/spatial) of the particular simulation are necessary for exact scaling, but none of the simulations to date approach anywhere near the level of complexity and run-time efficiency that would be required.
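
The comparison reduces to a few lines of arithmetic, shown below as a sketch using the rounded figures quoted above (the brain-equivalent FLOPS range is this article's own rough assumption, discussed in the endnote that follows):

```python
# Rough efficiency comparison using the rounded figures quoted in the text.
brain_flops = (10e15, 1000e15)     # 10-1000 PetaFLOPS brain-equivalent
brain_watts = 20
brain_eff = (brain_flops[0] / brain_watts, brain_flops[1] / brain_watts)
# -> 5e14 to 5e16 FLOPS/W, i.e. 500-50,000 x 10^6 MegaFLOPS/W

watson_eff = 80e12 / 200e3         # ~4e8 FLOPS/W = 400 MegaFLOPS/W
kei_eff = 10.5e15 / 10e6           # ~1e9 FLOPS/W, roughly the ~825 MFLOPS/W quoted

# If a realistic whole-brain simulation runs ~10^2-10^3 slower than real time,
# the machine needs 10^2-10^3 times the ~10 PetaFLOPS brain-equivalent floor:
required = 10e15 * 1e3             # ~1e19 FLOPS = 10 ExaFLOPS
print(f"required: {required:.0e} FLOPS")
```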

 

(*This is estimated based on the simplistic assumption that one neural spike represents 10^6 floating-point operations per second (MegaFLOPS) of throughput, aggregating millions of “computations” that are made at the molecular/cellular level, and that the distributed parallel architecture of at least 10^11 neurons and 10^15 synapses leads to a combined throughput of 10-1000 PetaFLOPS. In reality, an actual neural spike is not equivalent to the floating-point arithmetic in conventional computing architectures. I will discuss elsewhere in another paper a better metric for describing throughput or capacity, in particular, capacity based on the number of selective states available for any given process, where “states” are defined according to neuronal groups participating in a dynamic process. There are numerous other metrics that one can define that are more realistic than the conventional FLOPS assumption. For now, this assumption was convenient to make a rough in-kind comparison.)
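
The implicit arithmetic behind the 10-1000 PetaFLOPS range appears to be the following reconstruction. The per-spike figure is the endnote's assumption; the 0.1-10 Hz mean firing-rate range is my own illustrative assumption, chosen because it reproduces the quoted bounds:

```python
# Reconstruction of the 10-1000 PetaFLOPS estimate. flops_per_spike is the
# text's assumption; the 0.1-10 Hz mean firing-rate range is an illustrative
# assumption of the editor, not stated in the text.
neurons = 10**11
flops_per_spike = 10**6
low = neurons * 0.1 * flops_per_spike    # 1e16 FLOPS = 10 PetaFLOPS
high = neurons * 10 * flops_per_spike    # 1e18 FLOPS = 1000 PetaFLOPS
print(f"{low:.0e} to {high:.0e} FLOPS")
```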

 

B. Scaling problems with respect to direct hardware implementations

The issues with CMOS VLSI implementations of high-density neuromorphic systems are the long-known limitations due to power dissipation and integration density. Though limited systems can be fabricated and achieve a certain level of intelligent task performance, something approaching that of even a rat or cat cortex is a stretch.

 

Let me frame some more scaling arguments to support this. The recent neuromorphic all-digital chips from the IBM group described above were fabricated with a 45nm SOI-CMOS process, producing 2.56 x 10^2 neurons/2.62 x 10^5 synapses in a 4.2mm^2 footprint, consuming 45 pJ/spike in active power. Hardware simulating rat-cortex density/connectivity would require at least a 10^3 scale-up per footprint, and 100,000 such footprints (one per cortical column, the basic unit described above). This would imply a contiguously active area of 4.2 x 10^5 mm^2 (~4.52 ft^2), assuming such 10^3 core scaling per original footprint of 4.2 mm^2 is possible.
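
The footprint arithmetic checks out as follows (a quick sketch using the figures above):

```python
# Footprint scaling for a rat-cortex-density chip, using the figures above.
footprint_mm2 = 4.2           # one neurosynaptic-core footprint (mm^2)
n_footprints = 100_000        # one per cortical column (rat: ~10^5 columns)

total_mm2 = footprint_mm2 * n_footprints   # 4.2e5 mm^2
total_m2 = total_mm2 * 1e-6                # 0.42 m^2
total_ft2 = total_m2 * 10.7639             # ~4.52 ft^2
print(f"{total_mm2:.1e} mm^2 = {total_ft2:.2f} ft^2")
```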

 

My objection here is multi-fold. First, the scaling assumes that the process can be shrunk without much penalty by at least a factor of 30, to a 1.5nm feature size, increasing the active device density by the ~10^3 quoted. The technological ability to achieve this is unsubstantiated in SOI-CMOS, since as feature size is reduced to 5nm and below, direct quantum mechanical tunneling from the source to the drain across the transistor gate degrades transistor performance; device engineering would have to work around this problem. Power requirements also become an issue, since the energy efficiency of the devices does not scale with integration capacity. Studies have shown that regardless of chip organization and topology, multicore scaling is power limited: at 22nm, 21% of a fixed-size chip must be powered off, and at 8nm, designers are forced toward 50% “dark silicon [8].” Second, as wire width is reduced to 1.5nm, quantum effects that increase resistivity may come into play (though recent studies remarkably show that single atoms of phosphorus on silicon assembled at the 1.5nm feature size exhibit no appreciable increase in resistivity, this may be highly material dependent and not applicable to the actual device-engineered materials used in a scaled-down SOI-CMOS process). An increase in resistivity translates into poorer heat dissipation and a severe impairment of device performance. Third, interconnects in an actual rat cortex are hierarchically dense and complex – how on earth will this be achieved in a 2D or even pseudo-3D integrated circuit architecture requiring the contiguous area of something close to 2ft x 2ft? Biologically, the rat neocortex is the size of a postage stamp, approximately 6 layers (~2mm) thick. Fourth, fault tolerance is an underlying issue with traditional semiconducting VLSI designs, yet not an issue with the biological brain. Electronics designers must go out of their way to add this feature, perhaps increasing device complexity beyond what is practical to implement (the IBM chip implementation does not address fault tolerance).

 

My conclusion: semiconducting electronics are not well suited to implementing the density, connectivity and fault tolerance of even a simple mammalian neocortical architecture. AI neuromorphic designers must start to think about alternative implementation approaches.

 

In the 90s, I was a researcher in the field of superconducting electronics, and wrote a few articles on using such electronics to implement highly dense and interconnected neural networks, such as those that make up the neocortex. Superconducting electronics offers three solutions to the semiconducting electronics problems outlined above. First, resistivity and heat dissipation are not an issue at all: superconductors have zero resistivity, and therefore power dissipation issues do not scale inversely with feature size; the relevant metric is the amount of power required to switch an active device, a Josephson junction, < 10^-18 J/bit. Second, superconducting microstrip lines, the equivalent of thin wires, allow ballistic transfer of picosecond waveforms over arbitrary intrachip distances with negligible attenuation and dispersion, low crosstalk, and speed approaching the speed of light. Third, the basic core of the technology, a rapid single flux quantum (RSFQ) device, is inherently fault tolerant, and displays properties remarkably similar to neural-synaptic circuits: it represents both a storage of memory and a point of processing or switching [9]. However, the problems with superconducting electronics are many: cooling requirements to 4-5 K for standard Nb materials and processes, and a general lack of an evolutionary industrial scaling process due to a dearth of interest and funding.

 

So what next? Neuromorphic engineers: speak up!

 

The underlying problem is that the human brain does not “compute” in any way, shape or form like a computer does, even for the rationalized intelligence features that might map to traditional computing intelligence. The possibility of implementing the human brain in silicon architectures has been sold as a fait accompli, when in fact simple scaling arguments show that a different approach is required, one that addresses integration density, heat dissipation, redundancy, plasticity, etc.

 

Though novel approaches such as memristive nanodevices (see [17] and a review HERE) may provide a solution for synaptic plasticity and possibly integration density, i.e. reconfigurable “cortical” computing, this paradigm suffers from the lack of a viable integrated processing unit, a neuronal equivalent (the neuron arrays for the models shown in [17] are fabricated in conventional CMOS), and heat dissipation and ultra-high integration density are still unresolved issues. This approach also does not address neuronal plasticity, which might allow for a fully reconfigurable network. A very recent demonstration of protein-based memristive nanodevices [18] may motivate a biologically inspired shift for this approach, but we are still left with the question of an integrated encoding and processing unit.
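
As a cartoon of how a memristive device can serve as a plastic synapse – its conductance is its “weight,” nudged by programming pulses – consider the sketch below. This is a generic illustration of my own, not the device model in [17]; the update rate and bounds are arbitrary.

```python
# Cartoon of a memristive synapse: the conductance (the synaptic "weight")
# drifts with signed programming pulses. Generic illustration only, NOT the
# device model in [17]; rate and bounds are arbitrary.
class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply_pulse(self, polarity):
        """polarity=+1 potentiates, -1 depresses (bounded conductance drift)."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.rate))

    def current(self, v):
        return self.g * v   # Ohmic readout at small bias

syn = MemristiveSynapse()
for _ in range(4):
    syn.apply_pulse(+1)     # e.g. repeated pre-before-post pairings (STDP-like)
print(f"conductance after potentiation: {syn.g:.2f}")
```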

 

I argue that the real value of neuromorphic engineering (particularly silicon or semiconductor-based implementations) will be in application-specific roles (as in “ASICs,” or reconfigurable architectures such as FPGAs) for robotics and bioengineered functions such as brain-machine interfaces (BMIs) and neuroprosthetics, where their scale is more appropriately matched in terms of density and power requirements (see the middle layer in Fig. 1). I review BMIs in Part 4 of this series.

 

The prospect of not being able to reach or rival human intelligence and cognition via molecular/cellular or neural circuit/network scale implementations on known computing/hardware technology is a motivator for two alternative approaches: (a) biologically-inspired architectures and (b) formulating a higher-level algorithmic paradigm that may capture the salient features of intelligence and cognition, which when suitably implemented can rival human intelligence/cognition. I turn to the latter endeavor next.

 

Numenta Hierarchical Temporal Memory (and other higher-scale approaches)

 

Classification: Brain Simulation (algorithmic scale)

 

A few years ago, when I first took an interest in AI, I read the book “On Intelligence [10]” by Jeff Hawkins, one of the original co-founders of Palm and Handspring and creator of the PalmPilot and Treo, as well as a leading neurocomputing/neuroscience researcher, the founder of the neurocomputing company Numenta, and a co-founder of the Redwood Center for Theoretical Neuroscience. In the book, Hawkins motivated a human neurobiological shift in the AI challenge to replicate human intelligence, inventing the “memory-prediction framework (MPF).” (For a quick review, click on the previous link to the Wikipedia page for this topic.)

 

As Hawkins describes [10], the human cortex with its massive memory capacity is constantly predicting what a human senses. “These predictions are our thoughts, and, when combined with sensory input, are our perceptions.” In this framework, temporal pattern recognition by the neocortex via auto-associative recall leads to predictive human intelligence: the dense hierarchical interconnected structure of the cortex classifies inputs, learns sequences of patterns, forms a constant pattern or an “invariant representation” for a sequence, and makes specific predictions. An example of how the neocortex makes predictions is that it takes feedforward information from sensory areas (actual inputs) and combines that with feedback information stored in memory (predictions in an invariant form) – Hawkins describes this process in the context of the firing of specific columnar cells in cortical layers, with layer 6 representing the bottom layer, and layer 1 the top (the six known layers of a human cortex). Layer 6 cells receive active inputs, and synaptic connections between layers extending from layer 1-6 intersect, providing the correlate with an invariant representation stored in layer 1, which gets converted to a specific representation (prediction) via hierarchical processing through the layers. The specific prediction can be fed back into layer 1 cells (for easier recall of a learned sequence) or it can get fed back into inter-layer, inter-columnar cells (“folded feedback”) as a predictive input representing daydreaming, thinking, imagining, or planning, as opposed to actual observation or perception through the senses. Hawkins outlines several testable proposals including novel learning mechanisms, and how his learning models map to the cortical-thalamic architecture and subregions (‘microcircuits’) of cortical columns.
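
To make the core loop concrete – learn sequences of patterns, then predict the next input from the current one – here is a toy sequence memory in Python. This is a bare caricature of the memory-prediction idea, not Hawkins’ cortical algorithm; the class and its methods are my own illustrative constructions.

```python
from collections import defaultdict, Counter

# Toy caricature of the memory-prediction loop: store sequences of patterns,
# then predict the most likely next input. NOT Hawkins' cortical algorithm.
class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)   # pattern -> next-pattern counts

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current):
        counts = self.transitions[current]
        return counts.most_common(1)[0][0] if counts else None

m = SequenceMemory()
m.learn(["dog", "barks", "loudly"])
m.learn(["dog", "barks", "again"])
print(m.predict("dog"))   # -> "barks": a prediction recalled from stored sequences
```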

 

Hawkins’ work on MPF was motivated by Vernon Mountcastle’s theory [11] that all functional regions of the neocortex perform the same basic operation, the cortex using a single computational tool to accomplish everything it does. (Mountcastle based this proposition on the fact that the neocortex is remarkably uniform in appearance and structure: regions of cortex that handle auditory input look like the regions that handle touch, which look like the regions that control muscles, which look like Broca's language area, which look like practically every other region of the cortex.)

 

With other collaborators, Hawkins has developed the MPF into a more rigorous mathematical and algorithmic model, “Hierarchical Temporal Memory (HTM),” incorporating Bayesian inference principles – a method of statistical inference used to calculate how the degree of belief in a proposition (i.e., the probability of a prediction) changes due to evidence. Neuroscience researchers studying the neocortical architecture have proposed hierarchical Bayesian dynamic models to explain cortical processing, particularly in the visual cortex. Hawkins’ HTM provides an algorithm for implementing this processing [12]. The direct applications are artificial vision systems and speech recognition, with vision being a challenge that remains unconquered by more traditional AI approaches. A key problem with traditional computer vision systems is their ability to register proper recognition of an image when the image rapidly changes or when the image is only partial in the visual field. The biological brain makes up for these deficiencies by using saccades and vision processing that involves comparisons with invariant representations in associative memory to make predictions about the image.
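
The Bayesian ingredient is, at bottom, the belief-update rule. Here is a minimal worked example (generic Bayes’ rule only; the “face detection” numbers are invented for illustration and are not from Numenta’s implementation):

```python
# Generic Bayes' rule update, the statistical ingredient HTM builds on.
# P(H | E) = P(E | H) * P(H) / P(E). Not Numenta's belief-propagation code.
def bayes_update(prior, likelihood, likelihood_given_not_h):
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: prior belief 0.2 that the input is a face; an edge
# pattern is observed that occurs in 90% of faces but only 30% of non-faces.
posterior = bayes_update(prior=0.2, likelihood=0.9, likelihood_given_not_h=0.3)
print(f"posterior = {posterior:.2f}")   # 0.18 / (0.18 + 0.24) = 0.43
```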

 

 

HTM is useful for what is called “deep machine learning” (or simply deep learning), which employs many layers of processing using time-dependent data inputs, providing more robust detection and possibly more efficient training and learning outcomes. Conventional machine learning is generally shallower, and algorithmically limited in handling complex data inputs (multivariate, time-dependent). DARPA has taken an interest: the DARPA Deep Learning program seeks to exploit advanced machine learning algorithms (such as those based on HTM) for military applications, including applying the same set of algorithms to interpret and recognize patterns in the barrage of data from multiple sensors and inputs (e.g., video, audio, radar, seismic, infrared, text, speech). Numenta’s commercial applications for HTM include credit card fraud detection and predicting web user patterns or the likelihood that a hospital patient will suffer a relapse [13] – in general, many complex data mining applications.

 

Numenta’s latest HTM cortical learning algorithms are described in detail HERE, and Numenta offers free access to legacy software based on prototype HTM algorithms HERE. Numenta has not made available the computational complexity and resource requirements for its latest algorithms; in particular, it would be useful to know how tractable an algorithmic application will be as it scales in complexity (in terms of data inputs, data storage, hierarchical density/connectivity, etc.).

 

Hawkins’ work does not come without constructive criticism. One commenter [13] characterized the application of HTM as “another narrow domain AI…hardly qualifying as the next step in ‘Hawkins stands ready to revolutionize both neuroscience and computing in one stroke,’ as the inside cover of ‘On Intelligence’ claims…I doubt that this does anything to advance the purpose of attaining a self-sufficient generalized AI.”

 

My own view is that the HTM model and algorithms may be useful in explaining how the biological brain learns, but it does not rule out the possibility that there may be other learning modes that follow a different paradigm, in addition to that offered by HTM. Neurobiological experimental data can and should be used to validate these models; Hawkins offered some concrete proposals/predictions at the end of his book [10], which have been largely left unattended. One problem is that neurological imaging techniques are spatiotemporally limited (functional magnetic resonance imaging or fMRI can be used to pinpoint spatial phenomena, but is poor at discerning rapid changes that might correspond to neural firing patterns), and experimenting on live subjects with invasive techniques (which will be likely required for greater spatiotemporal detail) has ethical limits. I review some of the work that has been accomplished using noninvasive and invasive imaging and BMI devices in Part 4.

 

Hawkins’ work does seem to miss one key aspect needed for a generalizable learning paradigm: it focuses on auto-associative memory and recall, or retrieving a piece of data upon presentation of only partial information from that piece of data. The brain may also employ what is called hetero-associative memory and recall, which allows recall of an associated piece of data from one category upon presentation of data from another category. This might occur structurally through overlapping cell assemblies in the brain’s highly distributed and nested structure of functional connections. This is a key insight, as higher-order cognitive capabilities such as language and abstract thought exhibit such “complex” associations. I don’t see much discussion of this aspect, and it is one I intend to focus on going forward, in my own endeavors to provide clarity and understanding.
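
The distinction can be stated in a few lines of toy code (a key-value caricature of my own devising, not a neural implementation):

```python
# Toy contrast between auto- and hetero-associative recall.
# Auto-associative: a partial pattern recalls the full pattern (same category).
# Hetero-associative: a pattern in one category recalls one in another category.
patterns = ["the quick brown fox"]                           # auto-assoc. store
stored = {"the quick brown fox": "jumps over the lazy dog"}  # hetero pairs

def auto_recall(cue):
    # complete a partial cue to the best-matching stored pattern
    return next((p for p in patterns if p.startswith(cue)), None)

def hetero_recall(pattern):
    # map a pattern in one category to its associate in another
    return stored.get(pattern)

full = auto_recall("the quick br")   # -> "the quick brown fox"
assoc = hetero_recall(full)          # -> "jumps over the lazy dog"
print(full, "->", assoc)
```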

 

Another aspect I’d like to raise is that although learning in the biological brain might follow HTM and similar hierarchical statistical models, human thought and reasoning do not. As Daniel Kahneman pointed out in his recent book “Thinking, Fast and Slow [14]”: “Our mind is strongly biased toward causal explanations and does not deal well with ‘mere statistics’. When our attention is called to an event, associative memory will look for its cause – more precisely, activation will automatically spread to any cause that is already stored in memory. Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.” In short, Kahneman’s work shows that “Humans are not well described by the rational agent model,” and Bayesian belief propagation models are certainly rational agent models. Von Neumann would be proud of these models, as they describe a rational agent (which some have dubbed an “Econ”) that makes risky decisions on gambles using a rationalized set of axioms (the “expected utility hypothesis”). Kahneman and others have shown that humans exhibit systematic violations of the axioms of rationality in making risky choices between gambles. Now, this may be a shortcoming of the biological (namely human) brain, and I could argue that if we are to build a “self-sufficient generalized AI,” or in my terms a “Companion AI,” we might want to improve upon these seeming flaws, forcing the Companion AI to be a rational agent. But we still don’t know why humans exhibit these systematic biases in thinking from a neurological perspective – is it a limitation of neocortical capacity (which might point to a limitation of memory and processing, or neural-synaptic capacity and connectivity), or is it an inherent, integral feature of the architecture of the neocortex-old brain that has an evolutionary purpose, or both? Are there salient advantages to these modes of thought, as opposed to flaws, and if so, can we identify neurological processing models to explain them? Could a Companion AI that is modeled on a strict rational agent-Econ ever make major scientific or technological discoveries, imagine and create great art, music or literature, or entertain with dramatic flair or comic wit?
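
For concreteness, here is the expected-utility calculus that defines such a rational agent. The gamble and the concave utility function are illustrative choices of mine, not examples taken from [14]; the point is that the Econ's choice is fixed arithmetic, whereas human choices shift with framing.

```python
import math

# Expected-utility calculus of the rational "Econ". The gamble and the
# square-root (risk-averse) utility function are illustrative, not from [14].
def expected_utility(outcomes, utility=math.sqrt):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * utility(x) for p, x in outcomes)

sure_thing = [(1.0, 800)]                # $800 for certain
gamble = [(0.85, 1000), (0.15, 0)]       # 85% chance of $1000, else nothing

print(expected_utility(sure_thing))      # ~28.3
print(expected_utility(gamble))          # ~26.9: the Econ takes the sure thing
# Expected VALUE favors the gamble (850 > 800), yet concave utility picks the
# sure thing; Kahneman's finding is that no single fixed utility function of
# this kind reproduces actual human choices across framings.
```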

 

Cognitive scientists (namely psychologists studying human behavior from a more rigorous, mathematical perspective) have used Bayesian inference models to interpret complex behavioral data with respect to human memory recall, recognition, and other modes of cognition, but as some in this field have pointed out: is Bayesian inference being used just for data analysis, or is it itself integral to a model of human cognition? Bayesian inference is used for many types of data analysis applications, including signal detection theory (SDT) and independent component analysis (ICA). Neuroscientists have used SDT-ICA on fMRI data to separate neuronal-synaptic activation, physiological, and other signals, but temporal information is still limited, and ICA assumes statistical independence of signal sources. Using Bayesian inference and SDT-ICA to resolve a “theory of the whole brain” is interesting, but we can and must look toward multiple analysis, imaging and detection tools (some as yet undiscovered) to understand the brain, to formulate all-encompassing or complete models or theories, and to determine to what extent the brain itself uses such inferencing.

 

On that note, there are many theories/models aimed at explaining cognitive processing in the brain, with scopes that apply to various levels in Fig. 1. At the neural circuit/network level, there is a push toward “neurocognitive networks,” biologically inspired models incorporating the large-scale structure and dynamics of the brain, particularly the processing in the thalamocortical and corticocortical system. Some of the efforts within this thrust can be generalized to more algorithmic approaches that would essentially be similar to the HTM model discussed above, or more complex generalizations of it (see, e.g., [19]). Models that are even more abstract would of course move to the top of the scale in Fig. 1. Incorporating increasing experimental/phenomenological detail into large-scale models would move the scale/scope toward the bottom of Fig. 1, the molecular/cellular (biophysical) level (see, e.g., [20]). For some insights into properties and unconventional observations that would impact models for cognitive systems, see key considerations for artificial cognitive systems (representations and architectures). As mentioned in Part 1 of this series, Gerald Edelman and his colleagues/students [21,22] have intriguingly investigated the role of selectional theories in developing models of the brain (e.g. neural Darwinism), defining testable predictions that would have an impact on the development of artificial cognitive systems, and form a range of complex generalizations above that of simpler memory prediction frameworks or of Hawkins’ HTM model, which is based more closely on unimodal processing in the visual cortex. More recently, the mapping efforts of Sporns et al. (see [22] and refs. therein) are on a productive track to produce the data and resolution necessary to test various theories and models.

 

References and Endnotes:

 

[1] “Henry Markram on simulating the brain — the next decisive years,” International Supercomputing Conference (video), July 2011. Excellent, highly recommended video.

[2] “The Cat is Out of the Bag: Cortical Simulations with 10^9 Neurons, 10^13 Synapses,” R. Ananthanarayanan et al., Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, 2009.

[3] “Network architecture of the long-distance pathways in the macaque brain,” D. Modha et al., PNAS, June 2010.

[4] “Allen Institute aims to crack neural code,” Nature, March 2011.

[5] “A Digital Neurosynaptic Core Using Embedded Crossbar Memory with 45pJ per Spike in 45nm,” P. Merolla et al., IEEE Custom Integrated Circuits Conference, Sept. 2011.

[6] “A 45nm CMOS Neuromorphic Chip with a Scalable Architecture for Learning in Networks of Spiking Neurons,” J. Seo et al., IEEE Custom Integrated Circuits Conference, Sept. 2011.

[7] “Brain-Like Chip May Solve Computers' Big Problem: Energy,” Discover Magazine, Oct. 2009.

[8] “Dark Silicon and the End of Multicore Scaling,” H. Esmaeilzade et al., Proceedings of the 38th International Symposium on Computer Architecture (ISCA ’11).

[9] “A Multilayered Superconducting Neural Network Implementation,” E. Rippert and S. Lomatch, IEEE Transactions on Applied Superconductivity, vol. 7 (2), p. 3442, Jun. 1997.

[10] “On Intelligence,” Jeff Hawkins, Owl Books, 2004.

[11] “An Organizing Principle for Cerebral Function: The Unit Model and the Distributed System,” V.B. Mountcastle, in “The Mindful Brain,” ed. G.M. Edelman and V.B. Mountcastle, MIT Press, 1978. See also “Perceptual Neuroscience,” V.B. Mountcastle, Harvard Univ. Press, 1998.

[12] “Towards a Mathematical Theory of Cortical Micro-circuits,” D. George and J. Hawkins, PLoS Computational Biology vol. 5 (10), Oct. 2009. See also references therein, which offer a rich background in “neurocomputing.”

[13] “The Brainy Learning Algorithms of Numenta,” MIT Technology Review, Dec. 17, 2010.

[14] “Thinking, Fast and Slow,” Daniel Kahneman, Farrar, Straus and Giroux, 2011. (Humans are not rational agents; in particular, in solving elementary problems that involve uncertain reasoning: See list of cognitive biases for several examples.)

[15] “The Quest for Consciousness: A Neurobiological Approach,” Christof Koch, Roberts & Co. Publishers, 2004.

[16] “The Scientific American Book of the Brain,” ed. Antonio R. Damasio, 1999.

[17] “Cortical Computing with Memristive Nanodevices,” G.S. Snider, SciDac Review, Winter 2008.

[18] “Protein-Based Memristive Nanodevices,” F. Meng et al., Small, vol. 7 (21), Sept. 2011. See also “First demonstration of a memristive nanodevice based on protein.”

[19] “The Free-Energy Principle: A Unified Brain Theory?,” K. Friston, Nature Reviews Neuroscience, vol. 11, 2010.

[20] “Large-Scale Model of Mammalian Thalamocortical Systems,” E.M. Izhikevich and G.M. Edelman, Proc. Natl. Acad. Sci., vol. 105, 2008.

[21] “A Universe of Consciousness: How Matter Becomes Imagination,” G.M. Edelman and G. Tononi, Perseus Books, 2000.

[22] “Networks of the Brain,” Olaf Sporns, MIT Press, 2011.
