Welcome to The Neuromorphic Engineer

Adaptive, brain-like systems give robots complex behaviors


Gennady Livitz, Massimiliano Versace, Anatoli Gorchetchnikov, Heather Ames, Jasmin Léveillé, Ben Chandler, Ennio Mingolla, and Zlatko Vasilkoski

14 February 2011

Converging advances in memory, parallel computers, and neural network models will soon allow for systems that can support complicated activities in virtual and robotic agents.

Despite recent advances in computational power and memory capacity, realizing brain functions that allow for perception, cognition, and learning on biological temporal and spatial scales remains out of reach for even the fastest computers. By contrast, these functions are easily achieved by mammalian brains. For example, a rodent placed in a water pool can find its way to a submerged platform, using visual cues to self-localize and reach a learned safe location. Even a best-case extrapolation for implementing such behavior at a functional level in an artificial brain based on conventional technology would consume several orders of magnitude more power and space than its biological counterpart. Clearly, the computational principles employed by a mammalian brain are radically different from those used by today's computers.

Classical implementations of large-scale neural systems in computers use resources such as central processing unit (CPU) and graphics processing unit (GPU) cores, mass memory storage, and parallelization algorithms. Designs for such systems must cope with power dissipation from data transmission between processing and memory units. By some estimates, this loss is millions of times the power required to actually compute, that is, to create meaningful new register contents. Such a high transmission loss is unavoidable as long as memory and computation are physically distant. Creating an electronic brain that fits within the volume of a mammalian brain is thus impossible with conventional technology.

The Defense Advanced Research Projects Agency (DARPA)-sponsored Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project seeks hardware solutions that reduce the power consumed by electronic synapses and achieve a memory density of 10^15 bits per square centimeter. One approach is based on memristive devices. The memristor, initially theorized by University of California, Berkeley Professor Leon Chua1 and later discovered by HP Labs,2 has the unique property of remembering its stimulation history in its resistive state. It requires no power to maintain its memory, making it ideal for implementing the dense, low-power synapses needed by large-scale neural models. The challenge is to build a software platform able to exploit the memristor's capabilities.
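The memristor's defining property, state that tracks stimulation history and persists without power, can be illustrated with the linear ion-drift model published by HP Labs.2 The sketch below is only illustrative (the parameter values are nominal, not those of any SyNAPSE device): the internal state variable w integrates applied current, so the device's resistance encodes its past stimulation and is retained when the current is removed.

```python
# Illustrative sketch of a linear ion-drift memristor model (after
# Strukov et al.), with nominal parameter values chosen for clarity:
# the state w integrates applied current, so resistance remembers the
# stimulation history and persists when power is removed.

R_ON, R_OFF = 100.0, 16000.0   # fully doped / undoped resistances (ohms)
MU, D = 1e-14, 1e-8            # dopant mobility (m^2/(V s)), thickness (m)
DT = 1e-6                      # integration time step (s)

class Memristor:
    def __init__(self, w=0.5):
        self.w = w  # normalized dopant-boundary position, in [0, 1]

    @property
    def resistance(self):
        # Series combination of the doped and undoped regions
        return R_ON * self.w + R_OFF * (1.0 - self.w)

    def apply_current(self, i, steps=1):
        # dw/dt = (mu * R_ON / D^2) * i, with w clipped to [0, 1]
        for _ in range(steps):
            self.w += MU * R_ON / D**2 * i * DT
            self.w = min(max(self.w, 0.0), 1.0)
        return self.resistance

m = Memristor()
r0 = m.resistance
m.apply_current(1e-4, steps=1000)   # positive current lowers resistance
r1 = m.resistance
m.apply_current(0.0, steps=1000)    # zero current: the state is retained
r2 = m.resistance
```

The non-volatility in the last step is what makes the device attractive as a synapse: no refresh power is spent keeping a learned weight in place.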

This platform, named Cog ex Machina3 (Cog), is being developed at Hewlett-Packard by Greg Snider. Cog abstracts away the underlying hardware and allocates processing resources to computational algorithms based on CPU/GPU availability. It exposes a programming interface that enforces synchronous parallel processing of neural data encoded as multidimensional arrays (tensors).
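Cog's actual API is not reproduced here, but the tensor style of programming it enforces can be sketched in plain NumPy (a hypothetical stand-in, not Cog code): an entire layer of neurons is one array, and each simulation tick updates every neuron synchronously with a single tensor expression rather than per-neuron loops.

```python
import numpy as np

# Hypothetical sketch (not Cog's actual API) of tensor-style neural
# programming: a whole layer is a 2-D array, and every tick updates all
# neurons synchronously with one vectorized expression.

def step(v, input_current, leak=0.1, dt=1.0):
    """Leaky-integrator update applied to the whole layer at once."""
    return v + dt * (-leak * v + input_current)

layer = np.zeros((64, 64))        # 64x64 grid of membrane potentials
stimulus = np.zeros((64, 64))
stimulus[30:34, 30:34] = 1.0      # a small patch of driving input

for _ in range(50):               # 50 synchronous simulation ticks
    layer = step(layer, stimulus)

# Stimulated neurons converge toward input/leak = 10; the rest stay at 0.
```

Because the update is a single array expression, a runtime like Cog is free to partition it across whatever CPU/GPU (or, eventually, memristive) resources are available without the model code changing.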

Our Modular Neural Exploring Traveling Agent (MoNETA) project,4 supported by DARPA/SyNAPSE via a subcontract with HP, uses Cog to progressively implement complex, whole-brain systems able to leverage the power of memristive hardware that is yet to be designed. MoNETA is the brain of an animat, a neuromorphic agent autonomously learning to perform complex behaviors in a virtual environment. It combines visual scene analysis, spatial navigation, and plasticity. The system is intended to replicate a rodent's learning to swim to a submerged platform in the Morris water maze task4 (see Figures 1a, 1b), a behavior that involves cooperation among several brain areas. The MoNETA brain will eventually implement many cortical and subcortical areas that will allow an animat or robot to engage with a virtual or real environment.


Figure 1. (a) The Morris water maze and (b) the Modular Neural Exploring Traveling Agent (MoNETA) virtual environment.
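The behavioral loop of the water-maze task can be made concrete with a minimal tabular reinforcement-learning stand-in. This is emphatically not MoNETA's neural implementation, just a sketch of the behavior to be replicated: an agent released at a fixed point learns, from reward delivered at a hidden platform, which move to make from each self-localized position.

```python
import random

# Minimal stand-in for the water-maze behavior (not MoNETA's neural
# implementation): a Q-learning agent on a grid learns, from reward at
# a hidden platform, which move to make from each position.

SIZE, PLATFORM = 5, (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {}                                   # (state, action) -> learned value

def choose(state, eps):
    if random.random() < eps:            # occasional exploratory move
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def trial(eps, alpha=0.5, gamma=0.9, max_steps=100):
    s = (0, 0)                           # fixed release point
    for t in range(max_steps):
        a = choose(s, eps)
        nxt = (min(max(s[0] + a[0], 0), SIZE - 1),
               min(max(s[1] + a[1], 0), SIZE - 1))
        r = 1.0 if nxt == PLATFORM else 0.0
        best = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best - Q.get((s, a), 0.0))
        if nxt == PLATFORM:
            return t + 1                 # steps taken to reach the platform
        s = nxt
    return max_steps

random.seed(0)
early = trial(eps=1.0)                   # first swim: random search
for _ in range(300):
    trial(eps=0.2)                       # training trials
late = trial(eps=0.0)                    # learned, near-direct route
```

After training, the greedy route approaches the 8-step minimum from (0, 0) to (4, 4), mirroring the rodent's shortening escape latency across trials.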

We prepared a proof of concept on a robotic platform (iRobot Create) controlled by a simplified Cog-based MoNETA brain (see Figure 2). Based on associated reward values, the robot learned to approach red objects and avoid green ones, and it learned not to revisit objects even when they were attractive. This seemingly simple task involved orienting toward a goal, navigation, object avoidance, sensory processing, motor control, and adaptive learning. The use of parallelizable computational threads and tensor data representations resulted in solutions similar to those found in biological brains, such as layered architecture, parallel processing pathways (for example, what and where pathways), visual-image segmentation, and attentional drive (see Figure 3).


Figure 2. The iRobot Create platform.


Figure 3. Conceptual diagram of the iRobot brain.
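The approach/avoid learning described above can be sketched at the behavioral level (again, a hedged stand-in, not the actual MoNETA/Cog implementation, and the reward magnitudes and learning rate are assumptions): object colors acquire values through a delta rule driven by reward, and a visit memory suppresses the drive to return to objects already explored.

```python
# Hedged sketch of the approach/avoid behavior (not the actual MoNETA
# implementation): colors acquire values via a delta rule driven by
# reward, and a visit memory habituates the drive toward seen objects.

values = {"red": 0.0, "green": 0.0}    # learned value per object color
REWARD = {"red": 1.0, "green": -1.0}   # assumed reward on contact
ALPHA = 0.3                            # assumed learning rate
visited = set()                        # object IDs already approached

def learn(color):
    """Delta-rule update of the color's value toward the delivered reward."""
    values[color] += ALPHA * (REWARD[color] - values[color])

def drive(obj_id, color):
    """Net attraction to an object; zero if it was already visited."""
    return 0.0 if obj_id in visited else values[color]

# A few contact episodes: the robot bumps into objects and learns.
for color in ["red", "green", "red", "green", "red"]:
    learn(color)

attraction = drive("obj1", "red")      # novel red object: positive drive
visited.add("obj1")
revisit = drive("obj1", "red")         # same object again: habituated
avoid = drive("obj2", "green")         # novel green object: negative drive
```

The interplay of the two signals, learned value and novelty, is what produces the observed behavior: approach red, avoid green, and ignore even attractive objects once visited.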

Cog is a scalable, powerful platform for neuromorphic computation that will soon make it possible to implement brain models, such as MoNETA, comparable in size, power, and behavioral complexity to biological brains.

In the context of the SyNAPSE project, we will continue developing large-scale, multi-system neural models to be executed on high-density, low-power neuromorphic hardware. We will test these models in increasingly complex virtual environments as well as on robots with the target of replicating classic experimental results from the rodent behavioral neuroscience literature.




Authors

Gennady Livitz
Department of Cognitive and Neural Systems, Boston University (BU)

Gennady Livitz, research scientist, earned his PhD in cognitive and neural systems from Boston University and holds a master's degree in electrical engineering. He has a background in software engineering. His research interests include biologically inspired brain models and neuromorphic computation.

Massimiliano Versace
Department of Cognitive and Neural Systems, Boston University (BU)

Massimiliano Versace, senior research scientist, is also director of the Neuromorphics Lab at the National Science Foundation's (NSF's) Center of Excellence for Learning in Education, Science and Technology (CELEST). He is co-principal investigator of the BU subcontract with Hewlett-Packard in the Defense Advanced Research Projects Agency (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

Anatoli Gorchetchnikov
Department of Cognitive and Neural Systems, Boston University (BU)

Anatoli Gorchetchnikov, research assistant professor, holds bachelor's and master's degrees in computer science and a PhD in cognitive and neural systems. He is also project leader for the Modular Neural Exploring Traveling Agent (MoNETA) within DARPA's SyNAPSE program.

Heather Ames
Department of Cognitive and Neural Systems, Boston University (BU)

Heather Ames, research scientist, is also director of technology outreach at the NSF-sponsored CELEST Science of Learning Center and a member of its governing board, which facilitates technology transfer to private industries and labs.

Jasmin Léveillé
Department of Cognitive and Neural Systems, Boston University (BU)

Jasmin Léveillé, postdoctoral associate, focuses on human and computer vision as well as neural architectures for high-performance computing. He received a PhD in cognitive and neural systems from BU.

Ben Chandler
Department of Cognitive and Neural Systems, Boston University (BU)

Ben Chandler, PhD candidate, holds a BS in cognitive science from Carnegie Mellon University. His research interests are large-scale simulation, homeostatic plasticity, and neuromorphic computing.

Ennio Mingolla
Department of Cognitive and Neural Systems, Boston University (BU)

Ennio Mingolla, professor, develops and tests empirical neural network models of visual perception, notably the segmentation, grouping, and contour formation processes of early and middle vision in primates, and looks at the transition of these models to technological applications.

Zlatko Vasilkoski
Harvard Medical School

Zlatko Vasilkoski, research scientist, is a physicist.


References
  1. L. Chua and S. M. Kang, Memristive devices and systems, Proc. IEEE 64 (2), pp. 209-223, 1976.

  2. D. B. Strukov, G. S. Snider, D. R. Stewart and R. S. Williams, The missing memristor found, Nature 453, pp. 80-83, 2008.

  3. G. Snider, R. Amerson, D. Carter, H. Abdalla, S. Qureshi, J. Léveillé and M. Versace, Adaptive computation with memristive memory, IEEE Computer, 2010 (in press).

  4. M. Versace and B. Chandler, The brain of a new machine, IEEE Spectrum 47 (12), pp. 30-37, 2010.


 
DOI:  10.2417/1201101.003500



