Welcome to The Neuromorphic Engineer
An information-theoretic library for the analysis of neural codes


R. A. A. Ince, C. Bartolozzi, and S. Panzeri

29 June 2009

PyEntropy may prove an essential tool for the evaluation, comparison, and parametric analysis of neural networks.

Information theory,1,2 the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems, both in vivo and in silico, as stochastic communication channels and gain valuable, quantitative insights into their function. Further, the theory provides a set of fundamental mathematical quantities, such as entropy and mutual information, that quantify with meaningful numbers the reduction of uncertainty about stimuli gained from neural responses. It does this without having to make any specific assumptions about what is signal and what is noise in the neuronal response.
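As a concrete illustration (ours, not part of the original article), the simplest "plug-in" estimates of entropy and mutual information can be computed directly from observed frequencies. The function names here are our own:

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate, in bits."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(stimuli, responses):
    """Plug-in mutual information I(S;R) = H(S) + H(R) - H(S,R), in bits."""
    joint = list(zip(stimuli, responses))
    return entropy(stimuli) + entropy(responses) - entropy(joint)

# A response that copies a fair binary stimulus carries exactly 1 bit about it.
s = [0, 1] * 500
r = s[:]                          # perfectly informative response
print(mutual_information(s, r))   # → 1.0
```

Mutual information quantifies the reduction of uncertainty about the stimulus gained from observing the response, without any assumption about what counts as signal or noise.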

Figure 1. Limited sampling bias: simulation of an 'uninformative' system whose discrete response is uniformly distributed regardless of which of two values of a putative explanatory variable X was presented. Examples of empirical response probability histograms (red solid lines) sampled from 40 and 200 observations (top and bottom rows respectively) are shown in the left and central columns (responses to X = 1 and X = 2 respectively). The black dotted horizontal line is the true response distribution. The right column shows (as blue histograms) the distribution (over 5000 simulations) of the mutual information values obtained with 40 (top) and 200 (bottom) observations. As the number of observations increases, the limited sampling bias decreases. The dashed green vertical line in the right column indicates the true value of the mutual information carried by the simulated system (0 bits). Reproduced with permission.3

Information theoretic techniques have been successfully used to address a number of questions about sensory coding. For example, they have been used to determine whether neurons convey information by millisecond-precision spike timing or simply by the total number of emitted spikes (the spike count).4 The theory has also been used to characterize the functional role of correlations in population activity, by investigating in which conditions correlations play a quantitatively important role in transmitting information about the stimulus.5 These techniques are not only suitable for use with real biological neurons, but are potentially just as relevant to the analysis of networks of VLSI neurons. In fact, much like their biological counterparts, silicon recurrent networks of spiking neurons have the ability to infer unknown information from incomplete or ambiguous data:6 in other words, to reduce the uncertainty about the input stimuli presented. Information theory can help to better understand how they achieve this.

Not only can information theory be used to study neural representations of stimuli, but also to reveal how neurons interact with each other and exchange information. For example, it can be used to establish causal relationships between neurons or neuronal populations,7 or to infer the minimal structure of neural interactions that explains the network behavior, for example by means of the maximum entropy principle.8 In the context of silicon networks, these tools can help establish a stronger link between the circuit architecture and the resulting neural interactions.
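One directed measure of such interactions is Schreiber's transfer entropy:7 the reduction in uncertainty about a neuron's next state gained from another neuron's past, beyond what its own past already provides. The following is our own minimal plug-in sketch with history length 1, not code from PyEntropy:

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(Y -> X), in bits,
    using a history of length 1 for both series."""
    triples = list(zip(x[1:], x[:-1], y[:-1]))      # (x_next, x_past, y_past)
    n = len(triples)
    c_xyz = Counter(triples)
    c_xz = Counter((a, b) for a, b, _ in triples)   # (x_next, x_past)
    c_z = Counter(b for _, b, _ in triples)         # x_past
    c_zy = Counter((b, c) for _, b, c in triples)   # (x_past, y_past)
    te = 0.0
    for (a, b, c), cnt in c_xyz.items():
        # p(x_next | x_past, y_past) versus p(x_next | x_past)
        num = cnt / c_zy[(b, c)]
        den = c_xz[(a, b)] / c_z[b]
        te += (cnt / n) * math.log2(num / den)
    return te

# x copies y with a one-step delay, so y's past strongly predicts x's future.
y = [0, 1, 1, 0, 1, 0, 0, 1] * 100
x = [0] + y[:-1]
print(transfer_entropy(x, y))   # clearly positive, in bits
```

Because the dependence is directed, transfer_entropy(x, y) and transfer_entropy(y, x) generally differ, which is what makes the measure useful for inferring causal structure.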

A major difficulty when applying techniques involving information theoretic quantities to experimental systems is that they require measurement of the full probability distributions of the variables involved. If we had an infinite amount of data, we could measure the true stimulus-response probabilities precisely. However, any real experiment (on either biological or silicon neurons) only yields a finite number of trials from which these probabilities must be estimated. The estimated probabilities are subject to statistical error and necessarily fluctuate around their true values. These finite sampling fluctuations lead to a significant systematic error, called the limited sampling bias (Figure 1), in estimates of entropies and information. This bias is the difference between the expected value of the quantity considered, computed from probability distributions estimated with N trials or samples, and its value computed from the true probability distribution. The bias constitutes a significant practical problem, because its magnitude is often of the order of the information values to be evaluated. Moreover, it cannot be alleviated simply by averaging over many neurons with similar characteristics.
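The bias is easy to reproduce in simulation. The sketch below (our own illustration, in the spirit of Figure 1) draws responses uniformly at random regardless of the stimulus, so the true mutual information is exactly 0 bits, yet the naive plug-in estimate is systematically positive, shrinking as the number of trials grows:

```python
import math
import random
from collections import Counter

def plugin_mi(stim, resp):
    """Naive (plug-in) mutual information estimate, in bits."""
    n = len(stim)
    def H(xs):
        return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())
    return H(stim) + H(resp) - H(list(zip(stim, resp)))

def mean_plugin_mi(n_trials, n_sims=200, n_levels=8, seed=0):
    """Average plug-in MI over many simulations of an uninformative system:
    the response is uniform over n_levels whatever the stimulus, so the
    true mutual information is 0 bits and anything above 0 is pure bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        stim = [rng.randint(1, 2) for _ in range(n_trials)]
        resp = [rng.randrange(n_levels) for _ in range(n_trials)]
        total += plugin_mi(stim, resp)
    return total / n_sims

for n in (40, 200, 1000):
    print(n, round(mean_plugin_mi(n), 4))   # true MI is 0; the bias shrinks with n
```

A first-order expansion predicts a bias of roughly (S-1)(R-1)/(2N ln 2) bits for S stimuli and R response levels,10 which matches the trend the simulation shows.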

Though a number of techniques have been developed to address the limited sampling bias and allow accurate estimates of information theoretic quantities,9,10 many of these are difficult for non-specialists to implement. We have therefore developed PyEntropy,9 a general Python library for computing entropy and information quantities, to allow use of these techniques by a wider audience. It provides a general and flexible interface to optimized implementations of many of the leading bias corrections and is released under an open-source license.11 We hope this will allow wider application of these techniques.
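To give a flavor of what such corrections do, here is one of the simplest, the classical Miller-Madow first-order correction, in our own self-contained sketch. This is an illustration of the idea, not PyEntropy's API, and PyEntropy implements considerably more sophisticated estimators:9,10

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Maximum-likelihood (plug-in) entropy estimate, in bits."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def miller_madow_entropy(samples):
    """Plug-in estimate plus the first-order Miller-Madow correction
    (m - 1) / (2 N ln 2), where m is the number of occupied response bins."""
    n = len(samples)
    m = len(set(samples))
    return plugin_entropy(samples) + (m - 1) / (2 * n * math.log(2))

rng = random.Random(1)
sample = [rng.randrange(8) for _ in range(50)]   # 50 draws, uniform over 8 symbols
print(plugin_entropy(sample))        # systematically below the true 3 bits
print(miller_madow_entropy(sample))  # the correction pushes the estimate back up
```

The correction term is an estimate of the expected downward bias of the plug-in entropy; applied to the stimulus-conditional and unconditional response entropies, it reduces the corresponding upward bias in mutual information.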

We note that significant effort is currently being devoted to the VLSI implementation of large-scale networks of neurons, and to the study of the effect of diverse architectures and plasticity mechanisms on the computational capabilities of such networks.12–14 In this context, we think PyEntropy will prove an essential tool for quantifying a network's ability to reduce uncertainty about stimuli, and hence for the evaluation, comparison, and parametric analysis of the networks under investigation. Thus, we think it will contribute to rapid advances in the understanding of coding strategies in both biological and silicon neural networks, and lead to a closer comparison of the computational capabilities of the two systems.


R. A. A. Ince
Faculty of Life Sciences, University of Manchester

Robin Ince has masters degrees in mathematics, audio acoustics, and computational neuroscience. He is currently studying for a PhD at the University of Manchester, on the topic of information theoretic analysis of neural data.

C. Bartolozzi
Robotics, Brain, and Cognitive Sciences Department, Italian Institute of Technology

Chiara Bartolozzi is a postdoctoral fellow currently working on the application of neuromorphic engineering approaches to the design of sensors for robotic platforms.

S. Panzeri
Robotics, Brain, and Cognitive Sciences Department, Italian Institute of Technology

Stefano Panzeri is a senior research fellow whose current research focuses on developing quantitative data analysis techniques based on information-theoretic principles and on applying these algorithms to neuronal recordings. His goal is to understand how neuronal populations encode and transmit sensory information.

  1. C. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (3), pp. 379-423, 1948.

  2. T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd Ed., John Wiley & Sons, 2006.

  3. S. Panzeri, C. Magri and L. Carraro, Sampling bias, Scholarpedia 3 (9), pp. 4258, 2008.

  4. J. Victor, Approaches to information-theoretic analysis of neural activity, Biological Theory 1, pp. 302-316, 2006.

  5. G. Pola, A. Thiele, K. Hoffmann and S. Panzeri, An exact method to quantify the information transmitted by different mechanisms of correlational coding, Network Comp. in Neural Sys. 14 (1), pp. 35-60, 2003.

  6. G. Indiveri, E. Chicca and R. Douglas, Artificial cognitive systems: From VLSI networks of spiking neurons to neuromorphic cognition, Cognitive Comp., 2009. (In press.)

  7. T. Schreiber, Measuring information transfer, Phys. Rev. Lett. 85 (2), pp. 461-464, 2000.

  8. E. Schneidman, M. Berry II, R. Segev and W. Bialek, Weak pairwise correlations imply strongly correlated network states in a neural population, Nature 440 (7087), pp. 1007-1012, 2006.

  9. R. A. A. Ince, R. S. Petersen, D. C. Swan and S. Panzeri, Python for information theoretic analysis of neural data, Frontiers in Neuroinformatics 3 (4), 2009.

  10. S. Panzeri, R. Senatore, M. Montemurro and R. Petersen, Correcting for the Sampling Bias Problem in Spike Train Information Measures, J. Neurophysiology 98 (3), pp. 1064-1072, 2007.

  11. http://code.google.com/p/pyentropy/

  12. F. Folowosele, R. Vogelstein and R. Etienne-Cummings, Real-time silicon implementation of V1 in hierarchical visual information processing, IEEE Biomedical Circuits and Sys. Conf. (BioCAS 2008), pp. 181-184, 2008.

  13. M. Giulioni, P. Camilleri, V. Dante, D. Badoni, G. Indiveri, J. Braun and P. Del Giudice, A VLSI network of spiking neurons with plastic fully configurable 'stop-learning' synapses, IEEE Int'l Conf. Electronics, Circuits and Sys. (ICECS 2008), pp. 678-681, 2008.

  14. E. Chicca, G. Indiveri and R. Douglas, Neural Information Processing Systems Foundation, pp. 257-264, MIT Press, Cambridge, MA, 2007.

DOI:  10.2417/1200906.1663

