Welcome to The Neuromorphic Engineer
An information-theoretic library for the analysis of neural codes
Information theory,1,2 the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems, both in vivo and in silico, as stochastic communication channels and to gain valuable, quantitative insights into their function. Further, the theory provides a set of fundamental mathematical quantities, such as entropy and mutual information, that put meaningful numbers on the reduction of uncertainty about a stimulus gained from the neural response. It does so without requiring any specific assumptions about what is signal and what is noise in the neuronal response.
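To make these quantities concrete, the short sketch below computes entropy and mutual information for a toy discrete stimulus-response channel in plain Python. The joint probabilities are hypothetical numbers chosen purely for illustration, not data from any experiment or from PyEntropy itself.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical joint distribution P(stimulus, response) for a binary
# stimulus and a binary response (values chosen for illustration only).
joint = {('s0', 'r0'): 0.4, ('s0', 'r1'): 0.1,
         ('s1', 'r0'): 0.1, ('s1', 'r1'): 0.4}

# Marginal distributions over stimuli and over responses.
p_s, p_r = {}, {}
for (s, r), p in joint.items():
    p_s[s] = p_s.get(s, 0.0) + p
    p_r[r] = p_r.get(r, 0.0) + p

h_s = entropy(p_s.values())     # stimulus entropy H(S)
h_r = entropy(p_r.values())     # response entropy H(R)
h_sr = entropy(joint.values())  # joint entropy H(S,R)

# Mutual information I(S;R) = H(S) + H(R) - H(S,R): the reduction in
# uncertainty about the stimulus gained from observing the response.
mi = h_s + h_r - h_sr
print(f"H(S) = {h_s:.3f} bits, I(S;R) = {mi:.3f} bits")
```

Here observing the response removes about a quarter of a bit of the one bit of stimulus uncertainty, without any assumption about which features of the response carry the signal.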
Information theoretic techniques have been successfully used to address a number of questions about sensory coding. For example, they have been used to determine whether neurons convey information by millisecond-precision spike timing or simply by the total number of emitted spikes (the spike count).4 The theory has also been used to characterize the functional role of correlations in population activity, by investigating the conditions under which correlations play a quantitatively important role in transmitting information about the stimulus.5 These techniques are not only suitable for use with real biological neurons, but are potentially just as relevant to the analysis of networks of VLSI neurons. In fact, much like their biological counterparts, silicon recurrent networks of spiking neurons have the ability to infer unknown information from incomplete or ambiguous data:6 in other words, to reduce uncertainty about the input stimuli presented. Information theory can help us better understand how they achieve this.
Information theory can be used not only to study neural representations of stimuli, but also to study how neurons interact with each other and exchange information. For example, it can be used to establish causal relationships between neurons or neuronal populations,7 or to infer the minimal structure of neural interactions that explains the network behavior: for example, by means of the maximum entropy principle.8 In the context of silicon networks, these tools can help establish a stronger link between the circuit architecture and the resulting neural interactions.
A major difficulty when applying techniques involving information theoretic quantities to experimental systems is that they require measurement of the full probability distributions of the variables involved. With an infinite amount of data, we could measure the true stimulus-response probabilities precisely. However, any real experiment (whether on biological or on silicon neurons) yields only a finite number of trials from which these probabilities must be estimated. The estimated probabilities are subject to statistical error and necessarily fluctuate around their true values. These finite-sampling fluctuations lead to a significant systematic error, called the limited sampling bias (Figure 1), in estimates of entropies and information. This bias is the difference between the expected value of the quantity considered, computed from probability distributions estimated with N trials or samples, and its value computed from the true probability distribution. The bias constitutes a significant practical problem because its magnitude is often of the order of the information values to be evaluated. Nor can it be alleviated simply by averaging over many neurons with similar characteristics.
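The limited sampling bias is easy to reproduce numerically. The sketch below is a minimal plain-Python illustration with made-up parameters (a uniform distribution over eight response symbols, sampled for only 20 trials per "experiment"); it shows the naive "plug-in" entropy estimate falling systematically below the true value even when averaged over many repeated experiments.

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Naive ('plug-in') entropy estimate, in bits, from a list of samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

random.seed(0)
symbols = list(range(8))  # uniform responses: true entropy is exactly 3 bits
true_h = 3.0

# Repeat a small "experiment" of only 20 trials many times and average
# the plug-in estimates: the average sits systematically below true_h.
n_trials, n_repeats = 20, 1000
estimates = [plugin_entropy([random.choice(symbols) for _ in range(n_trials)])
             for _ in range(n_repeats)]
mean_est = sum(estimates) / n_repeats
bias = mean_est - true_h  # the limited sampling bias (negative for entropy)
print(f"true H = {true_h:.2f} bits, mean estimate = {mean_est:.2f} bits, "
      f"bias = {bias:.2f} bits")
```

For entropy, the plug-in estimate is biased downward by roughly (K − 1)/(2N ln 2) bits, where K is the number of response symbols: here about a quarter of a bit, which is why the bias matters whenever N is not much larger than the number of possible responses.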
Although a number of techniques have been developed to address the limited sampling bias and allow accurate estimates of information theoretic quantities,9,10 many of these are difficult for non-specialists to implement. We have therefore developed PyEntropy,9 a general Python library for computing entropy and information quantities, to make these techniques available to a wider audience. It provides a general and flexible interface to optimized implementations of many of the leading bias corrections and is released under an open source license.11 We hope this will allow wider application of these techniques.
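To give a flavor of what a bias correction does, the sketch below applies the classic first-order Miller-Madow correction, which simply adds (K − 1)/(2N ln 2) bits to the plug-in estimate, where K is the number of observed symbols. This is a generic textbook estimator chosen for illustration under the same made-up setup as above; it is not presented as PyEntropy's own interface or as the state of the art among the corrections the library implements.

```python
import math
import random
from collections import Counter

def plugin_entropy(counts, n):
    """Naive plug-in entropy estimate, in bits, from symbol counts."""
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def miller_madow_entropy(samples):
    """Plug-in estimate plus the first-order Miller-Madow correction,
    (K - 1) / (2 N ln 2) bits, where K is the number of observed symbols."""
    n = len(samples)
    counts = Counter(samples)
    correction = (len(counts) - 1) / (2 * n * math.log(2))
    return plugin_entropy(counts, n) + correction

random.seed(1)
symbols = list(range(8))        # uniform over 8 symbols: true entropy 3 bits
n_trials, n_repeats = 20, 1000  # few trials per "experiment", many repeats

naive, corrected = [], []
for _ in range(n_repeats):
    samples = [random.choice(symbols) for _ in range(n_trials)]
    naive.append(plugin_entropy(Counter(samples), n_trials))
    corrected.append(miller_madow_entropy(samples))

naive_mean = sum(naive) / n_repeats
corrected_mean = sum(corrected) / n_repeats
print(f"naive: {naive_mean:.2f} bits, Miller-Madow: {corrected_mean:.2f} bits "
      f"(true: 3.00 bits)")
```

Even this simple correction recovers most of the quarter-bit of missing entropy in this toy setting; the more sophisticated estimators surveyed in the literature9,10 extend the same idea to smaller sample sizes and to mutual information.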
We note that significant effort is currently being devoted to the VLSI implementation of large-scale networks of neurons, and to the study of the effect of diverse architectures and plasticity mechanisms on the computational capabilities of such networks.12–14 In this context, we think PyEntropy will prove an essential tool for quantifying a network's ability to reduce uncertainty about stimuli, and hence for the evaluation, comparison, and parametric analysis of the networks under investigation. We thus expect it to contribute to rapid advances in the understanding of coding strategies in both biological and silicon neural networks, and to lead to a closer comparison of the computational capabilities of the two systems.
Tell us what to cover!
If you'd like to write an article or know of someone else who is doing relevant and interesting stuff, let us know. E-mail the editor and suggest the subject for the article and, if you're suggesting someone else's work, tell us their name, affiliation, and e-mail.