Welcome to The Neuromorphic Engineer
Multiplier-less reconfigurable architectures for spiking neural networks
Recent advances in reconfigurable hardware (such as field-programmable gate arrays, FPGAs) have made it possible to implement complex systems in comparatively little time. One drawback of implementing neural systems on reconfigurable platforms is that the neurons take up much more device area than they do in custom hardware, so the number of spiking neurons that can operate on a single device is restricted. To mitigate this, we propose area-efficient, multiplier-less reconfigurable architectures for the large-scale implementation of integrate-and-fire and leaky-integrate-and-fire neurons. The rationale behind this investigation is to develop a compact reconfigurable cell in the context of so-called Liquid State Machines1 to model several distributed columns2 and solve problems such as speech recognition.3
Biological neurons process information through short pulses called spikes. Each incoming spike contributes to the membrane potential according to the strength of its synapse. The most biologically plausible models, such as the Hodgkin-Huxley model, are not well suited to hardware implementation because of the computational resources they require. Therefore, simplified models such as integrate and fire (IF) or leaky integrate and fire (LIF)4,5 are generally used instead.
In an IF neuron model, input stimuli are integrated over time and a spike is generated when the membrane potential surpasses a certain threshold. Mathematically, this model can be expressed as C dV(t)/dt = I(t), where V is the membrane potential, C the membrane capacitance, and I(t) the total synaptic input current; when V reaches the threshold, a spike is emitted and the potential is reset.
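The integrate-and-fire dynamics above can be sketched in a few lines of Python; the weights, threshold, and spike trains below are illustrative values, not parameters from the paper.

```python
# Minimal sketch of an integrate-and-fire (IF) neuron in discrete time.
# All parameter values here are hypothetical, for illustration only.

def simulate_if(inputs, weights, threshold=1.0):
    """Integrate weighted input spikes; emit 1 and reset on threshold."""
    v = 0.0                      # membrane potential
    out = []
    for spikes in inputs:        # one tuple of input spikes per time step
        v += sum(w * s for w, s in zip(weights, spikes))
        if v >= threshold:
            out.append(1)
            v = 0.0              # reset after firing
        else:
            out.append(0)
    return out

# Two synapses with weights 0.4 and 0.3, driven for three time steps.
print(simulate_if([(1, 0), (0, 1), (1, 1)], (0.4, 0.3)))  # → [0, 0, 1]
```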
Multiplication is one of the most common and critical operations in many computational systems and is one of the bottlenecks in implementing large-scale spiking neural networks. Low-cost FPGA devices have no dedicated multiplier units on chip, while high-end FPGA devices have only a limited number of them: significantly fewer than the number of neurons that can be implemented. Since the total number of embedded multipliers on a particular device is limited, the maximum number of synapses is restricted to the number of embedded multipliers. For large networks, therefore, the available logic has to be used to implement multipliers, seriously restricting the maximum number of synapses and neurons that can fit on a target device.
The proposed approach to the implementation of spiking neurons replaces multipliers with spike counters. The weight vector for each synaptic connection can be modified; in this case, where learning is done off-line, the weights can be considered constant. An accumulator collects incoming synaptic values for the membrane potential. As shown in Figure 1, spikes are produced by the spike generator and the total strength of the synapse is determined by the synaptic weight: once the desired synaptic strength is achieved, an excitatory output pulse is generated (for an excitatory synapse). The scheme is flexible in that both single and multiple spikes can be handled. The neuron's membrane is modeled using a simple accumulator and a comparator. An output spike is generated if the potential exceeds a threshold value, and the spike is then transmitted to other neurons in the network. This makes it feasible to implement large numbers of relatively simple spiking neurons.6 The proposed model is composed of two sub-units: the synapse and the soma membrane. The synapse model is shown at the top of Figure 1 and the soma (neuron) at the bottom. The hardware simulations are shown in Figure 2.
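One plausible software analogue of this counter-based scheme (class names and integer values below are illustrative, not taken from the paper): each input spike loads the synapse's constant integer weight into a pending-pulse counter, which is drained one unit pulse per cycle into the soma's accumulator, so no multiplier is needed anywhere.

```python
class PulseSynapse:
    """Multiplier-less synapse sketch: an input spike loads the integer
    weight into a counter, which is drained as unit pulses on later cycles."""
    def __init__(self, weight):
        self.weight = weight     # constant integer weight, learned off-line
        self.pending = 0         # spike counter replacing the multiplier
    def step(self, spike_in):
        if spike_in:
            self.pending += self.weight
        if self.pending:
            self.pending -= 1
            return 1             # one unit pulse toward the soma
        return 0

class Soma:
    """Neuron membrane modeled as a simple accumulator plus comparator."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.potential = 0
    def step(self, pulses):
        self.potential += pulses              # accumulate unit pulses
        if self.potential >= self.threshold:  # comparator
            self.potential = 0                # reset on firing
            return 1                          # output spike to the network
        return 0

# One synapse of weight 3 driving a soma with threshold 3: a single input
# spike yields three unit pulses and hence one output spike.
syn, soma = PulseSynapse(3), Soma(3)
out = [soma.step(syn.step(t == 0)) for t in range(5)]
```

The design choice this illustrates is that a weight-by-spike product is traded for time: the weight is delivered as a train of unit pulses, which costs only a counter and an adder in hardware.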
Perfect-integrator neuron models such as integrate-and-fire are less biologically plausible than others because they do not include a leakage current that returns the neuron to its resting potential in the absence of a stimulus. The leakage property is important for learning, which exhibits a phenomenon of short-term memory.4,7 In LIF neurons, the membrane potential increases with excitatory incoming synapses and, in the absence of pre-synaptic spikes, decays: the model becomes a simple homogeneous first-order differential equation whose solution can be expressed as V(t) = V(0)e^(−t/τ), where τ is the membrane time constant.4 The reconfigurable architectures that emulate the behavior of the IF and LIF neuron models are shown in Figure 3.
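For the leak itself, a common multiplier-less trick on FPGAs is to approximate the exponential decay with a right shift (V ← V − V≫k, i.e. multiplication by 1 − 2⁻ᵏ). The sketch below uses that approximation with illustrative constants; it is an assumption about how such a leak can be built, not necessarily the exact circuit of Figure 3.

```python
# Shift-based LIF sketch: the decay V(t) = V(0)*exp(-t/tau) is approximated
# by subtracting V >> k each step, a common multiplier-less FPGA technique.
# The shift amount, threshold, and drive values are illustrative.

def lif_step(v, pulse_sum, k=3, threshold=100):
    """One time step of a shift-based leaky integrate-and-fire neuron."""
    v -= v >> k                  # leak: scale by (1 - 2**-k) with no multiplier
    v += pulse_sum               # integrate incoming synaptic pulses
    if v >= threshold:
        return 0, 1              # reset membrane and emit a spike
    return v, 0

v, spikes = 0, []
for t in range(40):
    v, s = lif_step(v, 16 if t < 20 else 0)   # drive for 20 steps, then silence
    spikes.append(s)
```

With a sustained drive, the potential climbs toward an equilibrium set by the leak; once the input stops, the membrane decays back toward rest without firing again, which is exactly the short-term-memory behavior the leak provides.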
The design was synthesised with the Xilinx ISE design suite, implemented on a Virtex-II Pro device (xc2vp50), and its resource requirements were calculated. In a linear comparison, almost 2×10³ synapses and 1.2×10³ neurons can be implemented with the proposed architecture. A state-of-the-art device such as the Virtex-5, which has 331,775 logic cells and 51,840 slices, can fit almost 4.3×10³ synapses and 2.5×10³ fully parallel neurons. The proposed design has been used successfully for a speech-recognition application in the context of Liquid State Machines, and the next step is to model multiple cortical columns for multiple input classifications (sensory fusion).
Tell us what to cover!
If you'd like to write an article or know of someone else who is doing relevant and interesting stuff, let us know. E-mail the editor and suggest the subject for the article and, if you're suggesting someone else's work, tell us their name, affiliation, and e-mail.