Neuromorphic auditory sensor applications using SpiNNaker

Auditory environment analysis could be useful to aid robot navigation. The Doppler effect gives cues about how objects are moving in a scene. In this workgroup we propose to use a spike-based Neuromorphic Auditory Sensor connected to a 4-chip SpiNNaker board to determine whether an object is moving toward or away from the robot, and/or to extract other useful information such as the position or speed of the source object.
We propose to try this using:
-Neuromorphic Auditory Sensor (NAS).
-Neuromorphic Auditory Visualizer Tool (NAVIS).
-Parallel AER to Spinnaker adapter.
-SpiNNaker Board.
We have brought some NASs and tools that we could lend you for these two weeks if you want to work with them! Just contact us and we will be pleased to give you more information.




The Neuromorphic Auditory Sensor (NAS) [1] is an audio sensor for FPGAs inspired by Lyon's model of the biological cochlea [2]. This sensor processes an incoming audio signal using Spike Signal Processing (SSP) techniques [3], decomposing the audio into its frequency components and providing this information as a stream of events using the Address-Event Representation (AER) [4]. Current state-of-the-art silicon cochleae process audio in an analog way [5], using a bank of low-pass filters (modeling the basilar membrane) and converting the filters' outputs to spikes (modeling the inner hair cells). NAS works the other way around: it first converts the incoming audio to spikes, and then processes these spikes directly using a Spike Low-pass Filter (SLPF) bank with a cascade topology. Because SSP filters are used, the circuits are very simple and need no complex operating units or dedicated resources (e.g. floating-point ALUs, hardware multipliers, RAM memory). As a consequence, NAS designers can replicate SLPFs in low-cost FPGAs, building large-scale NAS designs that run at a low clock frequency and work fully in parallel.
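The cascade idea can be illustrated with a toy rate-based sketch: each stage low-pass filters the previous stage's output, and differences between consecutive stage outputs give band-limited components. This is purely illustrative of the decomposition principle — the real NAS operates on spike streams in FPGA logic, and the function below (`spike_lowpass_cascade`, its parameters, and the step input) is our own hypothetical construction, not the SSP hardware design.

```python
import numpy as np

def spike_lowpass_cascade(spike_rates, n_stages=4, alpha=0.3):
    """Toy cascade of first-order exponential low-pass stages over a
    spike-rate signal. Subtracting consecutive stage outputs yields
    band-limited components, mimicking how a cascaded SLPF bank
    decomposes audio into frequency bands. Illustrative only."""
    stages = []
    x = np.asarray(spike_rates, dtype=float)
    for _ in range(n_stages):
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)   # first-order low-pass update
            y[i] = acc
        stages.append(y)
        x = y                          # cascade: next stage filters this output
    # band-pass-like outputs: difference between consecutive stages
    bands = [stages[i] - stages[i + 1] for i in range(n_stages - 1)]
    return stages, bands

# step change in spike rate: silence, then 100 events/s
rates = np.concatenate([np.zeros(20), np.ones(80) * 100.0])
stages, bands = spike_lowpass_cascade(rates)
print(len(stages), len(bands))  # 4 stage outputs, 3 band outputs
```

Earlier stages respond faster to the step, so each band output captures a different portion of the signal's temporal (and hence spectral) content, which is the essence of the cascade topology described above.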

We already have several NAS configurations generated (64-channel mono, 64-channel stereo, 128-channel stereo, etc.). For this work, we propose to use a 128-channel stereo NAS.

We also have a couple of USBAERmini2 boards [6] to monitor the NAS output on a PC with jAER [7] and store the NAS response as AEDAT files, which can later be analyzed using NAVIS [8]. NAVIS is a desktop application that provides diverse utilities for developing the first post-processing layer on top of the NAS information. NAVIS implements a set of graphs that represent the auditory information as cochleograms, histograms, sonograms, etc. It can also split the auditory information into different sets depending on the activity level of the spike streams. The main contribution of this software tool is its capability to apply complex audio post-processing treatments and representations, which is a novelty for spike-based systems in the neuromorphic community.
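As a rough sketch of what such a recording looks like on disk, here is a minimal reader assuming the jAER AEDAT 2.0 layout (ASCII header lines starting with `#`, followed by pairs of big-endian 32-bit words: event address, then timestamp in microseconds). The channel addresses and timestamps below are synthetic, and the mapping of addresses to NAS channels is an assumption — consult the jAER documentation for the exact format of your files.

```python
import io
import struct
from collections import Counter

def read_aedat2(stream):
    """Minimal AEDAT 2.0 reader (assumed layout): '#' header lines,
    then records of two big-endian 32-bit words (address, timestamp).
    Returns a list of (address, timestamp_us) tuples."""
    while True:                       # skip ASCII header lines
        pos = stream.tell()
        line = stream.readline()
        if not line.startswith(b"#"):
            stream.seek(pos)          # rewind: first binary record
            break
    events = []
    while True:
        rec = stream.read(8)
        if len(rec) < 8:
            break
        addr, ts = struct.unpack(">II", rec)
        events.append((addr, ts))
    return events

# synthetic file: header plus three events on made-up channel addresses
raw = b"#!AER-DAT2.0\r\n" + b"".join(
    struct.pack(">II", addr, ts) for addr, ts in [(3, 10), (3, 25), (7, 40)]
)
events = read_aedat2(io.BytesIO(raw))
activity = Counter(addr for addr, _ in events)  # spike count per address
print(events, dict(activity))
```

A per-address spike count like `activity` is the kind of activity-level information NAVIS uses to split the auditory stream into sets, and a 2D extension of it (address vs. time bin) is essentially a cochleogram.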

We have also developed a NAS-to-SpiNNaker PCB, which feeds the output of the NAS directly into a SpiNNaker neuron population [9], so that the spiking information can be processed in real time.

In this work we will focus on using these tools to estimate the position, direction and/or speed of a moving object (which will also be the sound source, e.g. a car horn) based on the Doppler effect and the NAS output, using a Spiking Neural Network (SNN) deployed on the SpiNNaker board. We still need to discuss the SNN configuration and topology, but we already have a couple of ideas.
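The underlying physics is simple: for a stationary observer and a source moving radially at speed v, the observed frequency is f_obs = f_src · c / (c − v), so a shift of the dominant NAS channel toward higher frequencies means the source is approaching. A short worked sketch of what the SNN would have to approximate (the helper name and the 440/466 Hz values are our own illustrative choices):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def doppler_speed(f_source, f_observed, c=SPEED_OF_SOUND):
    """Radial speed of a moving source heard by a stationary observer.

    From f_obs = f_src * c / (c - v), solving for v gives
    v = c * (f_obs - f_src) / f_obs.
    Positive result: source approaching; negative: source receding."""
    return c * (f_observed - f_source) / f_observed

# a 440 Hz horn heard at 466 Hz: the source is approaching
v = doppler_speed(440.0, 466.0)
print(round(v, 1))  # ≈ 19.1 m/s (about 69 km/h)
```

In the proposed setup, f_observed would come from the most active NAS channel (whose center frequency is known from the filter bank design), and f_source would need to be known or estimated, e.g. from the horn's pitch once the vehicle is moving tangentially.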

[1] Jiménez-Fernández, A., Cerezuela-Escudero, E., Miró-Amarante, L., Domínguez-Morales, M. J., de Asís Gómez-Rodríguez, F., Linares-Barranco, A., & Jiménez-Moreno, G. (2017). A binaural neuromorphic auditory sensor for FPGA: a spike signal processing approach. IEEE Transactions on Neural Networks and Learning Systems, 28(4), 804-818.

[2] Lyon, R. F., & Mead, C. (1988). An analog electronic cochlea. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(7), 1119-1134.

[3] Jimenez-Fernandez, A., Linares-Barranco, A., Paz-Vicente, R., Jiménez, G., & Civit, A. (2010, July). Building blocks for spikes signals processing. In Neural Networks (IJCNN), The 2010 International Joint Conference on (pp. 1-8). IEEE.

[4] The Address Event Representation Communication Protocol.

[5] Yang, M., Chien, C. H., Delbruck, T., & Liu, S. C. (2016). A 0.5 V 55 µW 64×2-channel binaural silicon cochlea for event-driven stereo-audio sensing. IEEE Journal of Solid-State Circuits, 51(11), 2554-2569.

[6] Berner, R., Delbruck, T., Civit-Balcells, A., & Linares-Barranco, A. (2007, May). A 5 Meps $100 USB2.0 address-event monitor-sequencer interface. In Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on (pp. 2451-2454). IEEE.

[7] jAER Open Source Project.

[8] Dominguez-Morales, J. P., Jimenez-Fernandez, A., Dominguez-Morales, M., & Jimenez-Moreno, G. (2017). NAVIS: Neuromorphic Auditory VISualizer Tool. Neurocomputing, 237, 418-422.

[9] SpiNNaker AppNote 8: Interfacing AER devices to SpiNNaker using an FPGA.



Juan Pedro Dominguez-Morales
Daniel Gutierrez-Galan


James Knight
Fernando Perez-Peña
Sahana Prasanna