Brian hears: online auditory processing using vectorisation over channels
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism.
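The abstract describes vectorising filterbank computation over frequency channels in Brian Hears. As a rough illustration of that idea, here is a minimal sketch written against the brian2hears package (the currently distributed packaging of Brian Hears, not the Brian 1 version the article itself describes); the names used below (whitenoise, erbspace, Gammatone, FunctionFilterbank, process) are taken from that package's documented interface rather than from the article, and the 3000-channel figure simply mirrors the inner-hair-cell count quoted in the abstract.

```python
# Minimal channel-vectorised auditory front end (sketch, not from the article):
# a gammatone filterbank followed by a simple hair-cell-like nonlinearity,
# with every stage operating on all frequency channels at once.
import numpy as np
from brian2 import Hz, kHz, ms
from brian2hears import whitenoise, erbspace, Gammatone, FunctionFilterbank

sound = whitenoise(100*ms)             # 100 ms of white noise as the input signal
cf = erbspace(20*Hz, 20*kHz, 3000)     # 3000 centre frequencies, 20 Hz to 20 kHz, ERB-spaced
gammatone = Gammatone(sound, cf)       # linear gammatone filterbank, one filter per channel

# Elementwise half-wave rectification and cube-root compression, applied to
# buffers of shape (nsamples, nchannels), i.e. vectorised over channels.
ihc = FunctionFilterbank(gammatone,
                         lambda x: 3 * np.clip(x, 0, np.inf) ** (1.0 / 3.0))

output = ihc.process()                 # run the whole chain; shape (nsamples, nchannels)
print(output.shape)
```

Because each stage here processes buffers containing all 3000 channels at once, the Python-level overhead per sample is shared across the whole filterbank, which is the point the abstract makes about the cost of interpretation becoming negligible.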
Main Authors: | Bertrand Fontaine, Dan F. M. Goodman, Victor Benichoux, Romain Brette |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2011-07-01 |
Series: | Frontiers in Neuroinformatics |
Subjects: | brian; gpu; python; auditory filter; vectorisation |
Online Access: | http://journal.frontiersin.org/Journal/10.3389/fninf.2011.00009/full |
_version_ | 1818550230804594688 |
---|---|
author | Bertrand Fontaine Dan F. M. Goodman Victor Benichoux Romain Brette |
author_facet | Bertrand Fontaine Dan F. M. Goodman Victor Benichoux Romain Brette |
author_sort | Bertrand Fontaine |
collection | DOAJ |
description | The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorising computation over frequency channels, which are implemented in Brian Hears, a library for the spiking neural network simulator package Brian. This approach allows us to use high-level programming languages such as Python, as the cost of interpretation becomes negligible. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelised using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. |
first_indexed | 2024-12-12T08:43:42Z |
format | Article |
id | doaj.art-cde08390a0424a1a90bbef8b6587eb36 |
institution | Directory Open Access Journal |
issn | 1662-5196 |
language | English |
last_indexed | 2024-12-12T08:43:42Z |
publishDate | 2011-07-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neuroinformatics |
spelling | doaj.art-cde08390a0424a1a90bbef8b6587eb36 | 2022-12-22T00:30:40Z | eng | Frontiers Media S.A. | Frontiers in Neuroinformatics | 1662-5196 | 2011-07-01 | vol. 5 | 10.3389/fninf.2011.00009 | Brian hears: online auditory processing using vectorisation over channels | Bertrand Fontaine, Dan F. M. Goodman, Victor Benichoux, Romain Brette (Université Paris Descartes; Ecole Normale Supérieure) | http://journal.frontiersin.org/Journal/10.3389/fninf.2011.00009/full | brian; gpu; python; auditory filter; vectorisation |
spellingShingle | Bertrand Fontaine Dan F. M. Goodman Victor Benichoux Romain Brette Brian hears: online auditory processing using vectorisation over channels Frontiers in Neuroinformatics brian; gpu; python; auditory filter; vectorisation |
title | Brian hears: online auditory processing using vectorisation over channels |
title_full | Brian hears: online auditory processing using vectorisation over channels |
title_fullStr | Brian hears: online auditory processing using vectorisation over channels |
title_full_unstemmed | Brian hears: online auditory processing using vectorisation over channels |
title_short | Brian hears: online auditory processing using vectorisation over channels |
title_sort | brian hears online auditory processing using vectorisation over channels |
topic | brian; gpu; python; auditory filter; vectorisation |
url | http://journal.frontiersin.org/Journal/10.3389/fninf.2011.00009/full |
work_keys_str_mv | AT bertrandfontaine brianhearsonlineauditoryprocessingusingvectorisationoverchannels AT danfmgoodman brianhearsonlineauditoryprocessingusingvectorisationoverchannels AT victorbenichoux brianhearsonlineauditoryprocessingusingvectorisationoverchannels AT romainbrette brianhearsonlineauditoryprocessingusingvectorisationoverchannels |