Population rate-coding predicts correctly that human sound localization depends on sound intensity
Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
Main Authors: | Antje Ihlefeld, Nima Alamatsaz, Robert M Shapley |
---|---|
Format: | Article |
Language: | English |
Published: | eLife Sciences Publications Ltd, 2019-10-01 |
Series: | eLife |
Subjects: | interaural time difference; neural coding; Jeffress model; sound localization; psychometrics; hearing |
Online Access: | https://elifesciences.org/articles/47027 |
author | Antje Ihlefeld Nima Alamatsaz Robert M Shapley |
collection | DOAJ |
description | Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline. |
format | Article |
id | doaj.art-37b12a78ee4d47d499aa17d5184b6650 |
institution | Directory Open Access Journal |
issn | 2050-084X |
language | English |
publishDate | 2019-10-01 |
publisher | eLife Sciences Publications Ltd |
record_format | Article |
series | eLife |
spelling | doaj.art-37b12a78ee4d47d499aa17d5184b6650. eLife Sciences Publications Ltd, eLife (ISSN 2050-084X), 2019-10-01, vol. 8, doi:10.7554/eLife.47027. "Population rate-coding predicts correctly that human sound localization depends on sound intensity." Antje Ihlefeld (https://orcid.org/0000-0001-7185-5848; New Jersey Institute of Technology, Newark, United States), Nima Alamatsaz (https://orcid.org/0000-0003-3374-3663; New Jersey Institute of Technology, Newark, United States; Rutgers University, Newark, United States), Robert M Shapley (New York University, New York, United States). Keywords: interaural time difference; neural coding; Jeffress model; sound localization; psychometrics; hearing. https://elifesciences.org/articles/47027
title | Population rate-coding predicts correctly that human sound localization depends on sound intensity |
topic | interaural time difference neural coding Jeffress model sound localization psychometrics hearing |
url | https://elifesciences.org/articles/47027 |