Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing

Purpose: Improved speech recognition in binaurally combined acoustic–electric stimulation (otherwise known as bimodal hearing) could arise when listeners integrate speech cues from the acoustic and electric hearing. The aims of this study were (a) to identify speech cues extracted in electric hearing and residual acoustic hearing in the low-frequency region and (b) to investigate cochlear implant (CI) users' ability to integrate speech cues across frequencies. Method: Normal-hearing (NH) and CI subjects participated in consonant and vowel identification tasks. Each subject was tested in 3 listening conditions: CI alone (vocoder speech for NH), hearing aid (HA) alone (low-pass filtered speech for NH), and both. Integration ability for each subject was evaluated using a model of optimal integration—the PreLabeling integration model (Braida, 1991). Results: Only a few CI listeners demonstrated bimodal benefit for phoneme identification in quiet. Speech cues extracted from the CI and the HA were highly redundant for consonants but were complementary for vowels. CI listeners also exhibited reduced integration ability for both consonant and vowel identification compared with their NH counterparts. Conclusion: These findings suggest that reduced bimodal benefits in CI listeners are due to insufficient complementary speech cues across ears, a decrease in integration ability, or both.


Bibliographic Details
Main Authors: Kong, Ying-Yee; Braida, Louis D.
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Research Laboratory of Electronics
Format: Article
Language: en_US
Published: American Speech-Language-Hearing Association, 2011 (added to the MIT repository in 2014)
Online Access: http://hdl.handle.net/1721.1/86049
ORCID: https://orcid.org/0000-0003-2538-9991
Type: Journal Article (http://purl.org/eprint/type/JournalArticle)
Citation: Kong, Y.-Y., and L. D. Braida. “Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing.” Journal of Speech, Language, and Hearing Research 54, no. 3 (June 1, 2011): 959–980.
Journal: Journal of Speech, Language, and Hearing Research
DOI: http://dx.doi.org/10.1044/1092-4388(2010/10-0197)
ISSN: 1092-4388, 1558-9102
Sponsorship: National Organization for Hearing Research; National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R03 DC009684-01); National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R01 DC007152-02)
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
File Format: application/pdf
Source: PMC
Dates: issued 2011-06; added to repository 2014-04-07; 2010-07