Auditory Models for Formant Frequency Discrimination of Vowel Sounds
As formant frequencies of vowel sounds are critical acoustic cues for vowel perception, human listeners need to be sensitive to formant frequency change. Numerous studies have found that formant frequency discrimination is affected by factors such as formant frequency, speech level, and fundamental frequency.
Main Authors: | Can Xu, Chang Liu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-07-01 |
Series: | Information |
Subjects: | auditory model; speech processing; vowel discrimination |
Online Access: | https://www.mdpi.com/2078-2489/14/8/429 |
_version_ | 1827729542910836736 |
author | Can Xu Chang Liu |
author_facet | Can Xu Chang Liu |
author_sort | Can Xu |
collection | DOAJ |
description | As formant frequencies of vowel sounds are critical acoustic cues for vowel perception, human listeners need to be sensitive to formant frequency change. Numerous studies have found that formant frequency discrimination is affected by factors such as formant frequency, speech level, and fundamental frequency. Theoretically, to perceive a formant frequency change, human listeners with normal hearing may need a relatively constant change in the excitation and loudness patterns, and this internal change in auditory processing is independent of vowel category. Thus, the present study examined whether such metrics could explain the effects of formant frequency and speech level on formant frequency discrimination thresholds. Moreover, a simulation model based on the auditory excitation-pattern and loudness-pattern models was developed to simulate the auditory processing of vowel signals and predict thresholds of vowel formant discrimination. The results showed that predicted thresholds based on auditory metrics incorporating auditory excitation or loudness patterns near the target formant had high correlations and low root-mean-square errors with human behavioral thresholds in terms of the effects of formant frequency and speech level. In addition, the simulation model, which particularly simulates the spectral processing of acoustic signals in the human auditory system, may be used to evaluate the auditory perception of speech signals for listeners with hearing impairments and/or different language backgrounds. |
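The excitation-pattern metric the abstract describes can be sketched in a few lines: compute an excitation pattern for a reference and a formant-shifted vowel spectrum, then take the maximum excitation-level change near the target formant. The sketch below is a minimal illustration only, assuming a single rounded-exponential (roex) auditory-filter shape with the Glasberg and Moore (1990) ERB approximation and a toy one-formant harmonic spectrum; the function names, filter simplifications, and parameter values are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def erb(f):
    # Equivalent rectangular bandwidth (Hz) of the auditory filter at f (Hz),
    # Glasberg & Moore (1990) approximation.
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def excitation_pattern(freqs, power, centers):
    # Sum stimulus power through a rounded-exponential (roex) filter centered
    # at each frequency in `centers`; returns excitation level in dB.
    levels = np.empty(len(centers))
    for i, fc in enumerate(centers):
        p = 4.0 * fc / erb(fc)                  # filter slope parameter
        g = np.abs(freqs - fc) / fc             # normalized deviation
        w = (1.0 + p * g) * np.exp(-p * g)      # roex(p) weighting
        levels[i] = 10.0 * np.log10(np.sum(w * power) + 1e-12)
    return levels

def vowel_spectrum(f0, formant, n_harm=40, bw=80.0):
    # Toy "vowel": harmonic complex with one resonance peak at `formant`.
    freqs = f0 * np.arange(1, n_harm + 1)
    power = 1.0 / (1.0 + ((freqs - formant) / bw) ** 2)
    return freqs, power

def formant_metric(f0, f_ref, f_test, centers):
    # Max excitation-level change (dB) near the target formant: the kind of
    # internal metric hypothesized to stay roughly constant at threshold.
    fr, pr = vowel_spectrum(f0, f_ref)
    ft, pt = vowel_spectrum(f0, f_test)
    e_ref = excitation_pattern(fr, pr, centers)
    e_test = excitation_pattern(ft, pt, centers)
    near = np.abs(centers - f_ref) < 2.0 * erb(f_ref)  # region around formant
    return np.max(np.abs(e_ref[near] - e_test[near]))

centers = np.linspace(200.0, 4000.0, 200)
delta = formant_metric(f0=100.0, f_ref=500.0, f_test=510.0, centers=centers)
```

Under this scheme, the predicted discrimination threshold is the formant shift at which the metric reaches a criterion value, so larger shifts should yield a larger metric; a loudness-pattern variant would apply a compressive loudness transform before taking the difference.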
first_indexed | 2024-03-10T23:51:12Z |
format | Article |
id | doaj.art-2d2a65eb54144b1a849221ce5631e6b1 |
institution | Directory Open Access Journal |
issn | 2078-2489 |
language | English |
last_indexed | 2024-03-10T23:51:12Z |
publishDate | 2023-07-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
spelling | doaj.art-2d2a65eb54144b1a849221ce5631e6b1 2023-11-19T01:34:30Z eng MDPI AG Information 2078-2489 2023-07-01 14 8 429 10.3390/info14080429 Auditory Models for Formant Frequency Discrimination of Vowel Sounds Can Xu (Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX 78712, USA) Chang Liu (Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX 78712, USA) https://www.mdpi.com/2078-2489/14/8/429 auditory model; speech processing; vowel discrimination |
spellingShingle | Can Xu Chang Liu Auditory Models for Formant Frequency Discrimination of Vowel Sounds Information auditory model speech processing vowel discrimination |
title | Auditory Models for Formant Frequency Discrimination of Vowel Sounds |
title_full | Auditory Models for Formant Frequency Discrimination of Vowel Sounds |
title_fullStr | Auditory Models for Formant Frequency Discrimination of Vowel Sounds |
title_full_unstemmed | Auditory Models for Formant Frequency Discrimination of Vowel Sounds |
title_short | Auditory Models for Formant Frequency Discrimination of Vowel Sounds |
title_sort | auditory models for formant frequency discrimination of vowel sounds |
topic | auditory model; speech processing; vowel discrimination
url | https://www.mdpi.com/2078-2489/14/8/429 |
work_keys_str_mv | AT canxu auditorymodelsforformantfrequencydiscriminationofvowelsounds AT changliu auditorymodelsforformantfrequencydiscriminationofvowelsounds |