An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction
The extraction of a target speaker from mixtures of different speakers has attracted extensive amounts of attention and research. Previous studies have proposed several methods, such as SpeakerBeam, to tackle this speech extraction problem using clean speech from the target speaker to provide information.
Main Authors: | Lijiang Chen, Zhendong Mo, Jie Ren, Chunfeng Cui, Qi Zhao |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-12-01 |
Series: | Applied Sciences |
Subjects: | speech extraction; SpeakerBeam; electroglottograph; pre-processing |
Online Access: | https://www.mdpi.com/2076-3417/13/1/469 |
_version_ | 1797626256122970112 |
---|---|
author | Lijiang Chen, Zhendong Mo, Jie Ren, Chunfeng Cui, Qi Zhao |
author_facet | Lijiang Chen, Zhendong Mo, Jie Ren, Chunfeng Cui, Qi Zhao |
author_sort | Lijiang Chen |
collection | DOAJ |
description | The extraction of a target speaker from mixtures of different speakers has attracted extensive attention and research. Previous studies have proposed several methods, such as SpeakerBeam, to tackle this speech extraction problem using clean speech from the target speaker to provide information. However, clean speech cannot be obtained immediately in most cases. In this study, we addressed this problem by extracting features from the electroglottographs (EGGs) of target speakers. An EGG is a laryngeal function detection technology that can detect the impedance and condition of the vocal cords. Since EGGs have excellent anti-noise performance due to their collection method, they can be obtained in rather noisy environments. In order to obtain clean speech from target speakers out of mixtures of different speakers, we utilized deep learning methods and used EGG signals as additional information to extract the target speaker. In this way, we could extract the target speaker from mixtures of different speakers without needing clean speech from the target speakers. According to the characteristics of the EGG signals, we developed an EGG_auxiliary network to train a speaker extraction model under the assumption that EGG signals carry information about speech signals. Additionally, we took the correlations between EGGs and speech signals in silent and unvoiced segments into consideration to develop a new network involving EGG preprocessing. We achieved improvements in the scale-invariant signal-to-distortion ratio improvement (SISDRi) of 0.89 dB on the Chinese Dual-Mode Emotional Speech Database (CDESD) and 1.41 dB on the EMO-DB dataset. In addition, our methods mitigated the poor performance observed with same-gender target speakers, narrowed the gap between same-gender and different-gender conditions, and reduced the loss of precision under low-SNR circumstances. |
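The abstract reports gains in SI-SDRi, the improvement in scale-invariant signal-to-distortion ratio of the extracted signal over the unprocessed mixture. As a reference for readers of this record, here is a minimal sketch of how that metric is commonly computed; the function names and the zero-mean step are illustrative assumptions, not code from the paper.

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant signal-to-distortion ratio (SI-SDR) in dB."""
    # Remove DC offset, as is common before computing SI-SDR.
    estimate = estimate - np.mean(estimate)
    reference = reference - np.mean(reference)
    # Project the estimate onto the reference to get the scaled target component.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(noise, noise))

def si_sdr_improvement(estimate, mixture, reference):
    """SI-SDRi: gain of the extracted estimate over the unprocessed mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```

Because the metric is scale-invariant, rescaling the estimate leaves it unchanged; a positive SI-SDRi means the extraction network brought the signal closer to the target than the raw mixture was.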
first_indexed | 2024-03-11T10:07:47Z |
format | Article |
id | doaj.art-583e15cd40814dc18e6b5f868fe659cd |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-11T10:07:47Z |
publishDate | 2022-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-583e15cd40814dc18e6b5f868fe659cd 2023-11-16T14:57:27Z eng MDPI AG Applied Sciences 2076-3417 2022-12-01 vol. 13, no. 1, art. 469 10.3390/app13010469 An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction Lijiang Chen, Zhendong Mo, Jie Ren, Chunfeng Cui, Qi Zhao (all: School of Electronic and Information Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China) The extraction of a target speaker from mixtures of different speakers has attracted extensive attention and research. Previous studies have proposed several methods, such as SpeakerBeam, to tackle this speech extraction problem using clean speech from the target speaker to provide information. However, clean speech cannot be obtained immediately in most cases. In this study, we addressed this problem by extracting features from the electroglottographs (EGGs) of target speakers. An EGG is a laryngeal function detection technology that can detect the impedance and condition of the vocal cords. Since EGGs have excellent anti-noise performance due to their collection method, they can be obtained in rather noisy environments. In order to obtain clean speech from target speakers out of mixtures of different speakers, we utilized deep learning methods and used EGG signals as additional information to extract the target speaker. In this way, we could extract the target speaker from mixtures of different speakers without needing clean speech from the target speakers.
According to the characteristics of the EGG signals, we developed an EGG_auxiliary network to train a speaker extraction model under the assumption that EGG signals carry information about speech signals. Additionally, we took the correlations between EGGs and speech signals in silent and unvoiced segments into consideration to develop a new network involving EGG preprocessing. We achieved improvements in the scale-invariant signal-to-distortion ratio improvement (SISDRi) of 0.89 dB on the Chinese Dual-Mode Emotional Speech Database (CDESD) and 1.41 dB on the EMO-DB dataset. In addition, our methods mitigated the poor performance observed with same-gender target speakers, narrowed the gap between same-gender and different-gender conditions, and reduced the loss of precision under low-SNR circumstances. https://www.mdpi.com/2076-3417/13/1/469 speech extraction; SpeakerBeam; electroglottograph; pre-processing |
spellingShingle | Lijiang Chen Zhendong Mo Jie Ren Chunfeng Cui Qi Zhao An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction Applied Sciences speech extraction SpeakerBeam electroglottograph pre-processing |
title | An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction |
title_full | An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction |
title_fullStr | An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction |
title_full_unstemmed | An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction |
title_short | An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction |
title_sort | electroglottograph auxiliary neural network for target speaker extraction |
topic | speech extraction SpeakerBeam electroglottograph pre-processing |
url | https://www.mdpi.com/2076-3417/13/1/469 |
work_keys_str_mv | AT lijiangchen anelectroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT zhendongmo anelectroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT jieren anelectroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT chunfengcui anelectroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT qizhao anelectroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT lijiangchen electroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT zhendongmo electroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT jieren electroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT chunfengcui electroglottographauxiliaryneuralnetworkfortargetspeakerextraction AT qizhao electroglottographauxiliaryneuralnetworkfortargetspeakerextraction |