Non-Parallel Whisper-to-Normal Speaking Style Conversion Using Auxiliary Classifier Variational Autoencoder

Bibliographic Details
Main Authors: Shogo Seki, Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10109017/
Description
Summary: This paper is concerned with non-parallel whisper-to-normal speaking-style conversion (W2N-SC), which converts whispered speech into normal speech without using parallel training data. Most relevant to this task is voice conversion (VC), which converts one speaker’s voice into that of another. However, the W2N-SC task differs from the regular VC task in three main respects. First, unlike normal speech, whispered speech contains little or no pitch information. Second, whispered speech usually has significantly less energy than normal speech and is therefore more susceptible to external noise. Third, in the actual usage scenario of W2N-SC, users may suddenly switch voice modes from whispered to normal speech, or vice versa, meaning that the speaking style of the input speech cannot be assumed in advance. To clarify whether existing VC techniques can successfully handle these task-specific concerns and how they should be modified to better address them, we consider a variational autoencoder (VAE)-based VC method as a baseline and examine what modifications to this method would be effective for the current task. Specifically, we study the effects of 1) a self-supervised training scheme called filling-in-frames (FIF); 2) data augmentation (DA) using noisy speech samples; and 3) an architecture that allows for any-to-many conversions. Through experimental evaluation on the W2N-SC and speaker conversion tasks, we confirmed that, especially in the W2N-SC task, the version incorporating the above modifications works better than the baseline VC model applied as is.
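
Of the three modifications studied, the first two lend themselves to a brief illustration. The NumPy sketch below shows a generic form of filling-in-frames masking and of noisy-speech data augmentation; the function names, mask length, and SNR convention are illustrative assumptions and do not reproduce the paper's actual configuration.

    import numpy as np

    def fill_in_frames_mask(mel, max_mask_len=16, rng=None):
        # Blank out a random contiguous block of frames in a mel-spectrogram
        # (n_frames x n_mels). A converter is then trained to reconstruct the
        # masked frames from the surrounding context, which is the general idea
        # behind FIF-style self-supervision. max_mask_len is a placeholder,
        # not the paper's value.
        if rng is None:
            rng = np.random.default_rng()
        n_frames = mel.shape[0]
        mask_len = int(rng.integers(1, max_mask_len + 1))
        start = int(rng.integers(0, max(1, n_frames - mask_len)))
        masked = mel.copy()
        masked[start:start + mask_len] = 0.0
        mask = np.zeros(n_frames, dtype=bool)
        mask[start:start + mask_len] = True
        # Training would minimize a reconstruction loss restricted to mask=True.
        return masked, mask

    def mix_noise(speech, noise, snr_db, rng=None):
        # Mix a random noise excerpt into clean speech at a target SNR (in dB),
        # a generic form of the noisy-speech data augmentation the abstract
        # mentions. Assumes len(noise) >= len(speech).
        if rng is None:
            rng = np.random.default_rng()
        start = int(rng.integers(0, len(noise) - len(speech) + 1))
        excerpt = noise[start:start + len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(excerpt ** 2) + 1e-12
        scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
        return speech + scale * excerpt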
ISSN: 2169-3536