Towards Robust Neural Vocoding for Speech Generation: A Survey
Authors: Po-chun Hsu, Chun-hsuan Wang, Andy T. Liu, Hung-yi Lee
Abstract: Recently, neural vocoders have been widely used in speech synthesis tasks, including text-to-speech and voice conversion. However, when encountering a data distribution mismatch between training and inference, neural vocoders trained on real data often degrade in voice quality on unseen scenarios. In this paper, we train four common neural vocoders, namely WaveNet, WaveRNN, FFTNet, and Parallel WaveGAN, alternately on five different datasets. To study the robustness of neural vocoders, we evaluate the models using acoustic features from seen/unseen speakers, seen/unseen languages, a text-to-speech model, and a voice conversion model.
We found that speaker variety is much more important than language variety for achieving a universal vocoder. Our experiments show that WaveNet and WaveRNN are more suitable for text-to-speech models, while Parallel WaveGAN is more suitable for voice conversion applications. A large set of subjective MOS results on naturalness for all vocoders is presented for future studies.
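To make the evaluation setup above concrete, below is a minimal sketch of how acoustic features (log-mel spectrograms) might be extracted to condition a neural vocoder. All hyperparameter values (sampling rate, FFT size, hop length, number of mel bands) and the `vocoder.generate` interface are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: extracting log-mel spectrograms as vocoder conditioning
# features. All hyperparameters below are illustrative assumptions and may
# differ from the configuration actually used in the paper.
import librosa
import numpy as np

def extract_logmel(wav_path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    """Load a waveform and return an (n_mels, frames) log-mel spectrogram."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # Clip before taking the log to avoid -inf on silent frames.
    return np.log(np.maximum(mel, 1e-5))

# A vocoder trained on one dataset can then be tested on features from
# seen/unseen speakers or languages simply by swapping the input wav, e.g.:
# features = extract_logmel("unseen_speaker_utterance.wav")
# waveform = vocoder.generate(features)  # hypothetical vocoder interface
```

Evaluating robustness then amounts to holding the trained vocoder fixed and varying only the source of the conditioning features.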
MOS Results & Audio Samples
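The tables below aggregate listener ratings per condition. As a hedged illustration of how such figures are typically computed (not the paper's actual analysis script), a mean opinion score with a 95% confidence interval can be derived from raw 1-5 ratings as follows; the ratings in the example are hypothetical placeholders.

```python
# Illustrative sketch: aggregating raw 1-5 listener ratings into a mean
# opinion score and a 95% confidence interval. The example ratings are
# hypothetical placeholders, not data from the paper.
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Return (mean, half-width of the confidence interval) for 1-5 ratings."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    sem = stats.sem(ratings)  # standard error of the mean
    # t-distribution half-width for the given confidence level.
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(ratings) - 1)
    return mean, half_width

mean, ci = mos_with_ci([4, 5, 3, 4, 4, 5, 3, 4])
print(f"MOS = {mean:.2f} +/- {ci:.2f}")
```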
Robustness to Human Speech
Seen Speakers and Seen Language
Unseen Speakers and Seen Language
Unseen Speakers and Unseen Language
The Influence of Gender
Seen Gender and Seen Language
Unseen Gender and Seen Language
Seen Gender and Unseen Language
Unseen Gender and Unseen Language