Parallel Synthesis for Autoregressive Speech Generation

Authors: Po-chun Hsu, Da-rong Liu, Andy T. Liu, Hung-yi Lee
Abstract: Autoregressive models have achieved outstanding performance in neural speech synthesis tasks such as text-to-speech and voice conversion. A model with this architecture predicts a sample at each time step conditioned on the samples at previous time steps. Although it can generate highly natural human speech, this iterative generation inevitably makes the synthesis time proportional to the utterance length, leading to low efficiency. Many works have therefore aimed to generate the whole speech sequence in parallel, proposing GAN-based, flow-based, and score-based models. This paper proposes a new approach to autoregressive generation. Instead of iteratively predicting samples along the time axis, the proposed model performs frequency-wise autoregressive generation (FAR) and bit-wise autoregressive generation (BAR) to synthesize speech. In FAR, a speech utterance is first split into frequency subbands, and the model generates each subband conditioned on the previously generated one; full-band speech is then reconstructed from the generated subbands with a synthesis filter bank. Similarly, in BAR, an 8-bit quantized signal is generated iteratively, starting from the first bit. By redesigning the autoregressive computation to run in domains other than the time domain, the number of iterations is no longer proportional to the utterance length but to the number of subbands/bits, so inference efficiency increases significantly. In addition, a post-filter is employed to sample audio signals from the output posteriors, with a training objective designed around the characteristics of the proposed autoregressive methods. Experimental results show that the proposed model synthesizes speech faster than real time without GPU acceleration. Compared with baseline autoregressive and non-autoregressive models, it achieves better MOS and generalizes well when synthesizing 44 kHz speech or utterances from unseen speakers.
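
To make the FAR/BAR iteration structure concrete, below is a minimal Python sketch of the generation loop. It is not the paper's implementation: `fake_network` is a hypothetical stand-in for the neural model, `N_SUBBANDS = 4` is an assumed value, the MSB-first bit ordering is an assumption ("from the first bit"), and the final summation is a placeholder for the synthesis filter bank the paper actually uses. The point it illustrates is that the number of network calls is N_SUBBANDS * N_BITS, independent of the utterance length.

```python
import numpy as np

N_SUBBANDS = 4   # assumption: number of frequency subbands, not fixed by the abstract
N_BITS = 8       # the paper generates 8-bit quantized signals

rng = np.random.default_rng()

def fake_network(prev_subband, prev_bit_planes, cond):
    """Hypothetical stand-in for the neural model. It would return one bit
    plane (a 0/1 value per sample) for the current subband, conditioned on
    the previously generated subband and the bit planes generated so far."""
    return rng.integers(0, 2, size=cond.shape, dtype=np.uint8)

def generate(cond):
    """FAR x BAR generation loop. The number of network calls is
    N_SUBBANDS * N_BITS, independent of the utterance length len(cond)."""
    subbands = []
    prev_subband = np.zeros(cond.shape, dtype=np.float32)
    for _ in range(N_SUBBANDS):            # FAR: iterate over frequency subbands
        bit_planes = []
        for _ in range(N_BITS):            # BAR: iterate over bit planes
            bit_planes.append(fake_network(prev_subband, bit_planes, cond))
        # Reassemble the 8-bit quantized subband (assuming the MSB comes first).
        q = np.zeros(cond.shape, dtype=np.uint8)
        for b, plane in enumerate(bit_planes):
            q |= plane << (N_BITS - 1 - b)
        prev_subband = q.astype(np.float32) / 127.5 - 1.0  # dequantize to [-1, 1]
        subbands.append(prev_subband)
    # The paper reconstructs full-band speech with a synthesis filter bank;
    # plain summation is only a placeholder here.
    return np.sum(subbands, axis=0)

if __name__ == "__main__":
    cond = np.zeros(16000, dtype=np.float32)  # dummy conditioning, 1 s at 16 kHz
    audio = generate(cond)
    print(audio.shape)  # (16000,) after only N_SUBBANDS * N_BITS network calls
```

Under these assumptions, doubling the utterance length doubles the work per network call but leaves the call count unchanged, which is why synthesis can run faster than real time even without GPU acceleration.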

Audio Samples

LJ Speech

Src
WaveNet
WaveRNN
Parallel WaveGAN
Proposed
Proposed (g-5)
Proposed (g-10)

TTS

WaveNet
WaveRNN
Parallel WaveGAN
Proposed
Proposed (g-5)
Proposed (g-10)

CMU ARCTIC

Src
WaveNet
WaveRNN
Parallel WaveGAN
Proposed

Internal 44 kHz Mandarin Speech Corpus

Src
WaveNet
WaveRNN
Parallel WaveGAN
Proposed

Ablation Study

Src
Proposed
w/o PF
w/o PF, BAR-2
w/o PF, w/o BAR
w/o PF, INV