Conditioning and Sampling in Variational Diffusion Models for Speech Super-Resolution

Published in 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

Recommended citation: Chin-Yun Yu, Sung-Lin Yeh, György Fazekas, and Hao Tang, "Conditioning and Sampling in Variational Diffusion Models for Speech Super-Resolution", IEEE International Conference on Acoustics, Speech and Signal Processing, June 2023. https://ieeexplore.ieee.org/abstract/document/10095103

Recently, diffusion models (DMs) have been increasingly used in audio processing tasks, including speech super-resolution (SR), which aims to restore high-frequency content given low-resolution speech utterances. This is commonly achieved by conditioning the noise predictor network on the low-resolution audio. In this paper, we propose a novel sampling algorithm that communicates the low-resolution audio information through the reverse sampling process of DMs. The proposed method is a drop-in replacement for the vanilla sampling process and significantly improves the performance of existing works. Moreover, by coupling the proposed sampling method with an unconditional DM, i.e., a DM with no auxiliary inputs to its noise predictor, we can generalize it to a wide range of SR setups. With this novel formulation, we also attain state-of-the-art results on the VCTK Multi-Speaker benchmark.
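To give a flavor of the idea, the sketch below shows one common way of injecting low-resolution information into the reverse sampling loop of an unconditional DM: at each denoising step, the known low-frequency band is taken from a forward-diffused copy of the low-resolution input, while the model fills in the missing high frequencies. This is an illustrative inpainting-style scheme, not the exact algorithm from the paper; the function names (`lowpass`, `sr_reverse_sampling`), the ideal FFT low-pass filter, and the DDPM schedule are all assumptions for the example.

```python
import numpy as np

def lowpass(x, keep_bins):
    """Crude ideal low-pass: zero all FFT bins at or above `keep_bins`."""
    X = np.fft.rfft(x)
    X[keep_bins:] = 0.0
    return np.fft.irfft(X, n=len(x))

def sr_reverse_sampling(noise_predictor, x_lr, betas, keep_bins, rng):
    """Hypothetical reverse-sampling sketch for speech SR.

    `noise_predictor(x, t)` can be any trained noise model, even an
    unconditional one; the low-resolution audio `x_lr` is injected
    through the sampling loop rather than through the network inputs.
    """
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    T = len(betas)
    x = rng.standard_normal(len(x_lr))  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = noise_predictor(x, t)
        # standard DDPM posterior mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(len(x))
            # forward-diffuse the low-resolution reference to the
            # noise level of step t-1 so both signals are comparable
            x_lr_t = (np.sqrt(alpha_bars[t - 1]) * x_lr
                      + np.sqrt(1.0 - alpha_bars[t - 1]) * rng.standard_normal(len(x)))
        else:
            x_lr_t = x_lr
        # keep the known low band from the reference; let the model
        # generate only the missing high-frequency content
        x = lowpass(x_lr_t, keep_bins) + (x - lowpass(x, keep_bins))
    return x
```

Because the conditioning happens purely inside the sampling loop, the same trained model can in principle be reused across different input sampling rates by changing `keep_bins`, which is the kind of flexibility the abstract attributes to coupling the sampler with an unconditional DM.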