Advanced Generative Adversarial Networks (GANs) are remarkably effective at generating intelligible audio from a random latent vector. In this paper, we examine the task of recovering the latent vector of both synthesized and real audio. We propose an auto-encoder-inspired technique to train a deep residual neural network to project audio synthesized by WaveGAN into the corresponding latent space with near-identical reconstruction performance. To account for the lack of an original latent vector for real audio, we optimize the residual network on a multi-level feature loss between the real audio samples and the audio reconstructed from the predicted latent vectors. In the case of synthesized audio, the Mean Squared Error (MSE) between the ground-truth and recovered latent vectors is minimized as well. We further investigate the audio reconstruction performance when several gradient optimization steps are applied to the predicted latent vector. Through our auto-encoder-inspired method of training on real and synthesized audio, we are able to predict a latent vector that yields a reasonable reconstruction of real audio. After training, we examine the latent representations of real and synthesized audio files. Our analysis reveals distinct latent representational patterns for real and synthesized audio, which can be used for deepfake audio detection. Although we evaluate our method on WaveGAN, the proposed method is general and can be applied to other GANs.
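The gradient-refinement step mentioned above can be illustrated with a minimal toy sketch. Here a linear map stands in for the WaveGAN generator, and the predicted latent vector is refined by gradient descent on the reconstruction MSE; all shapes, the step size, and the iteration count are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical toy setup: a linear "generator" stands in for WaveGAN so the
# latent-refinement step can be shown end to end.
rng = np.random.default_rng(0)
latent_dim, audio_dim = 8, 64
W = rng.standard_normal((audio_dim, latent_dim))  # stand-in generator weights

def generate(z):
    # Stand-in for G(z): maps a latent vector to an "audio" sample.
    return W @ z

# Target audio produced from a ground-truth latent vector.
z_true = rng.standard_normal(latent_dim)
x_real = generate(z_true)

# An imperfect initial prediction, as a trained encoder might provide.
z_pred = z_true + 0.5 * rng.standard_normal(latent_dim)

# Gradient refinement: minimize the reconstruction MSE ||G(z) - x||^2 / n
# with respect to z. For this linear G the gradient is 2 W^T (G(z) - x) / n.
lr = 0.05
for _ in range(200):
    residual = generate(z_pred) - x_real
    grad = 2.0 * W.T @ residual / audio_dim
    z_pred -= lr * grad

# After refinement, the recovered latent vector is close to the ground truth.
mse_latent = float(np.mean((z_pred - z_true) ** 2))
```

In a real setting, `W @ z` would be replaced by a differentiable forward pass through the generator, with the gradient supplied by automatic differentiation rather than the closed form used here.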
Article ID: 2021S11
Publisher: Canadian Artificial Intelligence Association