EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks

31 Jan 2024  ·  Shijia Liao, Shiyi Lan, Arun George Zachariah ·

The advent of Large Models marks a new era in machine learning, significantly outperforming smaller models by leveraging vast datasets to capture and synthesize complex patterns. Despite these advancements, the exploration of scaling, especially in the audio generation domain, remains limited: previous efforts did not extend into the high-fidelity (HiFi) 44.1kHz domain, suffered from both spectral discontinuities and blurriness in the high-frequency range, and lacked robustness on out-of-domain data. These limitations restrict the applicability of models to diverse use cases, including music and singing generation. Our work introduces Enhanced Various Audio Generation via Scalable Generative Adversarial Networks (EVA-GAN), which yields significant improvements over the previous state-of-the-art in spectral and high-frequency reconstruction and in robustness on out-of-domain data, enabling the generation of HiFi audio. EVA-GAN employs an extensive dataset of 36,000 hours of 44.1kHz audio, a context-aware module, and a Human-In-The-Loop artifact measurement toolkit, and scales the model to approximately 200 million parameters. Demonstrations of our work are available at https://double-blind-eva-gan.cc.


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Speech Synthesis | LibriTTS | EVA-GAN-big | PESQ | 4.3536 | # 1 |
| Speech Synthesis | LibriTTS | EVA-GAN-big | Periodicity | 0.0751 | # 1 |
| Speech Synthesis | LibriTTS | EVA-GAN-big | V/UV F1 | 0.9745 | # 1 |
| Speech Synthesis | LibriTTS | EVA-GAN-big | M-STFT | 0.7982 | # 2 |
| Speech Synthesis | LibriTTS | EVA-GAN-base | PESQ | 4.0330 | # 4 |
| Speech Synthesis | LibriTTS | EVA-GAN-base | Periodicity | 0.0942 | # 4 |
| Speech Synthesis | LibriTTS | EVA-GAN-base | V/UV F1 | 0.9658 | # 2 |
| Speech Synthesis | LibriTTS | EVA-GAN-base | M-STFT | 0.9485 | # 6 |
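The M-STFT metric in the table measures spectral distance between generated and reference waveforms across several FFT resolutions. The sketch below is an illustrative approximation of such a multi-resolution STFT distance (spectral convergence plus log-magnitude L1, a common formulation), not the paper's exact evaluation code; the resolution choices are assumptions.

```python
import numpy as np
from scipy.signal import stft

def stft_mag(x, n_fft, hop):
    """Magnitude STFT of a 1-D waveform."""
    _, _, Z = stft(x, nperseg=n_fft, noverlap=n_fft - hop, padded=True)
    return np.abs(Z)

def mstft_distance(ref, gen, resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average spectral-convergence + log-magnitude L1 over several FFT sizes.

    Illustrative only: resolutions and weighting are assumed, not taken
    from the EVA-GAN paper.
    """
    total = 0.0
    for n_fft, hop in resolutions:
        R, G = stft_mag(ref, n_fft, hop), stft_mag(gen, n_fft, hop)
        sc = np.linalg.norm(R - G) / (np.linalg.norm(R) + 1e-8)   # spectral convergence
        lm = np.mean(np.abs(np.log(R + 1e-8) - np.log(G + 1e-8)))  # log-magnitude L1
        total += sc + lm
    return total / len(resolutions)

# A waveform compared with itself scores (near) zero; a distorted copy does not.
t = np.linspace(0, 1, 16000, endpoint=False)
ref = np.sin(2 * np.pi * 440 * t)
print(mstft_distance(ref, ref))            # ≈ 0.0
print(mstft_distance(ref, 0.5 * ref) > 0)  # True
```

Lower values indicate closer spectral match, which is why EVA-GAN-big's 0.7982 outranks EVA-GAN-base's 0.9485 on this metric.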

Methods


No methods listed for this paper.