no code implementations • 16 Jan 2024 • Jiu Feng, Mehmet Hamza Erol, Joon Son Chung, Arda Senocak
We introduce multi-phase training of audio spectrogram transformers, connecting the seminal coarse-to-fine idea with transformer models.
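The paper's exact procedure is not shown here, but a coarse-to-fine multi-phase schedule can be roughly illustrated as training first on coarse (large) patches and then on progressively finer ones. The phase patch sizes and spectrogram shape below are assumed values for illustration, not taken from the paper:

```python
import numpy as np

def patch_count(spec_shape, patch):
    """Number of non-overlapping square patches covering a spectrogram."""
    f, t = spec_shape
    return (f // patch) * (t // patch)

# Hypothetical multi-phase, coarse-to-fine schedule: each phase halves the
# patch size, so later phases see more (finer-grained) tokens.
phases = [32, 16, 8]           # assumed patch sizes per phase
spec = (128, 1024)             # assumed input: 128 mel bins x 1024 frames

for p in phases:
    n_tokens = patch_count(spec, p)
    # ...run one training phase at this token granularity...
```

The point of the sketch is only the token-count progression: coarse phases are cheap (few tokens), fine phases are expensive (many tokens), which is what makes staging them attractive.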
no code implementations • 18 Jul 2023 • Jiu Feng, Mehmet Hamza Erol, Joon Son Chung, Arda Senocak
To overcome this limitation, this paper proposes FlexiAST, a training procedure that provides flexibility to standard AST models without architectural changes, allowing them to work with various patch sizes at the inference stage.
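One common way to make a fixed-architecture model work across patch sizes (used, for example, in flexible-patch vision transformers) is to randomize the patch size during training and resize the patch-embedding kernel to match. The resizing below is a deliberately naive stand-in (mean-pooling to shrink, nearest-neighbour repeat to grow), not the paper's actual method:

```python
import numpy as np

def resize_kernel(w, new):
    """Naively resize a square patch-embedding kernel to side `new`.
    A crude stand-in for the learned/pseudo-inverse resizing used in
    flexible-patch transformers; only integer-multiple sizes supported."""
    old = w.shape[0]
    if new == old:
        return w
    if old % new == 0:                       # shrink: mean-pool k x k blocks
        k = old // new
        return w.reshape(new, k, new, k).mean(axis=(1, 3))
    if new % old == 0:                       # grow: nearest-neighbour repeat
        k = new // old
        return np.repeat(np.repeat(w, k, axis=0), k, axis=1)
    raise ValueError("sizes must be integer multiples in this sketch")

rng = np.random.default_rng(0)
w16 = rng.standard_normal((16, 16))          # base 16x16 embedding kernel
for p in (8, 16, 32):                        # patch sizes sampled in training
    wp = resize_kernel(w16, p)               # one kernel serves all sizes
```

Training with such sampled patch sizes is what lets a single model accept any of them at inference, which is the flexibility the abstract describes.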
2 code implementations • 22 Jul 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.