no code implementations • 30 Jun 2022 • Wei Duan, Zhe Zhang, Yi Yu, Keizo Oyama
Generating melody from lyrics is an interesting yet challenging task in the area of artificial intelligence and music.
no code implementations • 1 Dec 2020 • Donghuo Zeng, Yi Yu, Keizo Oyama
This work presents a music dataset named MusicTM-Dataset, which is used to improve the representation learning ability of different types of cross-modal retrieval (CMR).
no code implementations • 29 Jul 2020 • Donghuo Zeng, Yi Yu, Keizo Oyama
In this paper, we propose an unsupervised generative adversarial alignment representation (UGAAR) model to learn deep discriminative representations shared across three major musical modalities: sheet music, lyrics, and audio, where a deep neural network-based architecture with three branches is jointly trained.
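As a rough illustration of this three-branch adversarial alignment idea, here is a minimal PyTorch sketch; all module names, layer sizes, and the training loop are illustrative assumptions, not the paper's actual UGAAR implementation. A modality discriminator learns to identify which branch a shared-space embedding came from, while the three encoders are trained to fool it, aligning the modality distributions.

```python
# Minimal sketch of three-branch adversarial alignment (assumed details).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """One branch: maps modality-specific features into a shared space."""
    def __init__(self, in_dim, shared_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, shared_dim))
    def forward(self, x):
        return self.net(x)

# Three branches for sheet music, lyrics, and audio (input dims assumed).
enc_sheet, enc_lyrics, enc_audio = Encoder(512), Encoder(300), Encoder(128)

# Discriminator classifies the source modality of a shared embedding.
disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
ce = nn.CrossEntropyLoss()

opt_enc = torch.optim.Adam([*enc_sheet.parameters(), *enc_lyrics.parameters(),
                            *enc_audio.parameters()], lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(x_sheet, x_lyrics, x_audio):
    z = [enc_sheet(x_sheet), enc_lyrics(x_lyrics), enc_audio(x_audio)]
    y = [torch.full((zi.size(0),), i, dtype=torch.long)
         for i, zi in enumerate(z)]

    # 1) Train the discriminator to tell the three modalities apart.
    d_loss = sum(ce(disc(zi.detach()), yi) for zi, yi in zip(z, y))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Train the encoders adversarially, so embeddings become
    #    modality-indistinguishable in the shared space.
    g_loss = -sum(ce(disc(zi), yi) for zi, yi in zip(z, y))
    opt_enc.zero_grad(); g_loss.backward(); opt_enc.step()
```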
no code implementations • 10 Aug 2019 • Donghuo Zeng, Yi Yu, Keizo Oyama
ii) We propose an end-to-end deep model for cross-modal audio-visual learning where S-DCCA is trained to learn the semantic correlation between audio and visual modalities.
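DCCA-style objectives maximize the total correlation between two learned views. The sketch below shows only that generic correlation core in PyTorch, assuming mini-batch covariance estimates with a small regularizer; the supervised and semantic components specific to S-DCCA are not represented here.

```python
import torch

def total_correlation(H1, H2, eps=1e-4):
    """Sum of canonical correlations between two mini-batch views H1, H2
    of shape (batch, dim): the nuclear norm of S11^{-1/2} S12 S22^{-1/2}."""
    n = H1.size(0)
    H1 = H1 - H1.mean(0, keepdim=True)  # center each view
    H2 = H2 - H2.mean(0, keepdim=True)
    S11 = H1.T @ H1 / (n - 1) + eps * torch.eye(H1.size(1))
    S22 = H2.T @ H2 / (n - 1) + eps * torch.eye(H2.size(1))
    S12 = H1.T @ H2 / (n - 1)

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = torch.linalg.eigh(S)
        return V @ torch.diag(w.clamp_min(eps).rsqrt()) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return torch.linalg.svdvals(T).sum()

# Training would maximize correlation between the audio and visual
# branch outputs, e.g. loss = -total_correlation(audio_emb, visual_emb).
```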
no code implementations • 10 Aug 2019 • Haoting Liang, Donghuo Zeng, Yi Yu, Keizo Oyama
Many online music services have emerged in recent years, making effective music recommendation systems desirable.
2 code implementations • 10 Aug 2019 • Donghuo Zeng, Yi Yu, Keizo Oyama
In particular, two significant contributions are made: i) a better representation can be generated by constructing a deep triplet neural network with triplet loss, whose optimal projections maximize correlation in the shared subspace.
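Triplet loss itself is standard; a minimal PyTorch sketch of a cross-modal variant might look like the following. The margin value, L2 normalization, and squared-distance choice are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss on L2-normalized embeddings: pull the matching
    pair together, push the mismatched pair at least `margin` apart."""
    anchor, positive, negative = (F.normalize(t, dim=1)
                                  for t in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(1)  # distance to the match
    d_neg = (anchor - negative).pow(2).sum(1)  # distance to a mismatch
    return F.relu(d_pos - d_neg + margin).mean()

# Usage sketch: audio anchors with matching / non-matching visual embeddings.
audio = torch.randn(8, 128)
vis_pos, vis_neg = torch.randn(8, 128), torch.randn(8, 128)
loss = cross_modal_triplet_loss(audio, vis_pos, vis_neg)
```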