no code implementations • 10 Apr 2024 • Taegyun Kwon, Dasaem Jeong, Juhan Nam
To this end, we propose novel architectures for convolutional recurrent neural networks, redesigning an existing autoregressive piano transcription model.
no code implementations • 17 Jan 2024 • Jiyun Park, Sangeon Yong, Taegyun Kwon, Juhan Nam
The goal of real-time lyrics alignment is to take live singing audio as input and to pinpoint the exact position within the given lyrics on the fly.
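As a rough illustration of the on-the-fly alignment problem, here is a toy greedy online aligner. This is not the paper's method: the cost function, the step limit, and the feature representation are all hypothetical assumptions, and a real system would use online DTW over acoustic features.

```python
import numpy as np

def online_align(cost_fn, n_ref, stream, max_step=2):
    """Toy greedy online alignment.

    For each incoming audio feature, advance the reference (lyrics)
    position by the local step (0..max_step) with the lowest cost.
    cost_fn(ref_idx, frame) is assumed to return a distance between
    reference position ref_idx and the incoming frame.
    """
    pos = 0
    path = []
    for frame in stream:
        # candidate steps, clipped so we never run past the lyrics end
        steps = range(0, min(max_step, n_ref - 1 - pos) + 1)
        pos += min(steps, key=lambda s: cost_fn(pos + s, frame))
        path.append(pos)
    return path

# Hypothetical 1-D "features": reference positions 0..3, noisy stream.
ref = [0.0, 1.0, 2.0, 3.0]
path = online_align(lambda i, f: abs(ref[i] - f), len(ref),
                    [0.1, 1.2, 1.9, 3.1])
print(path)  # → [0, 1, 2, 3]
```

The greedy step keeps latency constant per frame, which is the key constraint the abstract's "on the fly" requirement imposes; real online DTW additionally bounds how far the path can drift.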
1 code implementation • 14 Nov 2022 • Eunjin Choi, Yoonjin Chung, Seolhee Lee, JongIk Jeon, Taegyun Kwon, Juhan Nam
In addition, they generally lack high-level annotations such as emotion tags.
1 code implementation • 9 Feb 2021 • Dasaem Jeong, Seungheon Doh, Taegyun Kwon
The goal of this paper is to generate a visually appealing video that responds to music, using a neural network so that each frame of the video reflects the musical characteristics of the corresponding audio clip.
no code implementations • 2 Oct 2020 • Taegyun Kwon, Dasaem Jeong, Juhan Nam
Recent advances in polyphonic piano transcription have been made primarily through deliberate design of neural network architectures that detect different note states, such as onset or sustain, and model the temporal evolution of those states.
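To illustrate the note-state idea, here is a minimal sketch, not the paper's model, of how frame-wise onset and sustain probabilities for a single pitch might be decoded into note events. The 0.5 threshold and the toy probability arrays are illustrative assumptions.

```python
import numpy as np

def decode_notes(onset_prob, frame_prob, threshold=0.5):
    """Decode (onset, offset) frame indices for one pitch.

    A note begins at a frame where the onset probability crosses the
    threshold and is sustained while the frame (sustain) probability
    stays above it. Threshold is an illustrative assumption.
    """
    onsets = onset_prob >= threshold
    frames = frame_prob >= threshold
    notes = []
    t, T = 0, len(onset_prob)
    while t < T:
        if onsets[t] and (t == 0 or not onsets[t - 1]):
            # note onset: extend while the sustain state stays active
            end = t
            while end + 1 < T and frames[end + 1]:
                end += 1
            notes.append((t, end))
            t = end + 1
        else:
            t += 1
    return notes

onset = np.array([0.1, 0.9, 0.2, 0.1, 0.8, 0.1])
frame = np.array([0.1, 0.9, 0.8, 0.2, 0.9, 0.9])
print(decode_notes(onset, frame))  # → [(1, 2), (4, 5)]
```

Separating onset detection from sustain tracking is what lets such decoders distinguish a re-struck note from one merely held, which a single activation threshold cannot do.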
1 code implementation • ISMIR 2019 • Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam
In this paper, we present our application of deep neural networks to modeling piano performance, imitating the expressive control of tempo, dynamics, articulation, and pedaling by pianists.
1 code implementation • ICML 2019 • Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Juhan Nam
A music score is often handled as one-dimensional sequential data.