Search Results for author: Taegyun Kwon

Found 7 papers, 4 papers with code

Towards Efficient and Real-Time Piano Transcription Using Neural Autoregressive Models

no code implementations · 10 Apr 2024 · Taegyun Kwon, Dasaem Jeong, Juhan Nam

To this end, we propose novel architectures for convolutional recurrent neural networks, redesigning an existing autoregressive piano transcription model.
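The key property of an autoregressive transcription model is that each frame's prediction is conditioned on the model's own output for the previous frame. A toy sketch of that inference loop (all names and weights here are hypothetical stand-ins, not the paper's architecture):

```python
import numpy as np

def autoregressive_decode(features, feedback_weight, n_pitches=88):
    """Toy autoregressive frame decoder: each frame's pitch activations
    are conditioned on the acoustic features AND the previous frame's
    prediction. Real models use a learned recurrent layer instead of
    this fixed projection; this only illustrates the feedback loop."""
    n_frames, n_feat = features.shape
    prev = np.zeros(n_pitches)                    # no notes before frame 0
    out = np.zeros((n_frames, n_pitches))
    proj = np.ones((n_feat, n_pitches)) / n_feat  # stand-in for learned weights
    for t in range(n_frames):
        logits = features[t] @ proj + feedback_weight * prev
        out[t] = 1.0 / (1.0 + np.exp(-logits))    # sigmoid pitch activations
        prev = (out[t] > 0.5).astype(float)       # hard feedback of active notes
    return out

probs = autoregressive_decode(np.random.randn(100, 229), feedback_weight=0.3)
```

The sequential dependency in the loop is exactly what makes naive autoregressive inference slow, which motivates the efficiency redesign the abstract mentions.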

A Real-Time Lyrics Alignment System Using Chroma And Phonetic Features For Classical Vocal Performance

no code implementations · 17 Jan 2024 · Jiyun Park, Sangeon Yong, Taegyun Kwon, Juhan Nam

The goal of real-time lyrics alignment is to take live singing audio as input and to pinpoint the exact position within the given lyrics on the fly.
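Alignment systems of this kind compare compact pitch-class (chroma) features frame by frame. A minimal sketch of folding a magnitude spectrogram into 12 chroma bins (the bin-to-pitch mapping is deliberately simplified and is my own, not the paper's feature extractor):

```python
import numpy as np

def chroma_from_spectrogram(spec, sr=22050, n_fft=2048):
    """Fold a magnitude spectrogram (freq_bins x frames) into 12
    pitch-class bins by mapping each FFT bin to its nearest MIDI
    pitch modulo 12. Simplified: no tuning estimation, no smoothing."""
    n_bins, n_frames = spec.shape
    freqs = np.arange(n_bins) * sr / n_fft        # bin center frequencies (Hz)
    chroma = np.zeros((12, n_frames))
    for b in range(1, n_bins):                    # skip the DC bin (0 Hz)
        midi = 69 + 12 * np.log2(freqs[b] / 440.0)
        chroma[int(round(midi)) % 12] += spec[b]
    norm = chroma.sum(axis=0, keepdims=True)
    return chroma / np.maximum(norm, 1e-9)        # each column sums to ~1
```

Normalized chroma columns can then be matched against reference features with an online variant of dynamic time warping to track the singer's position.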

TräumerAI: Dreaming Music with StyleGAN

1 code implementation · 9 Feb 2021 · Dasaem Jeong, Seungheon Doh, Taegyun Kwon

The goal of this paper is to generate a visually appealing video that responds to music with a neural network, so that each frame of the video reflects the musical characteristics of the corresponding audio clip.

Ranked #1 on Music Auto-Tagging on TimeTravel (using extra training data)

Music Auto-Tagging

Polyphonic Piano Transcription Using Autoregressive Multi-State Note Model

no code implementations · 2 Oct 2020 · Taegyun Kwon, Dasaem Jeong, Juhan Nam

Recent advances in polyphonic piano transcription have been made primarily by a deliberate design of neural network architectures that detect different note states such as onset or sustain and model the temporal evolution of the states.
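Multi-state modeling represents each pitch per frame as a state such as off, onset, or sustain rather than a binary on/off value. Decoding note events from such a state sequence can be sketched as follows (the integer state coding is my own illustration, not the paper's):

```python
def states_to_notes(states):
    """Decode a per-frame state sequence for a single pitch into note
    events. States: 0 = off, 1 = onset, 2 = sustain. A note begins at
    an onset frame and ends when the state returns to off (or a new
    onset re-articulates the pitch). Returns (onset, offset) frame
    pairs with the offset exclusive."""
    notes, start = [], None
    for t, s in enumerate(states):
        if s == 1:                      # onset: close any open note, start anew
            if start is not None:
                notes.append((start, t))
            start = t
        elif s == 0 and start is not None:
            notes.append((start, t))    # note released
            start = None
    if start is not None:               # note still sounding at the end
        notes.append((start, len(states)))
    return notes
```

Distinguishing onset from sustain is what lets a decoder like this separate a repeated note from one long held note, which a binary piano roll cannot do.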

VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance

1 code implementation · ISMIR 2019 · Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam

In this paper, we present an application of deep neural networks to modeling piano performance, imitating the expressive control of tempo, dynamics, articulation, and pedaling by pianists.

Music Performance Rendering
