Search Results for author: Cheng-i Wang

Found 7 papers, 4 papers with code

Jam-ALT: A Formatting-Aware Lyrics Transcription Benchmark

1 code implementation • 23 Nov 2023 • Ondřej Cífka, Constantinos Dimitriou, Cheng-i Wang, Hendrik Schreiber, Luke Miner, Fabian-Robert Stöter

Current automatic lyrics transcription (ALT) benchmarks focus exclusively on word content and ignore the finer nuances of written lyrics, including formatting and punctuation, which leads to a potential misalignment with the creative products of musicians and songwriters as well as listeners' experiences.

Automatic Lyrics Transcription

TONet: Tone-Octave Network for Singing Melody Extraction from Polyphonic Music

1 code implementation • 2 Feb 2022 • Ke Chen, Shuai Yu, Cheng-i Wang, Wei Li, Taylor Berg-Kirkpatrick, Shlomo Dubnov

In this paper, we propose TONet, a plug-and-play model that improves both tone and octave perceptions by leveraging a novel input representation and a novel network architecture.

Information Retrieval • Melody Extraction +2

Towards Cross-Cultural Analysis using Music Information Dynamics

no code implementations • 24 Nov 2021 • Shlomo Dubnov, Kevin Huang, Cheng-i Wang

The framework is based on a Music Information Dynamics model, the Variable Markov Oracle (VMO), and is extended with a variational representation learning of audio.

Representation Learning

Music SketchNet: Controllable Music Generation via Factorized Representations of Pitch and Rhythm

1 code implementation • 4 Aug 2020 • Ke Chen, Cheng-i Wang, Taylor Berg-Kirkpatrick, Shlomo Dubnov

Drawing an analogy with automatic image completion systems, we propose Music SketchNet, a neural network framework that allows users to specify partial musical ideas guiding automatic music generation.

Music Generation

Deep Autotuner: a Pitch Correcting Network for Singing Performances

1 code implementation • 12 Feb 2020 • Sanna Wager, George Tzanetakis, Cheng-i Wang, Minje Kim

We train our neural network model using a dataset of 4,702 amateur karaoke performances selected for good intonation.

Deep Autotuner: A Data-Driven Approach to Natural-Sounding Pitch Correction for Singing Voice in Karaoke Performances

no code implementations • 3 Feb 2019 • Sanna Wager, George Tzanetakis, Cheng-i Wang, Lijiang Guo, Aswin Sivaraman, Minje Kim

This approach differs from commercially used automatic pitch correction systems, where notes in the vocal tracks are shifted to be centered around notes in a user-defined score or mapped to the closest pitch among the twelve equal-tempered scale degrees.
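The commercial baseline described above, snapping each sung note to the nearest of the twelve equal-tempered scale degrees, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's method; the A4 = 440 Hz reference and the function name are assumptions.

```python
import math

A4_HZ = 440.0  # reference tuning for A4 (assumed; any reference works)

def snap_to_equal_temperament(freq_hz: float) -> float:
    """Map a frequency to the nearest pitch of the 12-tone equal-tempered scale.

    Converts the frequency to a (fractional) number of semitones from A4,
    rounds to the nearest integer semitone, and converts back to Hz.
    """
    semitones_from_a4 = 12.0 * math.log2(freq_hz / A4_HZ)
    nearest_semitone = round(semitones_from_a4)
    return A4_HZ * 2.0 ** (nearest_semitone / 12.0)
```

For example, a note sung slightly sharp at 452 Hz snaps back to 440 Hz (A4), while 460 Hz is closer to A#4 and snaps up to about 466.16 Hz. The paper's data-driven approach instead predicts context-dependent shifts, avoiding the rigid quantization this baseline imposes.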

Free-body Gesture Tracking and Augmented Reality Improvisation for Floor and Aerial Dance

no code implementations • 15 Sep 2015 • Tammuz Dubnov, Cheng-i Wang

This paper describes an updated interactive performance system for floor and aerial dance that controls visual and sonic aspects of the presentation via a depth-sensing camera (MS Kinect).
