Search Results for author: Li-Chia Yang

Found 8 papers, 5 papers with code

CCATMos: Convolutional Context-aware Transformer Network for Non-intrusive Speech Quality Assessment

no code implementations • 4 Nov 2022 • Yuchen Liu, Li-Chia Yang, Alex Pawlicki, Marko Stamenovic

Speech quality assessment has been a critical component in many voice communication related applications such as telephony and online conferencing.

Self-Supervised Learning for Speech Enhancement through Synthesis

1 code implementation • 4 Nov 2022 • Bryce Irvin, Marko Stamenovic, Mikolaj Kegler, Li-Chia Yang

Modern speech enhancement (SE) networks typically implement noise suppression through time-frequency masking, latent representation masking, or discriminative signal prediction.

Denoising • Self-Supervised Learning • +2
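For context, here is a minimal sketch of the time-frequency masking approach that the abstract contrasts with the paper's synthesis-based method: a small network predicts a magnitude mask that is applied to the noisy STFT before resynthesis with the noisy phase. The MaskNet architecture, FFT settings, and all hyperparameters are illustrative assumptions, not the paper's.

```python
# Generic time-frequency masking sketch for speech enhancement
# (illustrative only; not the paper's synthesis-based method).
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
N_BINS = N_FFT // 2 + 1

class MaskNet(nn.Module):
    """Hypothetical single-layer LSTM that predicts a magnitude mask."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(N_BINS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_BINS)

    def forward(self, mag):                      # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        return torch.sigmoid(self.out(h))        # mask values in [0, 1]

def enhance(noisy, net):
    """Apply a predicted magnitude mask and resynthesize with the noisy phase."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().transpose(1, 2)             # (batch, frames, bins)
    mask = net(mag).transpose(1, 2)              # back to (batch, bins, frames)
    return torch.istft(spec * mask, N_FFT, HOP, window=window,
                       length=noisy.shape[-1])

if __name__ == "__main__":
    net = MaskNet()
    noisy = torch.randn(1, 16000)                # 1 s of audio at 16 kHz
    print(enhance(noisy, net).shape)             # torch.Size([1, 16000])
```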

Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators

no code implementations • 3 Nov 2021 • Marko Stamenovic, Nils L. Westhausen, Li-Chia Yang, Carl Jensen, Alex Pawlicki

Using weight pruning, we show that we are able to compress an already compact model's memory footprint by a factor of 42x, from 3.7 MB to 87 kB, while only losing 0.1 dB SDR in performance.

Model Compression • Speech Enhancement
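As a rough illustration of the weight-pruning idea behind that 42x compression figure, here is a generic magnitude-based pruning sketch using PyTorch's pruning utilities. The toy model, 90% sparsity target, and choice of layers are assumptions for illustration, not the paper's architecture or sparsity configuration.

```python
# Generic global magnitude weight pruning sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy fully-connected enhancer standing in for the paper's compact model.
model = nn.Sequential(
    nn.Linear(257, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 257),
)

# Prune 90% of weights by global magnitude across all Linear layers.
params_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured,
                          amount=0.9)

# Fold the pruning masks into the weights so the zeros become permanent.
for module, name in params_to_prune:
    prune.remove(module, name)

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall parameter sparsity: {zeros / total:.1%}")
```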

TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids

1 code implementation • 20 May 2020 • Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough

Modern speech enhancement algorithms achieve remarkable noise suppression by means of large recurrent neural networks (RNNs).

Model Compression • Quantization • +1
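The Quantization tag above points at a complementary compression step for RNN-based enhancers. Below is a minimal sketch of post-training dynamic quantization (int8 weights) applied to a toy LSTM enhancer; the architecture and sizes are illustrative assumptions, not the TinyLSTMs model itself.

```python
# Post-training dynamic quantization of a toy LSTM enhancer (illustrative only).
import torch
import torch.nn as nn

class LSTMEnhancer(nn.Module):
    """Hypothetical LSTM-based mask predictor used only for this sketch."""
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, x):                        # x: (batch, frames, bins)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))

model = LSTMEnhancer()

# Replace LSTM and Linear weights with int8 versions; activations stay float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 100, 257)
print(quantized(x).shape)                        # torch.Size([1, 100, 257])
```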

Neural Wavetable: a playable wavetable synthesizer using neural networks

1 code implementation • 13 Nov 2018 • Lamtharn Hantrakul, Li-Chia Yang

We present Neural Wavetable, a proof-of-concept wavetable synthesizer that uses neural networks to generate playable wavetables.
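As background on what "playable wavetable" means here, the sketch below renders a single-cycle wavetable at a target pitch and blends two source tables with a naive linear crossfade, standing in for the paper's learned (autoencoder-based) interpolation. All names, table sizes, and parameters are illustrative.

```python
# Single-cycle wavetable playback with naive table blending (illustrative only).
import numpy as np

SR = 44100          # sample rate in Hz
TABLE_LEN = 2048    # length of one wavetable cycle in samples

def render(table, freq, seconds=1.0):
    """Read through the wavetable at `freq` Hz using linear interpolation."""
    n = int(SR * seconds)
    phase = (np.arange(n) * freq * TABLE_LEN / SR) % TABLE_LEN
    i0 = phase.astype(int)
    i1 = (i0 + 1) % TABLE_LEN
    frac = phase - i0
    return (1.0 - frac) * table[i0] + frac * table[i1]

# Two source tables: one sine cycle and one sawtooth cycle.
t = np.arange(TABLE_LEN) / TABLE_LEN
sine = np.sin(2 * np.pi * t)
saw = 2.0 * t - 1.0

# Linear blend between the tables, a stand-in for decoding an
# interpolated latent vector in the neural-network version.
alpha = 0.5
blended = (1.0 - alpha) * sine + alpha * saw

audio = render(blended, freq=220.0)
print(audio.shape, float(audio.min()), float(audio.max()))
```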

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

8 code implementations • 19 Sep 2017 • Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, Yi-Hsuan Yang

The three models, which differ in the underlying assumptions and accordingly the network architectures, are referred to as the jamming model, the composer model and the hybrid model.

Music Generation
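A minimal sketch of the shared-plus-private latent idea behind the hybrid model mentioned in the abstract: each track's generator conditions on a latent vector common to all tracks plus one of its own. The layer sizes and piano-roll dimensions below are illustrative assumptions, not the paper's actual architecture.

```python
# Shared + per-track latent vectors for multi-track generation (illustrative only).
import torch
import torch.nn as nn

N_TRACKS, Z_DIM = 5, 64
BARS, STEPS, PITCHES = 4, 96, 84     # toy piano-roll dimensions per track

class TrackGenerator(nn.Module):
    """Toy per-track generator conditioned on shared and private latents."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * Z_DIM, 512), nn.ReLU(),
            nn.Linear(512, BARS * STEPS * PITCHES), nn.Sigmoid(),
        )

    def forward(self, z_shared, z_private):
        z = torch.cat([z_shared, z_private], dim=-1)
        return self.net(z).view(-1, BARS, STEPS, PITCHES)

generators = nn.ModuleList([TrackGenerator() for _ in range(N_TRACKS)])

batch = 2
z_shared = torch.randn(batch, Z_DIM)              # inter-track (shared) latent
z_private = torch.randn(N_TRACKS, batch, Z_DIM)   # intra-track (private) latents
rolls = torch.stack(
    [g(z_shared, z_private[i]) for i, g in enumerate(generators)], dim=1
)
print(rolls.shape)   # (batch, tracks, bars, steps, pitches)
```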

Revisiting the problem of audio-based hit song prediction using convolutional neural networks

no code implementations • 5 Apr 2017 • Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen

Being able to predict whether a song can be a hit has important applications in the music industry.

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation

4 code implementations • 31 Mar 2017 • Li-Chia Yang, Szu-Yu Chou, Yi-Hsuan Yang

We conduct a user study comparing the eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody.

Generative Adversarial Network • Music Generation
