no code implementations • 3 Apr 2024 • Jaehyeon Kim, Keon Lee, Seungjun Chung, Jaewoong Cho
With the emergence of neural audio codecs, which encode audio into multiple streams of discrete tokens, large language models have recently gained attention as a promising approach to zero-shot Text-to-Speech (TTS) synthesis.
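As a hedged illustration of what "multiple streams of discrete tokens" means here, below is a minimal NumPy sketch of residual vector quantization (RVQ), the mechanism neural audio codecs commonly use to produce several parallel token streams. The codebook count, codebook size, dimensions, and random codebooks are illustrative assumptions, not details from this paper.

```python
# Minimal sketch of residual vector quantization (RVQ): each stage
# quantizes the residual left by the previous stage, yielding one
# discrete token stream per stage. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_streams, codebook_size, dim = 4, 256, 128
codebooks = rng.standard_normal((num_streams, codebook_size, dim))

def rvq_encode(frame):
    """Quantize one latent frame into `num_streams` token ids."""
    residual, tokens = frame, []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        tokens.append(idx)
        residual = residual - cb[idx]  # quantize what is left over
    return tokens

frame = rng.standard_normal(dim)  # stand-in for an encoder latent
print(rvq_encode(frame))          # one token id per stream
```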
2 code implementations • 6 Feb 2024 • Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
State-space models (SSMs), such as Mamba (Gu & Dao, 2023), have been proposed as alternatives to Transformer networks in language modeling, by incorporating gating, convolutions, and input-dependent token selection to mitigate the quadratic cost of multi-head attention.
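For intuition, here is a toy NumPy sketch of a selective state-space recurrence in the spirit of Mamba: the step size, and hence the effective transition, depends on the current input, and the scan runs in time linear in sequence length. The scalar step size and parameter shapes are simplifications for illustration, not Mamba's actual parameterization.

```python
# Toy selective SSM: a linear recurrence whose discretization step
# depends on the current token, so the model can decide per input how
# much to update its state. O(seq_len) time, unlike attention's O(n^2).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_state, seq_len = 8, 4, 16
W_delta = rng.standard_normal(d_model) * 0.1
A = -np.exp(rng.standard_normal(d_state))        # stable (negative) dynamics
B = rng.standard_normal((d_state, d_model)) * 0.1
C = rng.standard_normal((d_model, d_state)) * 0.1

def selective_ssm(x):
    """x: (seq_len, d_model) -> (seq_len, d_model)."""
    h, ys = np.zeros(d_state), []
    for x_t in x:
        delta = np.log1p(np.exp(W_delta @ x_t))  # input-dependent step size
        h = np.exp(delta * A) * h + delta * (B @ x_t)
        ys.append(C @ h)
    return np.stack(ys)

print(selective_ssm(rng.standard_normal((seq_len, d_model))).shape)
```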
no code implementations • 19 Jan 2024 • Jimin Hong, Gibbeum Lee, Jaewoong Cho
Recent advances in large language models have enabled complex language tasks not only in English but also in non-English languages.
1 code implementation • 25 Dec 2023 • Inkyu Park, Jaewoong Cho
Despite extensive research, speech-driven 3D facial animation remains challenging due to the scarcity of large-scale audio-visual datasets.
1 code implementation • 27 Oct 2023 • Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, Kangwook Lee
Classical clustering methods give users no direct control over the clustering results, which may therefore fail to reflect the criterion the user actually has in mind.
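To make the claim concrete, here is a quick sketch with scikit-learn's KMeans: the partition is driven purely by geometric distance, and the API exposes no way to state which criterion the clusters should reflect. The data is synthetic and purely illustrative.

```python
# k-means groups points by spatial distance alone; a user who wants
# clusters organized by some other criterion cannot express that here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # split follows geometry, not user intent
```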
no code implementations • 11 Sep 2023 • Jaechang Kim, Jeongyeon Hwang, Soheun Yi, Jaewoong Cho, Jungseul Ok
Neural networks often suffer from a feature preference problem: they rely heavily on certain features to solve a task while disregarding others, even when the neglected features are essential to the task.
no code implementations • 12 Jul 2023 • Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee
This paper presents "Predictive Pipelined Decoding (PPD)," an approach that speeds up greedy decoding in Large Language Models (LLMs) while producing exactly the same output as standard greedy decoding.
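As a rough sketch of how decoding can be accelerated without changing greedy output, here is a generic draft-then-verify loop: cheap candidate tokens are checked against the full model's greedy choices, and only the agreeing prefix is kept. `cheap_guess` and `full_model_greedy` are hypothetical stand-ins; this illustrates the exactness-preserving pattern in general, not PPD's specific pipelined design.

```python
def decode(prompt, max_new, cheap_guess, full_model_greedy, lookahead=4):
    """Greedy decoding with drafts; output matches vanilla greedy decoding."""
    seq = list(prompt)
    while len(seq) < len(prompt) + max_new:
        draft = cheap_guess(seq, lookahead)  # fast, possibly wrong guesses
        # One parallel pass of the full model over seq + draft, returning
        # its greedy token after seq + draft[:i] for i = 0..len(draft).
        verified = full_model_greedy(seq, draft)
        keep = 0
        while keep < len(draft) and draft[keep] == verified[keep]:
            keep += 1  # accept the longest prefix the full model agrees with
        seq.extend(draft[:keep] + [verified[keep]])  # plus one exact token
    return seq[: len(prompt) + max_new]
```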
1 code implementation • 12 Jul 2023 • Jaewoong Cho, Kartik Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
Contrastive learning has gained significant attention as a method for self-supervised learning.
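For reference, below is a minimal NumPy sketch of the InfoNCE objective that underlies most contrastive self-supervised methods: matched pairs of augmented views are pulled together while all other pairs in the batch serve as negatives. Batch size, embedding dimension, and temperature are illustrative assumptions.

```python
# InfoNCE-style contrastive loss: a cross-entropy over pairwise
# similarities where each example's matched view is the positive.
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (batch, dim) L2-normalized embeddings of two views."""
    logits = z1 @ z2.T / tau                      # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal = positives

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 32))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.01 * rng.standard_normal((8, 32))     # slightly perturbed view
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(info_nce(z1, z2))
```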
1 code implementation • 12 Oct 2022 • Jaewoong Cho, Moonseok Choi, Changho Suh
We explore the fairness issue that arises in recommender systems.
no code implementations • NeurIPS 2020 • Jaewoong Cho, Gyeongjo Hwang, Changho Suh
As machine learning becomes prevalent in a widening array of sensitive applications such as job hiring and criminal justice, one critical property that classifiers should satisfy is fairness: ensuring that predictions are independent of sensitive attributes such as gender and race.
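The fairness notion described here can be made concrete with a small check of the demographic parity gap, the absolute difference in positive-prediction rates across groups; a gap of zero means the prediction is statistically independent of the attribute. The data below is synthetic and the function is a generic sketch, not the paper's method.

```python
# Demographic parity gap: |P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """0 means parity; larger values mean the attribute sways predictions."""
    rates = [y_pred[sensitive == a].mean() for a in (0, 1)]
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)             # synthetic sensitive attribute
y_hat = (rng.random(1000) < 0.5 + 0.1 * a)    # deliberately biased predictor
print(demographic_parity_gap(y_hat.astype(float), a))  # roughly 0.1
```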
no code implementations • 25 Feb 2019 • Jaewoong Cho, Changho Suh
Generative Adversarial Networks (GANs) have become a powerful framework for learning generative models across a wide variety of domains.
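For context, here is a compact sketch of the GAN minimax objective this framework is built on: the discriminator D maximizes log D(x) + log(1 - D(G(z))), while the generator, in its common non-saturating form, minimizes -log D(G(z)). The scores below are hand-picked placeholders; real D and G would be neural networks.

```python
# The two losses of the GAN minimax game, on placeholder D scores.
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -[log D(x) + log(1 - D(G(z)))]."""
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    """Non-saturating generator loss: -log D(G(z))."""
    return -np.log(d_fake).mean()

d_real = np.array([0.9, 0.8])  # D's scores on real samples
d_fake = np.array([0.2, 0.3])  # D's scores on generated samples
print(d_loss(d_real, d_fake), g_loss(d_fake))
```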