no code implementations • 27 Jul 2023 • Jihyeon Lee, Dain Kim, Doohae Jung, Boseop Kim, Kyoung-Woon On
In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates.
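The contrast drawn in this abstract can be made concrete with a minimal sketch of the in-context learning setup: the model's weights stay fixed, and task supervision arrives entirely through labeled demonstrations placed in the prompt. The helper name and formatting below are illustrative assumptions, not from the paper:

```python
def build_icl_prompt(demonstrations, query):
    """Format labeled demonstrations plus a query into a single prompt.

    In-context learning adapts the model through the prompt alone;
    no gradient updates touch the model's weights.
    """
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nLabel:")  # model completes the label
    return "\n\n".join(lines)

demos = [("the movie was great", "positive"),
         ("a dull, plodding film", "negative")]
prompt = build_icl_prompt(demos, "an instant classic")
print(prompt)
```

Fine-tuning, by contrast, would update model parameters on the demonstrations themselves rather than concatenating them into the input.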
no code implementations • 11 Feb 2023 • Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran
Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but under-perform in under-represented population subgroups, especially when there are imbalanced group distributions in the long-tailed training data.
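The failure mode described above, where average accuracy hides poor subgroup performance, is commonly countered by reweighting the objective so each subgroup contributes equally regardless of its sample count. A minimal NumPy sketch of that generic remedy (not necessarily the method proposed in this paper):

```python
import numpy as np

def group_balanced_loss(per_example_loss, group_ids):
    """Average losses within each group, then across groups, so small
    subgroups are not drowned out by the majority group."""
    groups = np.unique(group_ids)
    group_means = [per_example_loss[group_ids == g].mean() for g in groups]
    return float(np.mean(group_means))

# Majority group (0) has low loss; the under-represented group (1) does not.
losses = np.array([0.1, 0.1, 0.1, 0.1, 0.9])
groups = np.array([0, 0, 0, 0, 1])
print(group_balanced_loss(losses, groups))  # 0.5, vs 0.26 for the plain mean
```

Plain ERM would report the 0.26 average and look fine; the group-balanced view exposes the minority group's 0.9 loss.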
no code implementations • 19 Oct 2022 • Jihyeon Lee, Wooyoung Kang, Eun-Sol Kim
It is well known that most conventional video question answering (VideoQA) datasets consist of easy questions requiring only simple reasoning.
no code implementations • 21 Sep 2022 • Jihyeon Lee, Taehee Kim, Yunwon Tae, Cheonbok Park, Jaegul Choo
Incorporating personal preference is crucial in advanced machine translation tasks.
1 code implementation • 8 Nov 2021 • Christopher Yeh, Chenlin Meng, Sherrie Wang, Anne Driscoll, Erik Rozi, Patrick Liu, Jihyeon Lee, Marshall Burke, David B. Lobell, Stefano Ermon
Our goals for SustainBench are to (1) lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs; (2) provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and (3) encourage the development of novel machine learning methods where improved model performance facilitates progress towards the SDGs.
no code implementations • ICCV 2021 • Eungyeup Kim, Jihyeon Lee, Jaegul Choo
Although previous approaches pre-define the type of dataset bias to prevent the network from learning it, identifying the bias type in real-world datasets is often infeasible.
Ranked #3 on Facial Attribute Classification on bFFHQ
1 code implementation • NeurIPS 2021 • Jungsoo Lee, Eungyeup Kim, Juyoung Lee, Jihyeon Lee, Jaegul Choo
To this end, our method learns the disentangled representation of (1) the intrinsic attributes (i.e., those inherently defining a certain class) and (2) the bias attributes (i.e., peripheral attributes causing the bias) from a large number of bias-aligned samples, whose bias attributes correlate strongly with the target variable.
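The disentangling idea above can be caricatured in a few lines: once intrinsic and bias features are separated per sample, pairing one sample's intrinsic features with another sample's bias features yields bias-conflicting combinations that break the spurious correlation. A toy NumPy sketch under that assumption; the paper trains neural encoders to produce these features, which this sketch does not attempt:

```python
import numpy as np

def swap_bias_features(intrinsic, bias, rng):
    """Pair each sample's intrinsic features with another sample's
    bias features, producing augmented bias-conflicting examples."""
    perm = rng.permutation(len(bias))
    return np.concatenate([intrinsic, bias[perm]], axis=1)

rng = np.random.default_rng(0)
intrinsic = rng.normal(size=(4, 8))  # class-defining features
bias = rng.normal(size=(4, 8))       # spurious, bias-carrying features
augmented = swap_bias_features(intrinsic, bias, rng)
print(augmented.shape)  # (4, 16)
```

Because the intrinsic half is untouched, the class label still applies to each augmented row while its bias half now comes from a different sample.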
no code implementations • 24 Nov 2020 • Jihyeon Lee, Joseph Z. Xu, Kihyuk Sohn, Wenhan Lu, David Berthelot, Izzeddin Gur, Pranav Khaitan, Ke-Wei Huang, Kyriacos Koupparis, Bernhard Kowatsch
To respond to disasters such as earthquakes, wildfires, and armed conflicts, humanitarian organizations require accurate and timely data in the form of damage assessments, which indicate what buildings and population centers have been most affected.
1 code implementation • 15 Jun 2020 • Jihyeon Lee, Dylan Grosz, Burak Uzkent, Sicheng Zeng, Marshall Burke, David Lobell, Stefano Ermon
Major decisions from governments and other large organizations rely on measurements of the populace's well-being, but making such measurements at a broad scale is expensive and thus infrequent in much of the developing world.
no code implementations • 30 Nov 2019 • Jihyeon Lee, Sho Arora
By demonstrating how our system can collect large amounts of data at little to no cost, we envision similar systems being used to improve performance on other tasks in the future.