Search Results for author: Jihoon Tack

Found 11 papers, 9 papers with code

Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts

1 code implementation • 13 Mar 2024 • Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying WEI

Conventional wisdom suggests that parameter-efficient fine-tuning of foundation models is the state-of-the-art method for transfer learning in vision, superseding the rich literature of alternatives such as meta-learning.

Domain Generalization · Few-Shot Image Classification · +2

Online Adaptation of Language Models with a Memory of Amortized Contexts

1 code implementation • 7 Mar 2024 • Jihoon Tack, Jaehyung Kim, Eric Mitchell, Jinwoo Shin, Yee Whye Teh, Jonathan Richard Schwarz

We propose an amortized feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank.

Language Modelling · Meta-Learning
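The excerpt above describes compressing new documents into compact modulations stored in a memory bank. As a rough illustration only, and not the paper's actual architecture, the sketch below assumes a small amortization network that maps each document embedding to a single modulation vector and an attention-style read over the memory bank at query time; all module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AmortizedContextMemory(nn.Module):
    """Illustrative memory bank of amortized document modulations (hypothetical design)."""

    def __init__(self, doc_dim: int = 768, mod_dim: int = 128):
        super().__init__()
        # Compresses a document embedding into a compact modulation vector.
        self.amortizer = nn.Sequential(
            nn.Linear(doc_dim, mod_dim), nn.ReLU(), nn.Linear(mod_dim, mod_dim)
        )
        self.memory = []  # bank of stored modulations

    @torch.no_grad()
    def write(self, doc_embedding: torch.Tensor) -> None:
        """Amortize a new document into a modulation and append it to the bank."""
        self.memory.append(self.amortizer(doc_embedding))

    def read(self, query: torch.Tensor) -> torch.Tensor:
        """Attention-weighted aggregation of stored modulations for a query vector."""
        bank = torch.stack(self.memory)                   # (num_docs, mod_dim)
        scores = bank @ query / bank.shape[-1] ** 0.5     # scaled dot-product scores
        weights = F.softmax(scores, dim=0)
        return (weights.unsqueeze(-1) * bank).sum(dim=0)  # aggregated modulation

# Usage: store two "documents" and retrieve a modulation for a query in modulation space.
mem = AmortizedContextMemory()
for _ in range(2):
    mem.write(torch.randn(768))
modulation = mem.read(torch.randn(128))  # could then condition a frozen LM, e.g. via adapters
print(modulation.shape)  # torch.Size([128])
```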

STUNT: Few-shot Tabular Learning with Self-generated Tasks from Unlabeled Tables

1 code implementation • 2 Mar 2023 • Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee, Jinwoo Shin

Learning from few labeled tabular samples is often an essential requirement for industrial machine learning applications, as many forms of tabular data suffer from high annotation costs or difficulty in collecting new samples for novel tasks.

Few-Shot Learning
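To make the idea of self-generated tasks from unlabeled tables concrete, here is a minimal, hypothetical sketch: pseudo-labels are created by clustering a randomly chosen subset of columns, the remaining columns become features, and a few-shot classification task is assembled without any human labels. The clustering-based labeling shown is an illustrative choice and may not match the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def self_generated_task(table: np.ndarray, n_classes: int = 3, n_shot: int = 5, seed: int = 0):
    """Build one pseudo few-shot task from an unlabeled table (illustrative only)."""
    rng = np.random.default_rng(seed)
    n_cols = table.shape[1]
    label_cols = rng.choice(n_cols, size=max(1, n_cols // 4), replace=False)
    feat_cols = np.setdiff1d(np.arange(n_cols), label_cols)

    # Pseudo-labels: cluster the held-out columns; features: the remaining columns.
    pseudo_y = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(table[:, label_cols])
    X = table[:, feat_cols]

    # Sample an n_shot support set per pseudo-class; remaining rows can serve as queries.
    support_idx = np.concatenate(
        [rng.choice(np.where(pseudo_y == c)[0], size=n_shot, replace=True) for c in range(n_classes)]
    )
    return X[support_idx], pseudo_y[support_idx]

# Usage: generate a 3-way 5-shot task from a random 200-row, 12-column table.
support_x, support_y = self_generated_task(np.random.randn(200, 12))
print(support_x.shape, support_y.shape)  # (15, 9) (15,)
```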

Modality-Agnostic Variational Compression of Implicit Neural Representations

no code implementations • 23 Jan 2023 • Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin

We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).

Data Compression

Meta-Learning with Self-Improving Momentum Target

1 code implementation • 11 Oct 2022 • Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, Jinwoo Shin

The idea of using a separately trained target model (or teacher) to improve the performance of a student model has become increasingly popular across machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost generalization performance.

Knowledge Distillation · Meta-Learning · +1
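The entry above is about using a target model (teacher) to guide a student. A common way to build such a target is an exponential moving average (EMA) of the student's own weights, which the student then distills from; the sketch below is a generic momentum-target loop under that assumption, not a reproduction of the paper's meta-learning procedure.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(target: nn.Module, student: nn.Module, momentum: float = 0.99) -> None:
    """Move the target's parameters toward the student's (exponential moving average)."""
    with torch.no_grad():
        for p_t, p_s in zip(target.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

student = nn.Linear(16, 4)
target = copy.deepcopy(student)          # momentum target starts as a copy of the student
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
for _ in range(10):
    logits = student(x)
    with torch.no_grad():
        teacher_logits = target(x)       # soft targets from the momentum teacher
    task_loss = F.cross_entropy(logits, y)
    distill_loss = F.kl_div(
        F.log_softmax(logits, dim=-1), F.softmax(teacher_logits, dim=-1), reduction="batchmean"
    )
    loss = task_loss + 0.5 * distill_loss  # 0.5 is an arbitrary distillation weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(target, student)           # the target "self-improves" as the student learns
```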

Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

1 code implementation • ICLR 2022 • Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin

In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encode a continuous signal into a parameterized neural network, effectively mitigates the issue.

Generative Adversarial Network · Video Generation

Meta-Learning Sparse Implicit Neural Representations

1 code implementation • NeurIPS 2021 • Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin

Implicit neural representations are a promising new avenue of representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; the mapping from spatial coordinates of an image to its pixel values, for example.

Meta-Learning
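The description above is already close to code: an INR is a small network that maps spatial coordinates of a signal to its values, e.g. (x, y) to an RGB pixel. Below is a minimal coordinate MLP fit to a single random image to make that concrete; the architecture and training details are illustrative, and the paper's sparsification and meta-learning steps are not shown.

```python
import torch
import torch.nn as nn

# An implicit neural representation: a continuous function from coordinates to pixel values.
inr = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),            # (x, y) -> (r, g, b)
)

# Target signal: a 32x32 RGB "image"; coordinates are normalized to [0, 1].
H = W = 32
image = torch.rand(H * W, 3)
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=-1)  # (H*W, 2)

optimizer = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(200):
    loss = ((inr(coords) - image) ** 2).mean()  # reconstruct pixel values from coordinates
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The fitted weights of `inr` now are the representation of this image.
```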

Entropy Weighted Adversarial Training

no code implementations • ICML Workshop AML 2021 • Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang

Adversarial training methods, which minimize the loss on adversarially perturbed training examples, have been extensively studied as a way to improve the robustness of deep neural networks.
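For readers unfamiliar with adversarial training, the loop below shows the basic recipe the sentence describes: perturb each training example with a few PGD steps and minimize the loss on the perturbed batch. The entropy-based per-example weighting at the end is only a guess at what "entropy weighted" could look like, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=5):
    """Projected gradient descent: maximize the loss within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
    return x_adv.clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_adv = pgd_attack(model, x, y)
logits = model(x_adv)

# Hypothetical entropy weighting: reweight each example's adversarial loss by its prediction entropy.
probs = F.softmax(logits, dim=-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)   # per-example prediction entropy
weights = entropy / entropy.sum()
loss = (weights * F.cross_entropy(logits, y, reduction="none")).sum()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```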

Consistency Regularization for Adversarial Robustness

1 code implementation • ICML Workshop AML 2021 • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin

Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks.

Adversarial Robustness · Data Augmentation
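The title suggests adding a consistency term to adversarial training. One plausible, purely illustrative form is shown below: generate adversarial examples from two augmented views of the same batch and penalize divergence between the model's predictive distributions on them. The augmentation, the one-step attack, and the symmetric KL term are assumptions, not the paper's exact regularizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step adversarial perturbation (a simple stand-in for a stronger attack)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))

# Two stochastically augmented views of the same batch (random horizontal flip as a toy augmentation).
view1 = torch.flip(x, dims=[-1]) if torch.rand(1) < 0.5 else x
view2 = torch.flip(x, dims=[-1]) if torch.rand(1) < 0.5 else x

adv1, adv2 = fgsm(model, view1, y), fgsm(model, view2, y)
p1 = F.softmax(model(adv1), dim=-1)
p2 = F.softmax(model(adv2), dim=-1)

# Consistency term: the two adversarial views should yield similar predictive distributions.
consistency = 0.5 * (
    F.kl_div(p1.clamp_min(1e-8).log(), p2, reduction="batchmean")
    + F.kl_div(p2.clamp_min(1e-8).log(), p1, reduction="batchmean")
)
robust_loss = F.cross_entropy(model(adv1), y) + F.cross_entropy(model(adv2), y)
loss = robust_loss + 1.0 * consistency   # 1.0 is an arbitrary regularization weight
loss.backward()
```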

Adversarial Self-Supervised Contrastive Learning

2 code implementations • NeurIPS 2020 • Minseon Kim, Jihoon Tack, Sung Ju Hwang

In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.

Adversarial Attack · Contrastive Learning · +2
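The excerpt describes an attack on unlabeled data that confuses instance-level identities. A natural way to illustrate that, as an assumption rather than the paper's exact attack, is to perturb each example so as to maximize an InfoNCE-style contrastive loss between two views of the same instance:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss: each example's two views should match each other, not other instances."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # (N, N) similarity matrix
    labels = torch.arange(z1.shape[0])   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
x = torch.rand(8, 3, 32, 32)             # unlabeled batch
view = torch.flip(x, dims=[-1])          # a second (toy) augmented view

# Instance-wise attack: perturb x to *maximize* the contrastive loss, i.e. to make the
# model confuse which perturbed sample corresponds to which clean instance.
eps, alpha, steps = 8 / 255, 2 / 255, 5
x_adv = x.clone()
for _ in range(steps):
    x_adv = x_adv.detach().requires_grad_(True)
    loss = info_nce(encoder(x_adv), encoder(view))
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + alpha * grad.sign()
    x_adv = x + (x_adv - x).clamp(-eps, eps)
x_adv = x_adv.clamp(0, 1).detach()

# Training would then minimize the contrastive loss on (x_adv, view) to gain robustness.
```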
