Search Results for author: Kangil Kim

Found 10 papers, 4 papers with code

Asymptotic Midpoint Mixup for Margin Balancing and Moderate Broadening

no code implementations • 26 Jan 2024 • Hoyong Kim, Semi Lee, Kangil Kim

We validate the intra-class collapse effect in coarse-to-fine transfer learning and the inter-class collapse effect in imbalanced learning on long-tailed datasets.

Representation Learning · Transfer Learning
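For context on the mixup family this paper extends, below is a minimal sketch of standard mixup (a convex combination of random example pairs and their labels); per the title, the paper's asymptotic midpoint variant instead drives the mixing ratio toward the midpoint to balance margins. All names here are illustrative assumptions, not the authors' code.

import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    # Standard mixup: convexly combine random pairs of inputs and labels.
    # The paper's variant (per its title) pushes lam toward 0.5 so mixed
    # points approach class midpoints; only vanilla mixup is shown here.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing ratio from Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix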

CFASL: Composite Factor-Aligned Symmetry Learning for Disentanglement in Variational AutoEncoder

no code implementations • 17 Jan 2024 • Hee-Jun Jung, Jaehyoung Jeong, Kangil Kim

Symmetries of input and latent vectors have provided valuable insights for disentanglement learning in VAEs. However, only a few unsupervised methods have been proposed, and even these require known factor information in the training data.

Decoder · Disentanglement · +1

Revisiting Softmax Masking: Stop Gradient for Enhancing Stability in Replay-based Continual Learning

no code implementations • 26 Sep 2023 • Hoyong Kim, Minchan Kwon, Kangil Kim

In replay-based methods for continual learning, replaying input samples stored in episodic memory has proven effective in alleviating catastrophic forgetting.

Continual Learning · Incremental Learning
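As background for the replay setting the abstract mentions, here is a minimal sketch of episodic memory with reservoir sampling, the standard replay mechanism in continual learning; class and method names are illustrative assumptions, not the paper's implementation.

import random

class EpisodicMemory:
    # Reservoir-sampled episodic memory for replay-based continual learning.
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []   # stored (input, label) pairs from past tasks
        self.seen = 0      # total number of stream examples observed

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over the whole stream.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Replayed alongside current-task batches to alleviate forgetting.
        return random.sample(self.buffer, min(k, len(self.buffer)))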

Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning

no code implementations • 23 May 2023 • Minchan Kwon, Kangil Kim

In this paper, we address the problems of applying adversarial training, a well-known defense method against adversarial attacks, to class-incremental continual learning (CICL).

Adversarial Attack · Continual Learning · +1

Feature Structure Distillation with Centered Kernel Alignment in BERT Transferring

1 code implementation • 1 Apr 2022 • Hee-Jun Jung, Doyeon Kim, Seung-Hoon Na, Kangil Kim

To resolve this in transferring, we investigate distilling the structures of representations, specified as three types: intra-feature, local inter-feature, and global inter-feature structures.

Knowledge Distillation · Language Modelling
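Centered Kernel Alignment, named in this title, has a standard linear form (Kornblith et al., 2019) that can serve as a similarity measure between teacher and student feature matrices; a minimal sketch follows. The paper's intra-/inter-feature structure variants are not reproduced here.

import numpy as np

def linear_cka(X, Y):
    # Linear CKA between feature matrices of shape (n_samples, dim):
    # CKA = ||Yc.T @ Xc||_F^2 / (||Xc.T @ Xc||_F * ||Yc.T @ Yc||_F)
    Xc = X - X.mean(axis=0, keepdims=True)   # center each feature column
    Yc = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den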

Impact of Representation Matching with Neural Machine Translation

1 code implementation • Applied Sciences 2022 • HeeSeung Jung, Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung, Sangmin Woo

Most neural machine translation models are implemented in a conditional language model framework composed of an encoder and a decoder.

Decoder · Language Modelling · +2

Learning from Matured Dumb Teacher for Fine Generalization

no code implementations • 12 Aug 2021 • HeeSeung Jung, Kangil Kim, Hoyong Kim, Jong-Hun Shin

The flexibility of neural-network decision boundaries in regions unguided by training data is a well-known problem, typically resolved with generalization methods.

Image Classification · Knowledge Distillation

What and When to Look?: Temporal Span Proposal Network for Video Relation Detection

1 code implementation • 15 Jul 2021 • Sangmin Woo, Junhyug Noh, Kangil Kim

TSPN tells when to look: it simultaneously predicts start-end timestamps (i.e., temporal spans) and categories of all possible relations by utilizing the full video context.

Relation · Video Visual Relation Detection · +1

Tackling the Challenges in Scene Graph Generation with Local-to-Global Interactions

1 code implementation • 16 Jun 2021 • Sangmin Woo, Junhyug Noh, Kangil Kim

To quantify how much LOGIN is aware of relational direction, a new diagnostic task called Bidirectional Relationship Classification (BRC) is also proposed.

 Ranked #1 on Scene Graph Classification on Visual Genome (Recall@20 metric)

Bidirectional Relationship Classification · Graph Generation · +4

Concept Equalization to Guide Correct Training of Neural Machine Translation

no code implementations • IJCNLP 2017 • Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung

Neural machine translation decoders are usually conditional language models that sequentially generate the words of target sentences.

Machine Translation · NMT · +1
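As a generic illustration of the conditional-language-model view of NMT decoders described above (not the paper's method), a greedy decoding loop conditions each generated word on the source encoding and the target prefix; model.step and the token ids below are hypothetical.

def greedy_decode(model, src_encoding, bos_id, eos_id, max_len=128):
    # Sequentially generate target words, each conditioned on the source
    # encoding and the previously generated prefix.
    tokens = [bos_id]
    for _ in range(max_len):
        logits = model.step(src_encoding, tokens)  # hypothetical interface
        next_id = int(logits.argmax())
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens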
