Search Results for author: Youngmin Oh

Found 11 papers, 2 papers with code

Reset & Distill: A Recipe for Overcoming Negative Transfer in Continual Reinforcement Learning

no code implementations • 8 Mar 2024 • Hongjoon Ahn, Jinu Hyeon, Youngmin Oh, Bosun Hwang, Taesup Moon

We argue that one of the main obstacles to developing effective Continual Reinforcement Learning (CRL) algorithms is the negative transfer issue that occurs when a new task to learn arrives.

ACLS: Adaptive and Conditional Label Smoothing for Network Calibration

no code implementations • ICCV 2023 • Hyekang Park, Jongyoun Noh, Youngmin Oh, Donghyeon Baek, Bumsub Ham

We present in this paper an in-depth analysis of existing regularization-based methods, providing a better understanding of how they affect network calibration.

Image Classification • Semantic Segmentation
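For context on the regularization-based calibration methods this paper analyzes, here is a minimal sketch of classic uniform label smoothing, the baseline that adaptive/conditional variants such as ACLS build on. The function name and NumPy formulation are illustrative, not the paper's implementation.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Uniform label smoothing: replace the one-hot target with
    (1 - eps) on the true class plus eps spread uniformly over all
    classes. ACLS-style methods adapt this per sample/class; that
    adaptive rule is not reproduced here."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

# Two samples with true classes 0 and 2, three classes total.
targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
```

Each row still sums to 1, but the target distribution is softened, which is what regularizes the network's confidence.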

ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation

no code implementations • 13 Oct 2022 • Youngmin Oh, Donghyeon Baek, Bumsub Ham

Based on this, we then introduce an adaptive logit regularizer (ALI) that enables our model to better learn new categories, while retaining knowledge for previous ones.

Semantic Segmentation

OIMNet++: Prototypical Normalization and Localization-aware Learning for Person Search

1 code implementation • 21 Jul 2022 • SangHoon Lee, Youngmin Oh, Donghyeon Baek, Junghyup Lee, Bumsub Ham

To this end, we introduce a novel normalization layer, dubbed ProtoNorm, that calibrates features from pedestrian proposals, while considering a long-tail distribution of person IDs, enabling L2 normalized person representations to be discriminative.

Person Re-Identification • Person Search
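A rough sketch of the idea behind a ProtoNorm-style layer, under the assumption (from the snippet above) that normalization statistics are computed from per-identity prototypes rather than raw samples, so frequent identities in a long-tailed distribution do not dominate the statistics. The function below is a hypothetical simplification, not the paper's layer.

```python
import numpy as np

def proto_norm(features, ids):
    """Calibrate features using statistics over per-identity
    prototypes (class means), then L2-normalize. Computing mu/sigma
    over prototypes instead of samples gives every identity equal
    weight regardless of how many pedestrian proposals it has."""
    uniq = np.unique(ids)
    # One prototype per identity, regardless of its sample count.
    protos = np.stack([features[ids == u].mean(axis=0) for u in uniq])
    mu = protos.mean(axis=0)
    sigma = protos.std(axis=0) + 1e-6
    calibrated = (features - mu) / sigma
    return calibrated / np.linalg.norm(calibrated, axis=1, keepdims=True)
```

The output rows are unit-length, so they can be compared with cosine similarity, which is what makes the L2-normalized person representations discriminative.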

Model-augmented Prioritized Experience Replay

no code implementations • ICLR 2022 • Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay is an essential component in off-policy model-free reinforcement learning (MfRL).

Exploiting a Joint Embedding Space for Generalized Zero-Shot Semantic Segmentation

no code implementations • ICCV 2021 • Donghyeon Baek, Youngmin Oh, Bumsub Ham

To this end, we leverage visual and semantic encoders to learn a joint embedding space, where the semantic encoder transforms semantic features to semantic prototypes that act as centers for visual features of corresponding classes.

Semantic Segmentation • Zero-Shot Semantic Segmentation
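The snippet above describes semantic prototypes acting as class centers for visual features in a joint embedding space. A minimal sketch of the resulting classification step is nearest-prototype assignment by cosine similarity; the encoders and training losses that produce these embeddings are not reproduced, and the function name is illustrative.

```python
import numpy as np

def classify_by_prototypes(visual_feats, prototypes):
    """Assign each visual feature to the class whose semantic
    prototype is most similar under cosine similarity. Because
    prototypes come from a semantic encoder, unseen classes can be
    added at test time without retraining the visual encoder."""
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (v @ p.T).argmax(axis=1)
```

This is what enables the zero-shot setting: swapping in prototypes for new classes extends the classifier for free.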

Model-Augmented Q-learning

no code implementations • 7 Feb 2021 • Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.

Q-Learning
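For reference, the vanilla tabular Q-learning update that MQL builds on is shown below; the model-based augmentation of the reward/target described in the snippet is not reproduced here.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    MQL replaces r with a model-augmented signal while remaining
    policy-invariant, i.e. the greedy policy is unchanged."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

Policy invariance means any reward modification must leave the argmax over actions identical to learning with the true reward, which is the property the paper proves for its scheme.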

Learning to Sample with Local and Global Contexts in Experience Replay Buffer

no code implementations ICLR 2021 Youngmin Oh, Kimin Lee, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL).

Reinforcement Learning (RL)
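Both replay papers above improve on the plain uniform-sampling buffer used in off-policy RL. For context, here is that baseline; the learned/prioritized sampling schemes the papers propose would replace the `sample` method, and this class is a generic sketch rather than their code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform-sampling experience replay buffer. Old transitions
    are evicted FIFO once capacity is reached."""
    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        # transition is typically (state, action, reward, next_state, done)
        self.storage.append(transition)

    def sample(self, batch_size):
        # Uniform sampling ignores how useful each transition is for
        # learning, which is the gap prioritized/learned samplers target.
        return random.sample(self.storage, batch_size)
```

Usage: `buf.add(t)` after every environment step, then `buf.sample(batch_size)` to draw a minibatch for the off-policy update.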

Training Deep Neural Network in Limited Precision

no code implementations • 12 Oct 2018 • Hyunsun Park, Jun Haeng Lee, Youngmin Oh, Sangwon Ha, Seungwon Lee

Energy and resource efficient training of DNNs will greatly extend the applications of deep learning.
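A generic illustration of limited-precision training is simulated ("fake") quantization, which rounds tensor values to a low-bit uniform grid during the forward pass. This sketch assumes symmetric uniform quantization and is not the specific scheme of the paper.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate limited-precision arithmetic: map values onto a
    symmetric uniform grid with 2**num_bits levels, then scale back.
    The rounding error models what low-precision hardware would
    introduce during DNN training."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.round(x / scale).clip(-qmax - 1, qmax) * scale
```

Fewer bits mean a coarser grid and larger rounding error, which is the accuracy/efficiency trade-off that limited-precision training methods manage.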
