Search Results for author: Sungjun Cho

Found 13 papers, 4 papers with code

Learning Equi-angular Representations for Online Continual Learning

1 code implementation · 2 Apr 2024 · Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi

Online continual learning suffers from underfitted solutions due to insufficient training for prompt model updates (e.g., single-epoch training).

Continual Learning

3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation

no code implementations · 8 Sep 2023 · Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee

Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.

Denoising · Knowledge Distillation · +4

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations · 27 Jan 2023 · Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

1 code implementation · 27 Oct 2022 · Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.

Stochastic Block Model
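The edge-linear cost claim above can be illustrated with a minimal sketch (a hypothetical illustration, not the paper's implementation): attention scores are evaluated only over a supplied edge list rather than all node pairs, so the work per head scales with the number of edges instead of quadratically with the number of tokens.

```python
import numpy as np
from collections import defaultdict

def sparse_attention(X, edges):
    """Attention restricted to a given edge list.

    Scores are computed only for (dst, src) pairs in `edges`, so the
    forward cost is proportional to len(edges) rather than n**2.
    Identity Q/K/V projections are used for brevity.
    """
    n, d = X.shape
    Q, K, V = X, X, X
    # group source nodes by destination node
    nbrs = defaultdict(list)
    for dst, src in edges:
        nbrs[dst].append(src)
    out = np.zeros_like(X)
    for i in range(n):
        js = nbrs.get(i, [i])  # fall back to self-attention if isolated
        scores = Q[i] @ K[js].T / np.sqrt(d)
        w = np.exp(scores - scores.max())  # numerically stable softmax
        w /= w.sum()
        out[i] = w @ V[js]
    return out
```

With only self-loop edges, each node attends solely to itself, so the output reproduces the input features exactly.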

Improving Multi-fidelity Optimization with a Recurring Learning Rate for Hyperparameter Tuning

no code implementations · 26 Sep 2022 · Hyunjae Lee, Gihyeon Lee, Junhwan Kim, Sungjun Cho, Dohyun Kim, Donggeun Yoo

However, it often selects a sub-optimal configuration, as training with a high-performing configuration typically converges slowly in its early phase.

Image Classification · Transfer Learning

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

no code implementations · 7 Sep 2022 · Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee

Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.

Binary Classification · Molecular Property Prediction · +2

Equivariant Hypergraph Neural Networks

1 code implementation · 22 Aug 2022 · Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong

Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations.

Pure Transformers are Powerful Graph Learners

1 code implementation · 6 Jul 2022 · Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.

Graph Learning · Graph Regression · +1

Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning

no code implementations · CVPR 2023 · Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon

The main reason for the ineffectiveness of their method lies in not fully addressing the data imbalance issue, especially in computing the gradients for learning the affine transformation parameters of BN.

Class Incremental Learning · Incremental Learning

On-the-Fly Rectification for Robust Large-Vocabulary Topic Inference

no code implementations · 12 Nov 2021 · Moontae Lee, Sungjun Cho, Kun Dong, David Mimno, David Bindel

Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative.

Community Detection

Practical Correlated Topic Modeling and Analysis via the Rectified Anchor Word Algorithm

no code implementations · IJCNLP 2019 · Moontae Lee, Sungjun Cho, David Bindel, David Mimno

Despite their great scalability on large data and their ability to capture correlations between topics, spectral topic models have not been widely used due to their unreliability on real data and the lack of practical implementations.

Topic Models
