Search Results for author: Hsuan-Tien Lin

Found 44 papers, 18 papers with code

CAD-DA: Controllable Anomaly Detection after Domain Adaptation by Statistical Inference

no code implementations 23 Oct 2023 Vo Nguyen Le Duy, Hsuan-Tien Lin, Ichiro Takeuchi

We propose a novel statistical method for testing the results of anomaly detection (AD) under domain adaptation (DA), which we call CAD-DA -- controllable AD under DA.

Anomaly Detection Domain Adaptation +1

From SMOTE to Mixup for Deep Imbalanced Classification

1 code implementation 29 Aug 2023 Wei-Chao Cheng, Tan-Ha Mai, Hsuan-Tien Lin

Traditionally, data augmentation with the well-known synthetic minority oversampling technique (SMOTE), a data mining approach for imbalanced learning, has been used to improve this generalization.

Classification Data Augmentation +1
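
To make the two augmentation schemes concrete, a minimal sketch of the interpolation step behind SMOTE-style oversampling and behind Mixup is given below; the array names, the k=5 neighborhood, and the Beta(0.2, 0.2) mixing are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the interpolation step behind SMOTE and Mixup.
# Assumes `X_min` holds minority-class feature vectors; names are illustrative.
import numpy as np

def smote_sample(X_min, k=5, rng=np.random.default_rng(0)):
    """Synthesize one minority sample by interpolating toward a random
    neighbor among the k nearest minority points (SMOTE's core idea)."""
    i = rng.integers(len(X_min))
    d = np.linalg.norm(X_min - X_min[i], axis=1)
    neighbors = np.argsort(d)[1:k + 1]          # skip the point itself
    j = rng.choice(neighbors)
    lam = rng.random()
    return X_min[i] + lam * (X_min[j] - X_min[i])

def mixup_batch(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    """Mixup blends both inputs and (one-hot) labels with a Beta-drawn weight."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```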

Score-based Conditional Generation with Fewer Labeled Data by Self-calibrating Classifier Guidance

no code implementations 9 Jul 2023 Paul Kuo-Ming Huang, Si-An Chen, Hsuan-Tien Lin

Score-based generative models (SGMs) are a popular family of deep generative models that achieve leading image generation quality.

Image Generation

Re-Benchmarking Pool-Based Active Learning for Binary Classification

1 code implementation 15 Jun 2023 Po-Yi Lu, Chun-Liang Li, Hsuan-Tien Lin

Active learning is a paradigm that significantly enhances the performance of machine learning models when acquiring labeled data is expensive.

Active Learning Benchmarking +2

Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis

1 code implementation 23 May 2023 Oscar Chew, Hsuan-Tien Lin, Kai-Wei Chang, Kuan-Hao Huang

Recent research has revealed that machine learning models have a tendency to leverage spurious correlations that exist in the training set but may not hold true in general circumstances.

Text Classification

Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation

no code implementations 15 May 2023 Wei-I Lin, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

Our analysis reveals that the efficiency of implicit label sharing is closely related to the performance of existing CLL models.

Weakly-supervised Learning

CLCIFAR: CIFAR-Derived Benchmark Datasets with Human Annotated Complementary Labels

1 code implementation 15 May 2023 Hsiu-Hsuan Wang, Wei-I Lin, Hsuan-Tien Lin

Through extensive benchmark experiments, we discovered a notable decline in performance when transitioning from synthetic datasets to real-world datasets.

Weakly-supervised Learning

SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective

no code implementations 1 Mar 2023 Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Zhaodan Kong, Kwan-Liu Ma

Researchers have proposed various methods for visually interpreting the Convolutional Neural Network (CNN) via saliency maps, which include Class-Activation-Map (CAM) based approaches as a leading family.
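
As background on the CAM family named in the snippet, the sketch below computes a plain class activation map from the final convolutional feature maps and the classifier weights (Zhou et al., 2016); the tensor shapes are assumptions for illustration, and this is not the SUNY method itself.

```python
# Plain Class Activation Map (CAM), the baseline family discussed above.
# Shapes are illustrative: feats is (K, H, W), fc_weights is (num_classes, K).
import numpy as np

def class_activation_map(feats, fc_weights, class_idx):
    """Weight each feature map by the classifier weight for `class_idx`,
    sum over channels, clip negatives (ReLU), and normalize to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], feats, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)
    return cam / cam.max() if cam.max() > 0 else cam
```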

Semi-Supervised Domain Adaptation with Source Label Adaptation

1 code implementation CVPR 2023 Yu-Chu Yu, Hsuan-Tien Lin

Semi-Supervised Domain Adaptation (SSDA) involves learning to classify unseen target data with a few labeled and lots of unlabeled target data, along with many labeled source data from a related domain.

Domain Adaptation Pseudo Label +1

Reducing Training Sample Memorization in GANs by Training with Memorization Rejection

1 code implementation 21 Oct 2022 Andrew Bai, Cho-Jui Hsieh, Wendy Kan, Hsuan-Tien Lin

In this paper, we propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training.

Generative Adversarial Network Memorization
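
A minimal sketch of the rejection step described above: each generated batch is filtered by its distance to the closest training sample, so near-duplicates never reach the GAN update. The flattened L2 metric and the fixed threshold are illustrative assumptions, not the paper's exact criterion.

```python
# Sketch of memorization rejection: drop generated samples that are
# near-duplicates of training samples. Metric and threshold are assumptions.
import torch

def reject_near_duplicates(generated, train_batch, threshold=0.1):
    """Keep only generated samples whose nearest training sample (in
    flattened L2 distance) is farther than `threshold`."""
    g = generated.flatten(1)              # (B_g, D)
    t = train_batch.flatten(1)            # (B_t, D)
    dists = torch.cdist(g, t)             # pairwise distances, (B_g, B_t)
    nearest = dists.min(dim=1).values     # distance to closest training sample
    return generated[nearest > threshold]
```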

Reduction from Complementary-Label Learning to Probability Estimates

no code implementations 20 Sep 2022 Wei-I Lin, Hsuan-Tien Lin

In this paper, we sidestep those limitations with a novel perspective--reduction to probability estimates of complementary classes.

Weakly-supervised Learning

Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration

no code implementations 3 Nov 2021 Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

To improve GAN in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data.

A Unified View of cGANs with and without Classifiers

1 code implementation NeurIPS 2021 Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach

no code implementations 29 Sep 2021 Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.

Active Learning Multi-Label Learning +1

On Training Sample Memorization: Lessons from Benchmarking Generative Modeling with a Large-scale Competition

1 code implementation 6 Jun 2021 Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, Wendy Chih-wen Kan

Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric.

Benchmarking Memorization

Accurate and Clear Precipitation Nowcasting with Consecutive Attention and Rain-map Discrimination

no code implementations 16 Feb 2021 Ashesh, Buo-Fu Chen, Treng-Shi Huang, Boyo Chen, Chia-Tung Chang, Hsuan-Tien Lin

We propose a new deep learning model for precipitation nowcasting that includes both the discrimination and attention techniques.

Weather Forecasting

Adaptive and Generative Zero-Shot Learning

1 code implementation ICLR 2021 Yu-Ying Chou, Hsuan-Tien Lin, Tyng-Luh Liu

In addition, to break the limit of training with images only from seen classes, we design a generative scheme to simultaneously generate virtual class labels and their visual features by sampling and interpolating over seen counterparts.

Generalized Zero-Shot Learning
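
The quoted generative scheme interpolates over seen classes; a minimal sketch of such an interpolation step is shown below, where the class semantic embeddings, visual prototypes, and Beta mixing weight are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of creating a "virtual" class by interpolating two seen classes.
# `sem` holds class semantic embeddings, `feat` holds visual prototypes;
# both arrays and the Beta mixing are illustrative assumptions.
import numpy as np

def sample_virtual_class(sem, feat, rng=np.random.default_rng(0), alpha=2.0):
    i, j = rng.choice(len(sem), size=2, replace=False)
    lam = rng.beta(alpha, alpha)
    virtual_label = lam * sem[i] + (1 - lam) * sem[j]       # virtual class semantics
    virtual_feature = lam * feat[i] + (1 - lam) * feat[j]   # matching visual feature
    return virtual_label, virtual_feature
```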

A Large-scale Study on Training Sample Memorization in Generative Modeling

no code implementations 1 Jan 2021 Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, Wendy Kan

Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric.

Benchmarking Memorization

On the Role of Pre-training for Meta Few-Shot Learning

no code implementations 1 Jan 2021 Chia-You Chen, Hsuan-Tien Lin, Gang Niu, Masashi Sugiyama

One is to (pre-)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples.

Disentanglement Few-Shot Learning
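
The first strategy mentioned in the snippet, pre-training on known classes and then transferring to unknown classes with a few new examples, can be sketched as freezing the pre-trained backbone and fitting a small classification head on the few-shot data; the module names and training loop below are assumptions for illustration.

```python
# Sketch of the "pre-train, then transfer" baseline described above.
# `backbone` is assumed to be a network already trained on the known classes.
import torch
import torch.nn as nn

def transfer_to_new_classes(backbone, feat_dim, num_new_classes, support_x, support_y):
    for p in backbone.parameters():       # freeze the pre-trained feature extractor
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_new_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.01)
    for _ in range(100):                  # fit the new head on the few labeled examples
        opt.zero_grad()
        with torch.no_grad():
            feats = backbone(support_x)
        loss = nn.functional.cross_entropy(head(feats), support_y)
        loss.backward()
        opt.step()
    return head
```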

360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales

1 code implementation 15 Sep 2020 Ashesh, Chu-Song Chen, Hsuan-Tien Lin

Technically, the gaze information can be inferred from two different magnification levels: face orientation and eye orientation.

Gaze Estimation

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels

no code implementations ICML 2020 Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

In weakly supervised learning, the unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions.

Weakly-supervised Learning
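
As a reminder of the terminology, an unbiased risk estimator is any quantity computable from the weakly (here, complementarily) labeled data alone whose expectation equals the ordinary classification risk; the generic statement below is standard background, not the paper's specific estimator.

```latex
% Defining property of an unbiased risk estimator (URE) built from
% weakly labeled samples (x_i, \bar{y}_i):
\mathbb{E}\bigl[\widehat{R}(f)\bigr]
  \;=\; R(f)
  \;=\; \mathbb{E}_{(x,y)\sim p}\bigl[\ell\bigl(f(x), y\bigr)\bigr]
% so minimizing \widehat{R}(f) targets the same risk as fully supervised learning.
```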

SERIL: Noise Adaptive Speech Enhancement using Regularization-based Incremental Learning

1 code implementation 24 May 2020 Chi-Chang Lee, Yu-Chen Lin, Hsuan-Tien Lin, Hsin-Min Wang, Yu Tsao

The results verify that the SERIL model can effectively adjust itself to new noise environments while overcoming the catastrophic forgetting issue.

Incremental Learning Speech Enhancement

Learning from Label Proportions with Consistency Regularization

no code implementations 29 Oct 2019 Kuen-Han Tsai, Hsuan-Tien Lin

The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances.
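
The bag-level supervision described above is usually turned into a proportion-matching loss: the model's average prediction over a bag is pushed toward the bag's known label proportion. The sketch below is a generic version of that loss, not the paper's consistency-regularized objective.

```python
# Sketch of a bag-level proportion loss for learning from label proportions:
# the mean predicted distribution over a bag should match the bag's proportions.
import torch
import torch.nn.functional as F

def proportion_loss(logits, bag_proportion):
    """`logits`: (bag_size, num_classes) model outputs for one bag.
    `bag_proportion`: (num_classes,) known label proportion of the bag."""
    probs = F.softmax(logits, dim=1)
    mean_pred = probs.mean(dim=0)          # average prediction over the bag
    # cross-entropy between the true proportion and the averaged prediction
    return -(bag_proportion * torch.log(mean_pred + 1e-8)).sum()
```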

Benchmarking Tropical Cyclone Rapid Intensification with Satellite Images and Attention-based Deep Models

1 code implementation 25 Sep 2019 Ching-Yuan Bai, Buo-Fu Chen, Hsuan-Tien Lin

In addition, the human-driven nature of such an approach makes it difficult to reproduce and benchmark prediction models.

Benchmarking

A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision

no code implementations ICLR Workshop LLD 2019 Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.

Meta-Learning Multi-Label Learning +1

Active Deep Q-learning with Demonstration

no code implementations 6 Dec 2018 Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama

In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training.

Q-Learning Reinforcement Learning +1
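
A minimal sketch of the active-querying idea described above: the agent asks for an expert demonstration only when its own action-value estimates look uncertain. The uncertainty measure here, a small gap between the top two Q-values, is an illustrative stand-in rather than the paper's query criteria.

```python
# Sketch of actively querying demonstrations during Q-learning.
# Uncertainty here = small margin between the two largest Q-values
# (an illustrative criterion, not necessarily the paper's).
import torch

def choose_action(q_values, expert_action_fn, state, margin=0.05):
    """Ask the expert only when the agent is uncertain about this state."""
    top2 = torch.topk(q_values, k=2).values
    if (top2[0] - top2[1]).item() < margin:      # uncertain: query a demonstration
        return expert_action_fn(state), True
    return int(torch.argmax(q_values)), False    # confident: act on its own
```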

REFUEL: Exploring Sparse Features in Deep Reinforcement Learning for Fast Disease Diagnosis

no code implementations NeurIPS 2018 Yu-Shao Peng, Kai-Fu Tang, Hsuan-Tien Lin, Edward Chang

This paper proposes REFUEL, a reinforcement learning method with two techniques, reward shaping and feature rebuilding, to improve the performance of online symptom checking for disease diagnosis.

Reinforcement Learning (RL)

Compatibility Family Learning for Item Recommendation and Generation

1 code implementation 2 Dec 2017 Yong-Siang Shih, Kai-Yueh Chang, Hsuan-Tien Lin, Min Sun

In our learned space, we introduce a novel Projected Compatibility Distance (PCD) function which is differentiable and ensures diversity by aiming for at least one prototype to be close to a compatible item, whereas none of the prototypes are close to an incompatible item.

Generative Adversarial Network
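
The quoted property, that at least one prototype should be close to a compatible item while none are close to an incompatible one, is naturally captured by a min-over-prototypes distance; the sketch below shows that generic form, with the shapes and the squared-L2 choice as illustrative assumptions rather than the exact PCD.

```python
# Sketch of a min-over-prototypes distance: compatibility is decided by the
# closest of K learned prototypes, so one close prototype suffices.
import torch

def min_prototype_distance(item_embedding, prototypes):
    """`item_embedding`: (D,) embedding of the candidate item.
    `prototypes`: (K, D) prototypes projected from the query item."""
    d = ((prototypes - item_embedding) ** 2).sum(dim=1)  # squared L2 to each prototype
    return d.min()   # differentiable; small iff at least one prototype is close
```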

Soft Methodology for Cost-and-error Sensitive Classification

no code implementations 26 Oct 2017 Te-Kang Jan, Da-Wei Wang, Chi-Hung Lin, Hsuan-Tien Lin

Many real-world data mining applications involve varying costs for different types of classification errors and thus call for cost-sensitive classification algorithms.

Classification General Classification

libact: Pool-based Active Learning in Python

5 code implementations 1 Oct 2017 Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, Hsuan-Tien Lin

libact is a Python package designed to make active learning easier for general users.

Active Learning
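
A short pool-based loop with libact is sketched below; it follows the package's documented Dataset / query-strategy / model interfaces, but the exact class names and arguments should be verified against the installed version.

```python
# Sketch of a pool-based active learning loop with libact on toy data.
# Class names follow libact's documented interface; verify against your version.
import numpy as np
from libact.base.dataset import Dataset
from libact.models import LogisticRegression
from libact.query_strategies import UncertaintySampling

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                      # toy pool of 100 examples
y_true = (X[:, 0] > 0).astype(int)                 # hidden ground truth for the "oracle"
y = [int(y_true[i]) if i < 10 else None for i in range(100)]  # only 10 labeled at start

dataset = Dataset(X, y)
query_strategy = UncertaintySampling(dataset, model=LogisticRegression())
model = LogisticRegression()

for _ in range(20):                                # query 20 labels from the oracle
    ask_id = query_strategy.make_query()           # most uncertain unlabeled example
    dataset.update(ask_id, int(y_true[ask_id]))    # simulate the oracle's answer
    model.train(dataset)                           # retrain on the enlarged labeled set
```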

Active Sampling of Pairs and Points for Large-scale Linear Bipartite Ranking

no code implementations 24 Aug 2017 Wei-Yuan Shen, Hsuan-Tien Lin

In this paper, we develop a novel active sampling scheme within the pair-wise approach to conduct bipartite ranking efficiently.

Active Learning

Cost-Sensitive Reference Pair Encoding for Multi-Label Learning

1 code implementation 29 Nov 2016 Yao-Yuan Yang, Kuan-Hao Huang, Chih-Wei Chang, Hsuan-Tien Lin

Label space expansion for multi-label classification (MLC) is a methodology that encodes the original label vectors to higher dimensional codes before training and decodes the predicted codes back to the label vectors during testing.

Active Learning Multi-Label Classification +1
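
The encode-train-decode pipeline described in the snippet can be sketched generically: label vectors are mapped to longer codes, a multi-output regressor learns the codes, and predicted codes are decoded back to label vectors. The random-projection encoder and nearest-candidate decoder below are illustrative placeholders, not CSRPE's cost-sensitive reference-pair encoding.

```python
# Generic sketch of label space expansion for multi-label classification:
# encode label vectors to longer codes, learn the codes, decode predictions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # toy features
Y = (rng.random(size=(200, 5)) < 0.3).astype(float)   # toy label vectors (5 labels)

M = rng.normal(size=(5, 32))                # encoder: 5-dim labels -> 32-dim codes
codes = Y @ M                               # encode training labels
regressor = Ridge().fit(X, codes)           # learn to predict codes from features

def decode(pred_code, candidates=Y, M=M):
    """Decode by returning the nearest candidate label vector in code space."""
    dists = np.linalg.norm(candidates @ M - pred_code, axis=1)
    return candidates[np.argmin(dists)]

y_hat = decode(regressor.predict(X[:1])[0])  # predicted label vector for one example
```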

Cost-Sensitive Deep Learning with Layer-Wise Cost Estimation

no code implementations 16 Nov 2016 Yu-An Chung, Shao-Wen Yang, Hsuan-Tien Lin

While deep neural networks have succeeded in several visual applications, such as object recognition, detection, and localization, by reaching very high classification accuracies, many real-world applications demand varying costs for different types of misclassification errors, thus requiring cost-sensitive classification algorithms.

Classification General Classification +1

Can Active Learning Experience Be Transferred?

no code implementations 2 Aug 2016 Hong-Min Chu, Hsuan-Tien Lin

Empirical studies demonstrate that the learned experience not only is competitive with existing strategies on most single datasets, but also can be transferred across datasets to improve the performance on future learning tasks.

Active Learning

Automatic Bridge Bidding Using Deep Reinforcement Learning

no code implementations 12 Jul 2016 Chih-Kuan Yeh, Hsuan-Tien Lin

Existing artificial intelligence systems for bridge bidding rely on and are thus restricted by human-designed bidding systems or features.

Decision Making Reinforcement Learning +1

Cost-Sensitive Label Embedding for Multi-Label Classification

2 code implementations 30 Mar 2016 Kuan-Hao Huang, Hsuan-Tien Lin

Furthermore, extensive experimental results demonstrate that CLEMS is significantly better than a wide spectrum of existing LE algorithms and state-of-the-art cost-sensitive algorithms across different cost functions.

Classification General Classification +1

Cost-aware Pre-training for Multiclass Cost-sensitive Deep Learning

no code implementations 30 Nov 2015 Yu-An Chung, Hsuan-Tien Lin, Shao-Wen Yang

Deep learning is nowadays one of the most prominent machine learning techniques, achieving state-of-the-art performance on a broad range of applications where automatic feature extraction is needed.

General Classification

Rivalry of Two Families of Algorithms for Memory-Restricted Streaming PCA

no code implementations 4 Jun 2015 Chun-Liang Li, Hsuan-Tien Lin, Chi-Jen Lu

In this paper, we analyze the convergence rate of a representative algorithm with decayed learning rate (Oja and Karhunen, 1985) in the first family for the general $k>1$ case.

Vocal Bursts Valence Prediction
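
For context on the first family of algorithms analyzed here, a minimal Oja-style streaming PCA update with a decayed learning rate is sketched below for the top component (k = 1); the 1/t decay is an illustrative schedule, not necessarily the one analyzed in the paper.

```python
# Minimal Oja-style streaming PCA update (top component) with decayed step size.
# The 1/t decay is an illustrative schedule.
import numpy as np

def streaming_top_component(stream, dim, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for t, x in enumerate(stream, start=1):
        eta = 1.0 / t                        # decayed learning rate
        y = w @ x
        w += eta * y * (x - y * w)           # Oja's rule
        w /= np.linalg.norm(w)               # keep the estimate on the unit sphere
    return w
```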

Feature-aware Label Space Dimension Reduction for Multi-label Classification

no code implementations NeurIPS 2012 Yao-Nan Chen, Hsuan-Tien Lin

In addition, the approach can be extended to a kernelized version that allows the use of sophisticated feature combinations to assist LSDR.

Classification Compressive Sensing +3
