Search Results for author: Cai Rong Zhao

Found 6 papers, 4 papers with code

Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection

no code implementations · 6 Jan 2024 · Yuanpeng Tu, Boshen Zhang, Liang Liu, Yuxi Li, Xuhai Chen, Jiangning Zhang, Yabiao Wang, Chengjie Wang, Cai Rong Zhao

Industrial anomaly detection is generally addressed as an unsupervised task that aims at locating defects with only normal training samples.

Anomaly Detection

EA-HAS-Bench: Energy-Aware Hyperparameter and Architecture Search Benchmark

1 code implementation · The Eleventh International Conference on Learning Representations (ICLR) 2023 · Shuguang Dou, Xinyang Jiang, Cai Rong Zhao, Dongsheng Li

The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality.

AutoML

Learning with Noisy Labels via Self-supervised Adversarial Noisy Masking

1 code implementation · CVPR 2023 · Yuanpeng Tu, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Jiangning Zhang, Yabiao Wang, Chengjie Wang, Cai Rong Zhao

Collecting large-scale datasets is crucial for training deep models; annotating the data, however, inevitably yields noisy labels, which pose challenges for deep learning algorithms.

Ranked #2 on Image Classification on Clothing1M (using extra training data)

Learning with noisy labels

Self-Supervised Likelihood Estimation with Energy Guidance for Anomaly Segmentation in Urban Scenes

1 code implementation · 14 Feb 2023 · Yuanpeng Tu, Yuxi Li, Boshen Zhang, Liang Liu, Jiangning Zhang, Yabiao Wang, Cai Rong Zhao

Based on the proposed estimators, we devise an adaptive self-supervised training framework, which exploits the contextual reliance and estimated likelihood to refine mask annotations in anomaly areas.
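The snippet above describes refining mask annotations in anomaly areas using an estimated likelihood. A minimal illustrative sketch of that idea (not the authors' implementation; the function name and threshold are assumptions) could look like:

```python
import numpy as np

def refine_anomaly_mask(coarse_mask, likelihood, keep_thresh=0.5):
    """Keep a coarse-mask pixel as anomalous only if its estimated
    anomaly likelihood is high enough; low-likelihood pixels are dropped.

    coarse_mask: (H, W) binary array, the initial anomaly annotation
    likelihood:  (H, W) floats in [0, 1], per-pixel anomaly likelihood
    keep_thresh: assumed cutoff; a real system would tune or adapt it
    """
    refined = coarse_mask.astype(bool) & (likelihood >= keep_thresh)
    return refined.astype(np.uint8)

mask = np.array([[1, 1], [1, 0]])
lik = np.array([[0.9, 0.2], [0.7, 0.8]])
refined = refine_anomaly_mask(mask, lik)
```

In this toy case only the two annotated pixels whose likelihood clears the threshold survive; the paper's framework additionally exploits contextual reliance, which this sketch omits.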

Anomaly Detection · Autonomous Driving

Robust Learning with Adaptive Sample Credibility Modeling

no code implementations · 29 Sep 2021 · Boshen Zhang, Yuxi Li, Yuanpeng Tu, Yabiao Wang, Yang Xiao, Cai Rong Zhao, Chengjie Wang

For the clean set, we deliberately design a memory-based modulation scheme that dynamically adjusts each sample's contribution during training according to its historical credibility sequence, thereby alleviating the effect of potential hard noisy samples remaining in the clean set.
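A memory-based modulation scheme of this kind can be sketched as a running per-sample credibility history that yields loss weights. This is a hypothetical illustration, not the paper's method: the class name, the EMA update, and the choice of momentum are all assumptions.

```python
import numpy as np

class CredibilityModulator:
    """Toy sketch: maintain an exponential moving average of each
    sample's per-epoch credibility (e.g. agreement between prediction
    and label) and use it to down-weight likely-noisy samples."""

    def __init__(self, num_samples, momentum=0.9):
        self.momentum = momentum
        self.history = np.ones(num_samples)  # start fully credible

    def update(self, indices, credibility):
        # blend the new credibility scores into the running history
        self.history[indices] = (self.momentum * self.history[indices]
                                 + (1 - self.momentum) * credibility)

    def weights(self, indices):
        # per-sample loss weights derived from historical credibility
        return self.history[indices]

mod = CredibilityModulator(num_samples=3)
mod.update(np.array([0]), np.array([0.0]))  # sample 0 looks noisy this epoch
w = mod.weights(np.array([0, 1]))
```

After one update, sample 0's weight decays toward zero while untouched samples keep full weight, so a weighted loss would gradually discount samples with persistently low credibility.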

Denoising
