no code implementations • 13 Apr 2024 • Zhenwei Wang, Qiule Sun, Bingbing Zhang, Pengfei Wang, Jianxin Zhang, Qiang Zhang
The other is to perform classification on the feature distribution of visual tokens from the vision encoder.
no code implementations • 31 May 2023 • Jianxin Zhang, Clayton Scott
Label embedding is a framework for multiclass classification problems where each label is represented by a distinct vector of some fixed dimension, and training involves matching model output to the vector representing the correct label.
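A minimal sketch of this setup (the random embedding vectors, the squared-error matching loss, and the nearest-neighbor decoding are illustrative assumptions, not the paper's specific construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical label-embedding setup: 5 classes, each represented by a
# distinct 8-dimensional vector (here random; could be any fixed encoding).
num_classes, dim = 5, 8
label_embeddings = rng.normal(size=(num_classes, dim))

def embedding_loss(model_output, label):
    """Training matches model output to the correct label's vector;
    here via squared Euclidean distance."""
    return float(np.sum((model_output - label_embeddings[label]) ** 2))

def predict(model_output):
    """Decode a prediction as the nearest label embedding."""
    dists = np.sum((label_embeddings - model_output) ** 2, axis=1)
    return int(np.argmin(dists))

# If the model outputs exactly a label's vector, the loss is zero and
# decoding recovers that label.
out = label_embeddings[3]
print(embedding_loss(out, 3))  # 0.0
print(predict(out))            # 3
```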
1 code implementation • 4 Mar 2022 • Jianxin Zhang, Yutong Wang, Clayton Scott
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels.
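The LLP supervision signal described above can be sketched as follows (the bag sizes, the proportion-matching squared loss, and all names are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LLP data: instance-level labels exist but are hidden from
# the learner; only each bag's class proportions are observed.
num_classes = 3
bags = [rng.integers(0, num_classes, size=n) for n in (4, 6, 5)]

def bag_proportions(labels, num_classes):
    """The observed supervision in LLP: fraction of each class in a bag."""
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

def proportion_loss(pred_probs, observed_props):
    """A simple proportion-matching loss: compare the bag-averaged
    predicted class probabilities to the observed proportions."""
    avg = pred_probs.mean(axis=0)
    return float(np.sum((avg - observed_props) ** 2))

props = [bag_proportions(b, num_classes) for b in bags]
print(props[0], props[0].sum())  # each proportion vector sums to 1
```

A model whose bag-averaged predictions match the observed proportions incurs zero loss under this sketch, which is the sense in which the proportions act as weak labels.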
1 code implementation • NeurIPS 2020 • Clayton Scott, Jianxin Zhang
Learning from label proportions (LLP) is a weakly supervised setting for classification in which unlabeled training instances are grouped into bags, and each bag is annotated with the proportion of each class occurring in that bag.
no code implementations • 10 Oct 2019 • Clayton Scott, Jianxin Zhang
We study binary classification in the setting where the learner is presented with multiple corrupted training samples, with possibly different sample sizes and degrees of corruption, and introduce an approach based on minimizing a weighted combination of corruption-corrected empirical risks.
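One way to sketch this idea, assuming each corrupted sample suffers class-conditional label noise with known flip rates and using the standard unbiased loss correction for that noise model (the paper's exact correction and weighting scheme may differ):

```python
import numpy as np

def corrected_loss(loss_y, loss_flip, rho_pos, rho_neg, y):
    """Unbiased correction for class-conditional label noise: loss_y is the
    base loss on the observed label y in {-1, +1}, loss_flip on the flipped
    label; rho_pos/rho_neg are the flip rates of the two classes."""
    if y == 1:
        num = (1 - rho_neg) * loss_y - rho_pos * loss_flip
    else:
        num = (1 - rho_pos) * loss_y - rho_neg * loss_flip
    return num / (1 - rho_pos - rho_neg)

def weighted_risk(samples, weights, predict, base_loss):
    """Weighted combination of corruption-corrected empirical risks:
    one corrected risk per corrupted sample, combined with given weights."""
    total = 0.0
    for w, (X, Y, rho_pos, rho_neg) in zip(weights, samples):
        risks = [
            corrected_loss(base_loss(predict(x), y),
                           base_loss(predict(x), -y),
                           rho_pos, rho_neg, y)
            for x, y in zip(X, Y)
        ]
        total += w * float(np.mean(risks))
    return total
```

With zero flip rates the corrected loss reduces to the base loss, so a clean sample contributes its ordinary empirical risk; the weights would be chosen to reflect each sample's size and degree of corruption.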