Search Results for author: Jihong Ouyang

Found 11 papers, 2 papers with code

Learning with Partial Labels from Semi-supervised Perspective

1 code implementation • 24 Nov 2022 • Ximing Li, Yuanzhi Jiang, Changchun Li, Yiyuan Wang, Jihong Ouyang

Inspired by the impressive success of deep Semi-Supervised (SS) learning, we transform the PL learning problem into the SS learning problem, and propose a novel PL learning method, namely Partial Label learning with Semi-supervised Perspective (PLSP).

Contrastive Learning • Partial Label Learning • +1
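The core reduction in the abstract above — treating partial-label (PL) data as semi-supervised (SS) data — can be illustrated with a minimal sketch. This is our own hypothetical fragment, not the PLSP algorithm itself: it simply promotes instances whose most confident candidate label passes a threshold to pseudo-labeled examples and leaves the rest as unlabeled.

```python
import numpy as np

def partial_to_semisupervised(candidate_sets, scores, threshold=0.9):
    """Illustrative PL-to-SS reduction (hypothetical; not the paper's PLSP).

    candidate_sets: list of candidate-label sets, one per instance.
    scores: (n, k) array of model confidences over k classes.
    Confident instances become pseudo-labeled; the rest join the unlabeled pool.
    """
    labeled, unlabeled = [], []
    for i, cands in enumerate(candidate_sets):
        cand = sorted(cands)              # restrict attention to candidates only
        probs = scores[i, cand]
        j = int(np.argmax(probs))
        if probs[j] >= threshold:
            labeled.append((i, cand[j]))  # confident -> pseudo-labeled
        else:
            unlabeled.append(i)           # ambiguous -> unlabeled pool
    return labeled, unlabeled
```

Any SS method can then be run on the resulting labeled/unlabeled split.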

Weakly Supervised Prototype Topic Model with Discriminative Seed Words: Modifying the Category Prior by Self-exploring Supervised Signals

no code implementations • 20 Nov 2021 • Bing Wang, Yue Wang, Ximing Li, Jihong Ouyang

The recent generative dataless methods construct document-specific category priors by using seed word occurrences only; however, such category priors often contain very limited and even noisy supervised signals.

Text Classification • +1

Variational Wasserstein Barycenters with c-Cyclical Monotonicity

no code implementations • 22 Oct 2021 • Jinjin Chi, Zhiyao Yang, Jihong Ouyang, Ximing Li

The basic idea is to introduce a variational distribution as the approximation of the true continuous barycenter, so as to frame barycenter computation as an optimization problem in which the parameters of the variational distribution adjust the proxy distribution to approximate the true barycenter.

Stochastic Optimization
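Framed this way, the optimization sketched in the abstract takes roughly the following form (our own notation, not the paper's: $q_\theta$ is the variational distribution, $\mu_i$ the input measures, $\lambda_i$ their weights, and $W_c$ the optimal transport cost with ground cost $c$):

```latex
\min_{\theta} \; \sum_{i=1}^{N} \lambda_i \, W_c\!\left(q_\theta, \mu_i\right),
\qquad \sum_{i=1}^{N} \lambda_i = 1, \quad \lambda_i \ge 0
```

Minimizing over $\theta$ pulls the proxy distribution $q_\theta$ toward the weighted barycenter of the $\mu_i$.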

Who Is Your Right Mixup Partner in Positive and Unlabeled Learning

no code implementations • ICLR 2022 • Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang

In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.

Data Augmentation
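The mixup operation underlying P3Mix is standard and easy to state; the sketch below shows only that generic interpolation step, with the paper's heuristic partner selection deliberately left out (it is the paper's contribution and not reproduced here).

```python
import numpy as np

def mixup(x_u, y_u, x_p, y_p, lam):
    """Standard mixup interpolation: blend an unlabeled example (with its
    soft pseudo-label y_u) and a chosen partner example (x_p, y_p).
    lam in [0, 1] controls the blend; partner choice is the hard part
    and is what P3Mix's heuristic addresses."""
    x = lam * x_u + (1.0 - lam) * x_p
    y = lam * y_u + (1.0 - lam) * y_p
    return x, y
```

In practice `lam` is usually drawn from a Beta distribution rather than fixed.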

Semi-Supervised Text Classification with Balanced Deep Representation Distributions

no code implementations • ACL 2021 • Changchun Li, Ximing Li, Jihong Ouyang

They initialize the deep classifier by training over labeled texts, then alternately predict pseudo-labels for the unlabeled texts and retrain the deep classifier over the mixture of labeled and pseudo-labeled texts.

Semi-Supervised Text Classification • Text Classification
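The self-training loop described in the abstract is a classic recipe, which can be sketched generically as follows. A nearest-centroid classifier stands in for the deep classifier here; this is a hypothetical illustration of the loop itself, not the paper's balanced-representation variant.

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(classes, centroids, X):
    """Assign each row of X to its nearest class centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Generic self-training: (1) fit on labeled data, (2) pseudo-label the
    unlabeled data, (3) refit on the union, repeat."""
    classes, cents = fit_centroids(X_lab, y_lab)
    for _ in range(rounds):
        pseudo = predict(classes, cents, X_unlab)       # step 2: pseudo-label
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        classes, cents = fit_centroids(X_all, y_all)    # step 3: retrain on mixture
    return classes, cents
```

The known failure mode of this loop, which the paper targets, is confirmation bias: early pseudo-label errors get reinforced in later rounds.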

Channel-by-Channel Demosaicking Networks with Embedded Spectral Correlation

no code implementations • 24 Jun 2019 • Niu Yan, Jihong Ouyang

Although the total size of our model is significantly smaller than that of state-of-the-art demosaicking networks, it achieves substantially better demosaicking quality at lower computational cost, as validated by extensive experiments.

Demosaicking

Topic representation: finding more representative words in topic models

no code implementations • 23 Oct 2018 • Jinjin Chi, Jihong Ouyang, Changchun Li, Xueyang Dong, Xi-Ming Li, Xinhua Wang

The top word list, i.e., the top-M words with the highest marginal probability in a given topic, is the standard topic representation in topic models.

Topic Models
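The "top word list" baseline the abstract refers to is simple enough to state directly. The sketch below assumes a topic-word probability matrix `phi` (topics × vocabulary), a common convention though not taken from the paper itself.

```python
import numpy as np

def top_words(phi, vocab, M=3):
    """Standard top-M topic representation: for each topic (row of phi),
    return the M vocabulary words with the highest marginal probability."""
    order = np.argsort(-phi, axis=1)[:, :M]   # descending sort per topic
    return [[vocab[j] for j in row] for row in order]
```

The paper's point is that the highest-probability words are not always the most representative ones, which is what its alternative representation addresses.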

Low Cost Edge Sensing for High Quality Demosaicking

1 code implementation • 3 Jun 2018 • Yan Niu, Jihong Ouyang, Wanli Zuo, Fuxin Wang

Compared to methods of similar computational cost, our method achieves substantially higher accuracy, whereas compared to methods of similar accuracy, it has significantly lower cost.

Demosaicking
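"Edge sensing" in demosaicking conventionally means choosing the interpolation direction by comparing local gradients. The sketch below is the textbook low-cost heuristic for recovering green at a red (or blue) pixel of a Bayer CFA, shown as background for the title; it is not the paper's specific filter.

```python
import numpy as np

def green_at_red(cfa, i, j):
    """Edge-directed green interpolation at a red pixel of a Bayer CFA
    (textbook heuristic, not the paper's method): interpolate along the
    direction with the smaller green gradient to avoid blurring edges."""
    dh = abs(cfa[i, j - 1] - cfa[i, j + 1])   # horizontal green difference
    dv = abs(cfa[i - 1, j] - cfa[i + 1, j])   # vertical green difference
    if dh < dv:
        return (cfa[i, j - 1] + cfa[i, j + 1]) / 2.0   # edge runs horizontally
    if dv < dh:
        return (cfa[i - 1, j] + cfa[i + 1, j]) / 2.0   # edge runs vertically
    return (cfa[i, j - 1] + cfa[i, j + 1] + cfa[i - 1, j] + cfa[i + 1, j]) / 4.0
```

The cost is two subtractions and one comparison per pixel, which is why edge-sensing schemes are attractive for low-cost pipelines.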

Integrating Topic Modeling with Word Embeddings by Mixtures of vMFs

no code implementations • COLING 2016 • Xi-Ming Li, Jinjin Chi, Changchun Li, Jihong Ouyang, Bo Fu

Gaussian LDA integrates topic modeling with word embeddings by replacing discrete topic distribution over word types with multivariate Gaussian distribution on the embedding space.

Topic Models • Word Embeddings
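The title's von Mises-Fisher (vMF) mixtures model topics as directional distributions over unit-normalized word embeddings, scoring an embedding by its cosine alignment with a topic mean direction. A minimal illustrative fragment (our own, not the paper's full model): with a shared concentration `kappa`, the vMF normalizer is the same for every topic and cancels when comparing topics, so the unnormalized log-density suffices.

```python
import numpy as np

def vmf_log_density(x, mu, kappa):
    """Unnormalized von Mises-Fisher log-density on the unit sphere:
    proportional to kappa * <mu, x>. With a shared kappa, the normalizer
    is identical across topics and cancels in comparisons."""
    return kappa * float(mu @ x)

def most_likely_topic(x, mus, kappa=10.0):
    """Which vMF topic direction best explains a word embedding x?
    Both x and the topic means are unit-normalized before scoring."""
    x = x / np.linalg.norm(x)
    scores = [vmf_log_density(x, mu / np.linalg.norm(mu), kappa) for mu in mus]
    return int(np.argmax(scores))
```

Using vMF rather than Gaussian components matches the common practice of comparing word embeddings by cosine similarity rather than Euclidean distance.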
