no code implementations • Findings (EMNLP) 2021 • Yiming Wang, Ximing Li, Xiaotang Zhou, Jihong Ouyang
Short text has nowadays become an increasingly popular form of text data, e.g., Twitter posts, news titles, and product reviews.
no code implementations • 18 Dec 2023 • Jihong Ouyang, Zhiyao Yang, Silong Liang, Bing Wang, Yimeng Wang, Ximing Li
We also propose an ABSA-specific augmentation method to create such augmented examples.
Aspect-Based Sentiment Analysis (ABSA) +2
1 code implementation • 24 Nov 2022 • Ximing Li, Yuanzhi Jiang, Changchun Li, Yiyuan Wang, Jihong Ouyang
Inspired by the impressive success of deep Semi-Supervised (SS) learning, we transform the Partial Label (PL) learning problem into an SS learning problem and propose a novel PL learning method, namely Partial Label learning with Semi-supervised Perspective (PLSP).
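A minimal sketch of that transformation under assumed names (illustrative only, not the authors' PLSP implementation): instances whose prediction falls confidently inside their candidate label set are treated as pseudo-labeled, and the rest are handed to a semi-supervised learner as unlabeled data.

```python
import numpy as np

def split_for_semi_supervised(probs, candidate_sets, threshold=0.9):
    """Hypothetical PL-to-SS conversion sketch (illustrative, not PLSP itself).

    probs:          (n, k) predicted class probabilities for n instances
    candidate_sets: list of n iterables, the candidate labels of each instance
    """
    labeled_idx, pseudo_labels, unlabeled_idx = [], [], []
    for i, cands in enumerate(candidate_sets):
        # restrict the prediction to the instance's candidate label set
        scores = {c: probs[i, c] for c in cands}
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:
            labeled_idx.append(i)
            pseudo_labels.append(best)
        else:
            unlabeled_idx.append(i)   # left for the semi-supervised learner
    return np.array(labeled_idx), np.array(pseudo_labels), np.array(unlabeled_idx)
```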
no code implementations • 20 Nov 2021 • Bing Wang, Yue Wang, Ximing Li, Jihong Ouyang
The recent generative dataless methods construct document-specific category priors from seed word occurrences only; however, such category priors often contain very limited and even noisy supervised signals.
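As a toy illustration of a seed-word-based category prior (the function and smoothing choice are assumptions for exposition, not the paper's estimator), one can simply normalize smoothed seed word counts per document:

```python
from collections import Counter

def category_prior(doc_tokens, seed_words, smoothing=1.0):
    """Toy document-specific category prior from seed word occurrences."""
    counts = Counter(doc_tokens)
    scores = {cat: smoothing + sum(counts[w] for w in seeds)
              for cat, seeds in seed_words.items()}
    total = sum(scores.values())
    return {cat: score / total for cat, score in scores.items()}

# A document mentioning "goal" twice leans toward the "sports" category.
prior = category_prior(
    ["the", "late", "goal", "sealed", "another", "goal", "record"],
    {"sports": {"goal", "match"}, "politics": {"election", "vote"}},
)
```

Such priors are sparse and noisy precisely because they depend on a handful of seed word occurrences per document.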
no code implementations • 22 Oct 2021 • Jinjin Chi, Zhiyao Yang, Jihong Ouyang, Ximing Li
The basic idea is to introduce a variational distribution as an approximation of the true continuous barycenter, framing barycenter computation as an optimization problem in which the parameters of the variational distribution are adjusted so that this proxy distribution closely matches the barycenter.
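In rough symbols (notation assumed here rather than taken from the paper), the variational formulation fits a parametric proxy q_\theta by minimizing its weighted Wasserstein distance to the input measures \mu_1, ..., \mu_m:

```latex
\min_{\theta} \; \sum_{i=1}^{m} \lambda_i \, W_2^2\!\left(q_\theta, \mu_i\right),
\qquad \lambda_i \ge 0, \quad \sum_{i=1}^{m} \lambda_i = 1,
```

so that the optimized q_\theta serves as the approximation of the continuous barycenter.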
no code implementations • ICLR 2022 • Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang
In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.
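Mixup itself is standard; a loose caricature of mixing an unlabeled point toward a positive one follows (the pair-selection heuristic and the exact "partially positive" targets are the paper's contribution and are not reproduced here):

```python
import numpy as np

def mixup_toward_positive(x_unlabeled, x_positive, alpha=0.75, rng=None):
    """Generic mixup between an unlabeled and a positive example (sketch only)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)            # stay closer to the unlabeled point
    x_mix = lam * x_unlabeled + (1.0 - lam) * x_positive
    y_mix = 1.0 - lam                    # partially positive soft supervision
    return x_mix, y_mix
```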
no code implementations • ACL 2021 • Changchun Li, Ximing Li, Jihong Ouyang
They initialize the deep classifier by training it over the labeled texts, and then alternately predict pseudo-labels for the unlabeled texts and retrain the deep classifier over the mixture of labeled and pseudo-labeled texts.
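That standard self-training loop can be sketched with an off-the-shelf classifier standing in for the deep text model (scikit-learn used purely for illustration; features, thresholds, and the stopping rule are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=5, threshold=0.9):
    """Pseudo-labeling self-training loop of the kind described above."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        probs = clf.predict_proba(X_unlab)
        conf = probs.max(axis=1)
        pseudo = clf.classes_[probs.argmax(axis=1)]
        keep = conf >= threshold              # keep only confident pseudo-labels
        if not keep.any():
            break
        X_mix = np.vstack([X_lab, X_unlab[keep]])
        y_mix = np.concatenate([y_lab, pseudo[keep]])
        clf = LogisticRegression(max_iter=1000).fit(X_mix, y_mix)
    return clf
```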
no code implementations • 24 Jun 2019 • Niu Yan, Jihong Ouyang
Although the total size of our model is significantly smaller than that of state-of-the-art demosaicking networks, it achieves substantially better demosaicking quality at a lower computational cost, as validated by extensive experiments.
no code implementations • 23 Oct 2018 • Jinjin Chi, Jihong Ouyang, Changchun Li, Xueyang Dong, Xi-Ming Li, Xinhua Wang
The top word list, i.e., the top-M words with the highest marginal probabilities in a given topic, is the standard topic representation in topic models.
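Extracting such a list from a fitted topic-word matrix is straightforward; a small sketch (the matrix name and shapes are assumptions):

```python
import numpy as np

def top_word_lists(topic_word_probs, vocab, m=10):
    """Top-M words per topic from a (num_topics, vocab_size) probability matrix."""
    order = np.argsort(-topic_word_probs, axis=1)[:, :m]
    return [[vocab[j] for j in row] for row in order]
```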
1 code implementation • 3 Jun 2018 • Yan Niu, Jihong Ouyang, Wanli Zuo, Fuxin Wang
Compared to methods of similar computational cost, our method achieves substantially higher accuracy, whereas compared to methods of similar accuracy, it has a significantly lower cost.
no code implementations • COLING 2016 • Xi-Ming Li, Jinjin Chi, Changchun Li, Jihong Ouyang, Bo Fu
Gaussian LDA integrates topic modeling with word embeddings by replacing the discrete topic distributions over word types with multivariate Gaussian distributions in the embedding space.
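Concretely, where standard LDA draws a word type from a per-topic categorical distribution, Gaussian LDA draws the word's pretrained embedding from a per-topic Gaussian; in rough notation (symbols assumed for exposition):

```latex
\text{LDA:}\quad w_{dn} \mid z_{dn}=k \;\sim\; \mathrm{Categorical}(\phi_k)
\qquad\Longrightarrow\qquad
\text{Gaussian LDA:}\quad v_{w_{dn}} \mid z_{dn}=k \;\sim\; \mathcal{N}(\mu_k, \Sigma_k),
```

where v_w is the fixed embedding of word w and (\mu_k, \Sigma_k) are the mean and covariance of topic k.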