1 code implementation • COLING 2022 • Wei Xiang, Zhenglin Wang, Lu Dai, Bang Wang
As the first trial of using this new paradigm for IDRR, this paper develops a Connective-cloze Prompt (ConnPrompt) to transform the relation prediction task into a connective-cloze task.
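A connective-cloze prompt can be illustrated with a minimal sketch: the two discourse arguments are joined by a masked connective slot, and a masked language model's prediction for that slot is mapped to a relation class. The template, mask token, and connective-to-relation mapping below are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical connective-cloze template (the template wording, mask token,
# and answer mapping are illustrative, not ConnPrompt's exact choices).
def conn_cloze_prompt(arg1: str, arg2: str, mask_token: str = "[MASK]") -> str:
    # Place a masked connective slot between the two arguments; an MLM
    # would then predict a connective word filling the slot.
    return f"{arg1} {mask_token} {arg2}"

# Predicted connectives are mapped back to discourse relation senses.
CONNECTIVE_TO_RELATION = {
    "because": "Contingency",
    "but": "Comparison",
    "and": "Expansion",
    "then": "Temporal",
}

prompt = conn_cloze_prompt("He was tired.", "He kept working.")
# The MLM scores candidate connectives for the [MASK] position; e.g. a
# prediction of "but" would map to the Comparison relation.
```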
1 code implementation • Findings (ACL) 2022 • Wei Xiang, Bang Wang, Lu Dai, Yijun Mo
Prior studies use one attention mechanism to improve contextual semantic representation learning for implicit discourse relation recognition (IDRR).
no code implementations • 27 Mar 2024 • Erjia Chen, Bang Wang
In this paper, we challenge this equal-training assumption and propose a novel one-backpropagation updating strategy, which keeps the normal gradient backpropagation for the item encoding tower but cuts off the backpropagation for the user encoding tower.
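In a two-tower model, cutting off backpropagation through one tower can be sketched by zeroing that tower's gradient contribution. The toy model below (linear towers, dot-product score, hand-derived gradients) is a minimal NumPy illustration of the idea, not the paper's actual architecture or training code.

```python
import numpy as np

# Toy two-tower scorer: score = (W_u @ u) . (W_i @ i)
# W_u / W_i are hypothetical user- and item-tower weights.
rng = np.random.default_rng(0)
W_u = rng.normal(size=(4, 4))  # user encoding tower
W_i = rng.normal(size=(4, 4))  # item encoding tower
u = rng.normal(size=4)         # user feature vector
i = rng.normal(size=4)         # item feature vector

def tower_grads(W_u, W_i, u, i, stop_user_grad: bool):
    """Analytic gradients of the score w.r.t. each tower's weights."""
    z_u, z_i = W_u @ u, W_i @ i
    # d(score)/dW_i = outer(z_u, i); d(score)/dW_u = outer(z_i, u)
    g_item = np.outer(z_u, i)
    # Cutting off backpropagation for the user tower zeroes its update,
    # while the item tower still receives its normal gradient.
    g_user = np.zeros_like(W_u) if stop_user_grad else np.outer(z_i, u)
    return g_user, g_item

g_user, g_item = tower_grads(W_u, W_i, u, i, stop_user_grad=True)
```

In an autograd framework the same effect would typically be obtained by detaching the user tower's output before computing the score, so only the item tower's parameters are updated.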
1 code implementation • 14 Sep 2023 • Bang Wang, Zhenglin Wang, Wei Xiang, Yijun Mo
Implicit discourse relation recognition (IDRR) aims at recognizing the discourse relation between two text segments without an explicit connective.
1 code implementation • 29 Jul 2023 • Bin Liu, Qin Luo, Bang Wang
Learning contrastive representations from pairwise comparisons has achieved remarkable success in various fields, such as natural language processing, computer vision, and information retrieval.
no code implementations • 19 Jul 2023 • Wei Xiang, Chuanhong Zhan, Bang Wang
We use the probabilities of predicted events to evaluate the assumption rationality for the final event causality decision.
1 code implementation • 2 Jun 2023 • Bin Liu, Erjia Chen, Bang Wang
To achieve this win-win situation, we propose to intervene in model training through negative sampling, thereby modifying model predictions.
1 code implementation • 18 May 2023 • Wei Xiang, Chao Liang, Bang Wang
Although an auxiliary task is not used to directly output the final prediction, we argue that during joint training some of its learned features can be useful for boosting the main task.
1 code implementation • 11 Apr 2023 • Zizhuo Zhang, Bang Wang
Some recent news recommendation (NR) methods introduce a Pre-trained Language Model (PLM) to encode news representations by following the vanilla pre-train and fine-tune paradigm with carefully designed recommendation-specific neural networks and objective functions.
1 code implementation • 27 Jan 2023 • Bin Liu, Bang Wang, Tianrui Li
Recent years have witnessed many successful applications of contrastive learning in diverse domains, yet its self-supervised variant still poses many open challenges.
1 code implementation • 28 Oct 2022 • Lu Dai, Bang Wang, Wei Xiang, Yijun Mo
Recently, prompt-tuning has attracted growing interest in event argument extraction (EAE).
1 code implementation • 3 Apr 2022 • Lingyun Lu, Bang Wang, Zizhuo Zhang, Shenghao Liu, Han Xu
Recent studies regard items as entities of a knowledge graph and leverage graph neural networks to assist item encoding, yet by considering each relation type individually.
1 code implementation • 2 Apr 2022 • Bin Liu, Bang Wang
Although previous studies have proposed approaches to sample informative instances, little has been done to discriminate false negatives from true negatives for unbiased negative sampling.
no code implementations • 6 Mar 2022 • Wei Xiang, Bang Wang
As a discourse normally consists of multiple text segments, correctly understanding its theme requires taking into account the relations between those text segments.
no code implementations • 19 Feb 2022 • Zizhuo Zhang, Bang Wang
In this paper, we propose a strategy that first selects some informative item anchors and then encodes items' potential relations to those anchors.