Search Results for author: Huawen Feng

Found 6 papers, 1 paper with code

Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings

no code implementations · 23 Feb 2024 · Junlong Liu, Xichen Shang, Huawen Feng, Junhao Zheng, Qianli Ma

However, due to the token bias in pretrained language models, the models cannot capture the fine-grained semantics in sentences, which leads to poor predictions.

Tasks: Contrastive Learning · Sentence · +2

Balancing the Causal Effects in Class-Incremental Learning

no code implementations · 15 Feb 2024 · Junhao Zheng, Ruiyan Wang, Chongzhi Zhang, Huawen Feng, Qianli Ma

In this way, the model is encouraged to adapt to all classes with causal effects from both new and old data and thus alleviates the causal imbalance problem.

Tasks: Class-Incremental Learning · Continual Named Entity Recognition · +6

Beyond Anti-Forgetting: Multimodal Continual Instruction Tuning with Positive Forward Transfer

no code implementations · 17 Jan 2024 · Junhao Zheng, Qianli Ma, Zhen Liu, Binquan Wu, Huawen Feng

This discrepancy causes the model to learn information irrelevant to old and pre-trained tasks, which leads to catastrophic forgetting and negative forward transfer.

Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs

no code implementations · 30 Oct 2023 · Huawen Feng, Yan Fan, Xiong Liu, Ting-En Lin, Zekun Yao, Yuchuan Wu, Fei Huang, Yongbin Li, Qianli Ma

Despite the recent progress in text summarization made by large language models (LLMs), they often generate summaries that are factually inconsistent with the original articles, a problem known as "hallucination" in text generation.

Tasks: Text Generation · Text Summarization

Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference

1 code implementation · 19 Jun 2023 · Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Huawen Feng, Xichen Shang, Haibin Chen

Intriguingly, the unified objective can be seen as the sum of the vanilla fine-tuning objective, which learns new knowledge from target data, and the causal objective, which preserves old knowledge from PLMs.

Tasks: Attribute · Causal Inference

Perturbation-based Self-supervised Attention for Attention Bias in Text Classification

no code implementations · 25 May 2023 · Huawen Feng, Zhenxi Lin, Qianli Ma

In text classification, traditional attention mechanisms usually focus too much on frequent words and require extensive labeled data to learn.

Tasks: Sentence · text-classification · +1
