Search Results for author: Songlin Hu

Found 54 papers, 31 papers with code

Uncertainty-aware Propagation Structure Reconstruction for Fake News Detection

no code implementations COLING 2022 Lingwei Wei, Dou Hu, Wei Zhou, Songlin Hu

In this paper, we propose a novel dual graph-based model, Uncertainty-aware Propagation Structure Reconstruction (UPSR), for improving fake news detection.

Fake News Detection

Event Temporal Relation Extraction based on Retrieval-Augmented on LLMs

no code implementations22 Mar 2024 Xiaobin Zhang, Liangjun Zang, Qianwen Liu, Shuchong Wei, Songlin Hu

With the rise of prompt engineering, it is important to design effective prompt templates and verbalizers to extract relevant knowledge.

Event Relation Extraction Prompt Engineering +3

Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval

1 code implementation20 Jan 2024 Guangyuan Ma, Xing Wu, Zijia Lin, Songlin Hu

In this study, we aim to shed light on this issue by revealing that masked auto-encoder (MAE) pre-training with enhanced decoding significantly improves the term coverage of input tokens in dense representations, compared to vanilla BERT checkpoints.

Passage Retrieval Retrieval +1

Structured Probabilistic Coding

1 code implementation21 Dec 2023 Dou Hu, Lingwei Wei, Yaxin Liu, Wei Zhou, Songlin Hu

It can enhance the generalization ability of pre-trained language models for better language understanding.

Natural Language Understanding Representation Learning

Are Large Language Models Good Fact Checkers: A Preliminary Study

no code implementations29 Nov 2023 Han Cao, Lingwei Wei, Mengyang Chen, Wei Zhou, Songlin Hu

However, they encounter challenges in effectively handling Chinese fact verification and the entirety of the fact-checking pipeline due to language inconsistencies and hallucinations.

Fact Checking Fact Verification

MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models

1 code implementation30 Oct 2023 Zhenpeng Su, Xing Wu, Xue Bai, Zijia Lin, Hui Chen, Guiguang Ding, Wei Zhou, Songlin Hu

Experiments reveal that models incorporating the proposed MiLe Loss can gain consistent performance improvement on downstream benchmarks.

Language Modelling

CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability

1 code implementation22 Oct 2023 Minxuan Lv, Chengwei Dai, Kun Li, Wei Zhou, Songlin Hu

Neural network models are vulnerable to adversarial examples, and adversarial transferability further increases the risk of adversarial attacks.

Adversarial Attack

HC3 Plus: A Semantic-Invariant Human ChatGPT Comparison Corpus

1 code implementation6 Sep 2023 Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, Songlin Hu

ChatGPT has gained significant interest due to its impressive performance, but people are increasingly concerned about its potential risks, particularly around the detection of AI-generated content (AIGC), which is often difficult for untrained humans to identify.

Question Answering

Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval

no code implementations16 Aug 2023 Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu

Concretely, we leverage the capabilities of LLMs for document expansion, i.e., query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval.

Contrastive Learning Language Modelling +3
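
The excerpt above describes expanding each passage with LLM-generated pseudo-queries before retriever pre-training. Below is a minimal, illustrative sketch of that general idea (generate queries for a passage, then pair them with it as pre-training data); the checkpoint, prompt, and pairing scheme are placeholder assumptions, not the authors' pipeline.

```python
# Hedged sketch of LLM-based document expansion (query generation) for
# retriever pre-training. Model name, prompt, and pairing are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder LLM

def expand_with_queries(passage: str, n_queries: int = 3) -> list[str]:
    """Generate pseudo-queries for a passage (document expansion)."""
    prompt = f"Passage: {passage}\nWrite a question this passage answers:\n"
    outputs = generator(
        prompt,
        max_new_tokens=24,
        num_return_sequences=n_queries,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Strip the prompt; keep only the generated continuation as the query.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

passage = "The Amazon rainforest produces about 20 percent of the world's oxygen."
pairs = [(q, passage) for q in expand_with_queries(passage)]
# These (pseudo-query, passage) pairs can then feed a contrastive
# pre-training objective for a dense retriever.
print(pairs)
```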

Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems

1 code implementation7 Jun 2023 Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, Songlin Hu

Dialogue response selection aims to select an appropriate response from several candidates based on a given user and system utterance history.

Language Modelling Masked Language Modeling +1

Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations

1 code implementation2 Jun 2023 Dou Hu, Yinan Bao, Lingwei Wei, Wei Zhou, Songlin Hu

To address this, we propose a supervised adversarial contrastive learning (SACL) framework for learning class-spread structured representations in a supervised manner.

Contrastive Learning Emotion Recognition in Conversation

PUNR: Pre-training with User Behavior Modeling for News Recommendation

1 code implementation25 Apr 2023 Guangyuan Ma, Hongtao Liu, Xing Wu, Wanhui Qian, Zhepeng Lv, Qing Yang, Songlin Hu

Firstly, we introduce the user behavior masking pre-training task to recover the masked user behaviors based on their contextual behaviors.

News Recommendation Unsupervised Pre-training
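
The sentence above describes a pre-training task that masks user behaviors and recovers them from the surrounding (contextual) behaviors. The following is a minimal sketch of such a masked-behavior objective under generic assumptions (a small Transformer encoder over news-ID sequences, a reserved mask id, ~15% masking); it is not the PUNR implementation.

```python
# Sketch of a "mask user behaviors, recover them from context" objective.
# Architecture, sizes, and the MASK-id convention are illustrative assumptions.
import torch
import torch.nn as nn

NUM_NEWS, DIM, MASK_ID = 1000, 64, 0   # id 0 reserved as [MASK]

embed = nn.Embedding(NUM_NEWS, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), num_layers=2
)
head = nn.Linear(DIM, NUM_NEWS)        # predict the original news id

behaviors = torch.randint(1, NUM_NEWS, (8, 20))   # batch of behavior sequences
mask = torch.rand(behaviors.shape) < 0.15         # mask ~15% of behaviors
inputs = behaviors.masked_fill(mask, MASK_ID)

logits = head(encoder(embed(inputs)))             # recover from contextual behaviors
loss = nn.functional.cross_entropy(logits[mask], behaviors[mask])
loss.backward()
print(float(loss))
```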

CoT-MoTE: Exploring ConTextual Masked Auto-Encoder Pre-training with Mixture-of-Textual-Experts for Passage Retrieval

no code implementations20 Apr 2023 Guangyuan Ma, Xing Wu, Peng Wang, Songlin Hu

Siamese or fully separated dual-encoders are often adopted as basic retrieval architecture in the pre-training and fine-tuning stages for encoding queries and passages into their latent embedding spaces.

Passage Retrieval Retrieval

CoT-MAE v2: Contextual Masked Auto-Encoder with Multi-view Modeling for Passage Retrieval

no code implementations5 Apr 2023 Xing Wu, Guangyuan Ma, Peng Wang, Meng Lin, Zijia Lin, Fuzheng Zhang, Songlin Hu

As an effective representation bottleneck pretraining technique, the contextual masked auto-encoder utilizes contextual embedding to assist in the reconstruction of passages.

Passage Retrieval Retrieval +1

Query-as-context Pre-training for Dense Passage Retrieval

2 code implementations19 Dec 2022 Xing Wu, Guangyuan Ma, Wanhui Qian, Zijia Lin, Songlin Hu

Recently, methods have been developed to improve the performance of dense passage retrieval by using context-supervised pre-training.

Contrastive Learning Passage Retrieval +1

RaP: Redundancy-aware Video-language Pre-training for Text-Video Retrieval

1 code implementation13 Oct 2022 Xing Wu, Chaochen Gao, Zijia Lin, Zhongyuan Wang, Jizhong Han, Songlin Hu

Sparse sampling is also likely to miss important frames corresponding to some text portions, resulting in textual redundancy.

Contrastive Learning Retrieval +1

InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings

2 code implementations8 Oct 2022 Xing Wu, Chaochen Gao, Zijia Lin, Jizhong Han, Zhongyuan Wang, Songlin Hu

Contrastive learning has been extensively studied in sentence embedding learning, which assumes that the embeddings of different views of the same sentence are closer.

Contrastive Learning Language Modelling +5

ConTextual Masked Auto-Encoder for Dense Passage Retrieval

2 code implementations16 Aug 2022 Xing Wu, Guangyuan Ma, Meng Lin, Zijia Lin, Zhongyuan Wang, Songlin Hu

Dense passage retrieval aims to retrieve the relevant passages of a query from a large corpus based on dense representations (i.e., vectors) of the query and the passages.

Passage Retrieval Retrieval +1
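
The excerpt above defines dense passage retrieval: queries and passages are mapped to vectors and ranked by similarity. A minimal sketch of that setup follows, using a generic BERT checkpoint and mean pooling as placeholder choices rather than the CoT-MAE encoder.

```python
# Hedged sketch of dense retrieval scoring: encode query and passages into
# vectors, then rank passages by dot-product relevance.
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"               # placeholder encoder checkpoint
tok, enc = AutoTokenizer.from_pretrained(name), AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state          # (B, L, H)
    m = batch["attention_mask"].unsqueeze(-1)            # zero out padding
    return (hidden * m).sum(1) / m.sum(1)                # mean-pooled vectors

query = embed(["what causes rain"])
passages = embed([
    "Rain forms when water vapour condenses into droplets.",
    "The stock market closed higher on Friday.",
])
scores = query @ passages.T                              # dot-product relevance
print(scores.argsort(descending=True))
```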

Speaker-Guided Encoder-Decoder Framework for Emotion Recognition in Conversation

no code implementations7 Jun 2022 Yinan Bao, Qianwen Ma, Lingwei Wei, Wei Zhou, Songlin Hu

Since the dependencies between speakers are complex and dynamic, consisting of intra- and inter-speaker dependencies, modeling speaker-specific information plays a vital role in ERC.

Emotion Recognition in Conversation

Text Smoothing: Enhance Various Data Augmentation Methods on Text Classification Tasks

1 code implementation ACL 2022 Xing Wu, Chaochen Gao, Meng Lin, Liangjun Zang, Zhongyuan Wang, Songlin Hu

Before entering the neural network, a token is generally converted to the corresponding one-hot representation, which is a discrete distribution of the vocabulary.

Data Augmentation Language Modelling +3
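
The excerpt above notes that tokens are normally fed to the network as one-hot distributions over the vocabulary. Text smoothing replaces (or mixes) this discrete view with a softer distribution over the vocabulary; the sketch below contrasts the two using a masked language model's softmax, with the checkpoint and mixing weight as illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch: one-hot token representations vs. a "smoothed" distribution
# over the vocabulary produced by a masked language model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

sentence = "the movie was great"
ids = tok(sentence, return_tensors="pt")["input_ids"]          # (1, L)

# Discrete view: each token is a one-hot distribution over the vocabulary.
one_hot = torch.nn.functional.one_hot(ids, num_classes=tok.vocab_size).float()

# Smoothed view: the MLM's softmax over the vocabulary at each position.
with torch.no_grad():
    smoothed = mlm(input_ids=ids).logits.softmax(dim=-1)        # (1, L, V)

lam = 0.5                                                        # assumed mixing weight
mixed = lam * one_hot + (1 - lam) * smoothed                     # smoothed representation
print(one_hot.shape, smoothed.shape, mixed.shape)
```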

DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings

1 code implementation10 Dec 2021 Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu

To tackle that, we propose an effective knowledge distillation framework for contrastive sentence embeddings, termed DistilCSE.

Contrastive Learning Knowledge Distillation +5

ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding

2 code implementations COLING 2022 Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, Songlin Hu

Unsup-SimCSE takes dropout as a minimal data augmentation method, and passes the same input sentence to a pre-trained Transformer encoder (with dropout turned on) twice to obtain the two corresponding embeddings to build a positive pair.

Contrastive Learning Data Augmentation +5
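
The excerpt above explains Unsup-SimCSE's positive-pair construction: the same sentence is encoded twice with dropout active, and the two slightly different embeddings form a positive pair. A minimal sketch of that construction with an InfoNCE objective over in-batch negatives follows; the encoder, pooling, and temperature are generic choices, not ESimCSE's enhanced sample-building method itself.

```python
# Hedged sketch of dropout-based positive pairs (Unsup-SimCSE style) with an
# InfoNCE loss over in-batch negatives.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")
enc.train()                                   # keep dropout ON for both passes

sentences = ["a man is playing guitar", "the weather is nice today"]
batch = tok(sentences, padding=True, return_tensors="pt")

def encode(batch):
    # [CLS] pooling; dropout makes each forward pass slightly different.
    return enc(**batch).last_hidden_state[:, 0]

z1, z2 = encode(batch), encode(batch)         # two views of the same sentences
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
labels = torch.arange(len(sentences))         # positives lie on the diagonal
loss = F.cross_entropy(sim, labels)
loss.backward()
print(float(loss))
```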

Towards Propagation Uncertainty: Edge-enhanced Bayesian Graph Convolutional Networks for Rumor Detection

1 code implementation ACL 2021 Lingwei Wei, Dou Hu, Wei Zhou, Zhaojuan Yue, Songlin Hu

Detecting rumors on social media is a critical task with significant implications for the economy, public health, etc.

PEN4Rec: Preference Evolution Networks for Session-based Recommendation

1 code implementation17 Jun 2021 Dou Hu, Lingwei Wei, Wei Zhou, Xiaoyong Huai, Zhiqi Fang, Songlin Hu

The process can strengthen the effect of relevant sequential behaviors during the preference evolution and weaken the disturbance from preference drifting.

Retrieval Session-Based Recommendations

SRLF: A Stance-aware Reinforcement Learning Framework for Content-based Rumor Detection on Social Media

no code implementations10 May 2021 Chunyuan Yuan, Wanhui Qian, Qianwen Ma, Wei Zhou, Songlin Hu

The rapid development of social media changes the lifestyle of people and simultaneously provides an ideal place for publishing and disseminating rumors, which severely exacerbates social panic and triggers a crisis of social trust.

Embedding API Dependency Graph for Neural Code Generation

1 code implementation29 Mar 2021 Chen Lyu, Ruyun Wang, Hongyu Zhang, Hanwen Zhang, Songlin Hu

In recent years, many deep learning-based approaches have been proposed that can generate a sequence of code from a textual program description.

Code Generation Graph Embedding

Integrating External Event Knowledge for Script Learning

no code implementations COLING 2020 Shangwen Lv, Fuqing Zhu, Songlin Hu

In the knowledge retrieval stage, we select relevant external event knowledge from ASER.

Retrieval

Hierarchical Interaction Networks with Rethinking Mechanism for Document-level Sentiment Analysis

1 code implementation16 Jul 2020 Lingwei Wei, Dou Hu, Wei Zhou, Xuehai Tang, Xiaodan Zhang, Xin Wang, Jizhong Han, Songlin Hu

Furthermore, we design a Sentiment-based Rethinking mechanism (SR) by refining the HIN with sentiment label information to learn a more sentiment-aware document representation.

Sentiment Analysis Sentiment Classification +1

DyHGCN: A Dynamic Heterogeneous Graph Convolutional Network to Learn Users' Dynamic Preferences for Information Diffusion Prediction

no code implementations9 Jun 2020 Chunyuan Yuan, Jiacheng Li, Wei Zhou, Yijun Lu, Xiaodan Zhang, Songlin Hu

For one thing, previous works cannot jointly utilize both the social network and the diffusion graph for prediction, which is insufficient to model the complexity of the diffusion process and results in unsatisfactory prediction performance.

Misinformation

AutoSUM: Automating Feature Extraction and Multi-user Preference Simulation for Entity Summarization

1 code implementation25 May 2020 Dongjun Wei, Yaxin Liu, Fuqing Zhu, Liangjun Zang, Wei Zhou, Yijun Lu, Songlin Hu

In this paper, a novel integration method called AutoSUM is proposed for automatic feature extraction and multi-user preference simulation to overcome the drawbacks of previous methods.

feature selection Word Embeddings

Pre-training Text Representations as Meta Learning

no code implementations12 Apr 2020 Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu

In this work, we introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.

Language Modelling Meta-Learning +2

Data Augmentation for Copy-Mechanism in Dialogue State Tracking

no code implementations22 Feb 2020 Xiaohui Song, Liangjun Zang, Yipeng Su, Xing Wu, Jizhong Han, Songlin Hu

While several state-of-the-art approaches to dialogue state tracking (DST) have shown promising performances on several benchmarks, there is still a significant performance gap between seen slot values (i.e., values that occur in both the training set and the test set) and unseen ones (values that occur in the test set but not in the training set).

Data Augmentation Dialogue State Tracking

Beyond Statistical Relations: Integrating Knowledge Relations into Style Correlations for Multi-Label Music Style Classification

1 code implementation9 Nov 2019 Qianwen Ma, Chunyuan Yuan, Wei Zhou, Jizhong Han, Songlin Hu

Based on the two types of relations, we use a graph convolutional network to learn the deep correlations between styles automatically.

General Classification

Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots

1 code implementation IJCNLP 2019 Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, Songlin Hu

Existing works mainly focus on matching candidate responses with every context utterance on multiple levels of granularity, which ignore the side effect of using excessive context information.

Conversational Response Selection Retrieval

Learning review representations from user and product level information for spam detection

no code implementations10 Sep 2019 Chunyuan Yuan, Wei Zhou, Qianwen Ma, Shangwen Lv, Jizhong Han, Songlin Hu

Then, we use orthogonal decomposition and fusion attention to learn user, review, and product representations from the review information.

Spam detection

Jointly embedding the local and global relations of heterogeneous graph for rumor detection

1 code implementation10 Sep 2019 Chunyuan Yuan, Qianwen Ma, Wei Zhou, Jizhong Han, Songlin Hu

The development of social media has revolutionized the way people communicate, share information and make decisions, but it also provides an ideal platform for publishing and spreading rumors.

TransSent: Towards Generation of Structured Sentences with Discourse Marker

no code implementations5 Sep 2019 Xing Wu, Dongjun Wei, Liangjun Zang, Jizhong Han, Songlin Hu

Automatic and human evaluation results show that TransSent can generate structured sentences with high quality, and has certain scalability in different tasks.

Dialogue Generation Sentence

ESA: Entity Summarization with Attention

2 code implementations25 May 2019 Dongjun Wei, Yaxin Liu, Fuqing Zhu, Liangjun Zang, Wei Zhou, Jizhong Han, Songlin Hu

Entity summarization aims at creating brief but informative descriptions of entities from knowledge graphs.

Clustering Knowledge Graphs

Imbalanced Sentiment Classification Enhanced with Discourse Marker

no code implementations28 Mar 2019 Tao Zhang, Xing Wu, Meng Lin, Jizhong Han, Songlin Hu

Imbalanced data commonly exists in the real world, especially in sentiment-related corpora, making it difficult to train a classifier to distinguish latent sentiment in text data.

Classification Data Augmentation +3

AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Neural Networks

no code implementations21 Jan 2019 Jinrong Guo, Wantao Liu, Wang Wang, Qu Lu, Songlin Hu, Jizhong Han, Ruixuan Li

Typically, an ultra-deep neural network (UDNN) tends to yield a high-quality model, but its training process is usually resource-intensive and time-consuming.

Conditional BERT Contextual Augmentation

5 code implementations17 Dec 2018 Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu

BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and backward model.

Data Augmentation Language Modelling +1
