Search Results for author: Yun Luo

Found 20 papers, 13 papers with code

XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners

1 code implementation • 9 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang

Active learning (AL), which aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, has been widely used in low-resource tasks.

Active Learning text-classification +1
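The pool-based loop the abstract describes — iteratively selecting the most informative unlabeled examples for annotation — can be sketched with generic uncertainty sampling. This is a minimal illustration only: the toy nearest-exemplar probability model and entropy criterion below are stand-ins, not XAL's explanation-based selection score.

```python
# Generic pool-based active learning with uncertainty (entropy) sampling.
# The "classifier" is a toy similarity model over labeled points; XAL itself
# scores candidates with explanations, which is not reproduced here.
import math

def predict_proba(x, labeled):
    # softmax over negative distance to the nearest labeled point of each class
    best = {}
    for feats, y in labeled:
        d = math.dist(x, feats)
        best[y] = min(best.get(y, d), d)
    z = {y: math.exp(-d) for y, d in best.items()}
    s = sum(z.values())
    return {y: v / s for y, v in z.items()}

def entropy(p):
    return -sum(v * math.log(v + 1e-12) for v in p.values())

def active_learning(pool, labeled, rounds=3, batch=2):
    pool = list(pool)
    for _ in range(rounds):
        # rank every unlabeled point by predictive entropy (most uncertain first)
        scored = sorted(pool,
                        key=lambda ex: entropy(predict_proba(ex[0], labeled)),
                        reverse=True)
        queried, pool = scored[:batch], scored[batch:]
        labeled.extend(queried)  # "annotate" the queried points
    return labeled
```

Each round the labeled set grows by `batch` examples chosen where the current model is least confident, which is the core mechanism AL methods build on.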

Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information

1 code implementation • 8 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie Zhou, Yue Zhang

However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences.

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

1 code implementation • 17 Aug 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang

Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge.

Reading Comprehension
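Catastrophic forgetting as defined above is typically quantified as the drop in performance on earlier tasks after training on later ones. A minimal sketch of such a metric, assuming an accuracy matrix where `acc[t][k]` is accuracy on task `k` measured after finishing training task `t` (the exact metric used in the paper may differ):

```python
# Forgetting metric: for each earlier task, the drop from its best accuracy
# observed before the final task to its accuracy after the final task,
# averaged over all earlier tasks.
def forgetting(acc):
    T = len(acc)
    drops = []
    for k in range(T - 1):
        best_earlier = max(acc[t][k] for t in range(k, T - 1))
        drops.append(best_earlier - acc[T - 1][k])
    return sum(drops) / len(drops)
```

A positive score means earlier-task performance degraded while new tasks were learned; zero or negative means no forgetting (or backward transfer).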

Efficient Prediction of Peptide Self-assembly through Sequential and Graphical Encoding

1 code implementation • 17 Jul 2023 • Zihan Liu, Jiaqi Wang, Yun Luo, Shuang Zhao, Wenbin Li, Stan Z. Li

In recent years, there has been an explosion of research on the application of deep learning to the prediction of various peptide properties, due to the significant development and market potential of peptides.

Benchmarking

Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

no code implementations • 20 May 2023 • Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang

It is seldom considered to adapt the decision boundary for new representations. In this paper, we propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved as the classification criterion.

Classification Continual Learning +1
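The idea of using saved samples as the classification criterion — rather than a fixed linear head — can be sketched as nearest-exemplar classification in embedding space. This is a hedged illustration of the general mechanism, not SCCL's exact procedure:

```python
# Exemplar-based classification: predictions come from similarity to a small
# memory of saved embeddings, so the decision boundary adapts automatically
# as the representation changes across tasks.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return num / (na * nb + 1e-12)

def classify(embedding, memory):
    # memory: list of (saved_embedding, label); predict the label whose saved
    # exemplars are on average most similar to the query embedding
    by_label = {}
    for emb, y in memory:
        by_label.setdefault(y, []).append(cosine(embedding, emb))
    return max(by_label, key=lambda y: sum(by_label[y]) / len(by_label[y]))
```

Because no classifier weights are tied to old feature geometry, re-embedding the few saved exemplars with the current encoder keeps the criterion aligned with the latest representations.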

Investigating Forgetting in Pre-Trained Representations Through Continual Learning

no code implementations • 10 May 2023 • Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie Zhou, Yue Zhang

Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.

Continual Learning General Knowledge

Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias

1 code implementation • 29 Mar 2023 • Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li

It has become cognitive inertia to employ the cross-entropy loss function in classification-related tasks.

Improving (Dis)agreement Detection with Inductive Social Relation Information From Comment-Reply Interactions

1 code implementation • 8 Feb 2023 • Yun Luo, Zihan Liu, Stan Z. Li, Yue Zhang

(Dis)agreement detection aims to identify the authors' attitudes or positions (agree, disagree, neutral) towards a specific text.

Knowledge Graph Embedding Language Modelling +1

What Does the Gradient Tell When Attacking the Graph Structure

no code implementations • 26 Aug 2022 • Zihan Liu, Ge Wang, Yun Luo, Stan Z. Li

To address this issue, we propose a novel surrogate model with multi-level propagation that preserves the node dissimilarity information.

Mere Contrastive Learning for Cross-Domain Sentiment Analysis

1 code implementation • COLING 2022 • Yun Luo, Fang Guo, Zihan Liu, Yue Zhang

Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data.

Contrastive Learning Sentence +1
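The contrastive objective underlying work like this pulls same-label examples together in embedding space and pushes different labels apart, which helps representations transfer across domains. A minimal supervised contrastive loss sketch (generic form, not this paper's exact formulation or hyperparameters):

```python
# Supervised contrastive loss over a batch of embeddings: for each anchor,
# every same-label example is a positive, and all other examples form the
# normalization term. Lower loss means same-label points are closer.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return num / (na * nb + 1e-12)

def sup_con_loss(embeddings, labels, temperature=0.1):
    n = len(embeddings)
    loss, count = 0.0, 0
    for i in range(n):
        denom = sum(math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                    for j in range(n) if j != i)
        for j in range(n):
            if j != i and labels[j] == labels[i]:
                sim = cosine(embeddings[i], embeddings[j]) / temperature
                loss += -math.log(math.exp(sim) / denom)
                count += 1
    return loss / max(count, 1)
```

A batch whose labels match its embedding clusters should score lower than the same batch with shuffled labels, which is the signal the training loop minimizes.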

Are Gradients on Graph Structure Reliable in Gray-box Attacks?

1 code implementation • 7 Aug 2022 • Zihan Liu, Yun Luo, Lirong Wu, Siyuan Li, Zicheng Liu, Stan Z. Li

These errors arise from rough gradient usage due to the discreteness of the graph structure and from the unreliability in the meta-gradient on the graph structure.

Computational Efficiency

Challenges for Open-domain Targeted Sentiment Analysis

no code implementations • 14 Apr 2022 • Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang

Since previous studies on open-domain targeted sentiment analysis are limited in dataset domain variety and restricted to the sentence level, we propose a novel dataset of 6,013 human-labeled instances that extends the data domains in topics of interest and to the document level.

Sentence Sentiment Analysis

Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks

no code implementations • 20 Oct 2021 • Zihan Liu, Yun Luo, Zelin Zang, Stan Z. Li

Gray-box graph attacks aim at disrupting the performance of the victim model by using inconspicuous attacks with limited knowledge of the victim model.

Node Classification Representation Learning

PDAML: A Pseudo Domain Adaptation Paradigm for Subject-independent EEG-based Emotion Recognition

no code implementations • 29 Sep 2021 • Yun Luo, Gengchen Wei, Bao-Liang Lu

Usually, the DA methods give more promising results than the DG methods but require additional computation resources each time a new subject arrives.

Domain Generalization EEG +2

Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep Generative Models

no code implementations • 4 Jun 2020 • Yun Luo, Li-Zhen Zhu, Zi-Yu Wan, Bao-Liang Lu

Then, we augment the original training datasets with varying numbers of generated realistic-like EEG samples.

Data Augmentation EEG +2

Real-World Image Datasets for Federated Learning

2 code implementations • 14 Oct 2019 • Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yun-Feng Huang, Yang Liu, Qiang Yang

Federated learning is a new machine learning paradigm which allows data parties to build machine learning models collaboratively while keeping their data secure and private.

BIG-bench Machine Learning Federated Learning +1
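The collaborative-but-private paradigm described above is most commonly realized with federated averaging: clients train locally and send only model parameters, never raw data, to a server that aggregates them. A minimal FedAvg-style aggregation sketch (the standard algorithm, not something specific to this datasets paper):

```python
# FedAvg-style server aggregation: average the clients' parameter vectors,
# weighted by each client's local dataset size. Raw data never leaves
# the clients; only parameters are exchanged.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Datasets like the ones this paper contributes matter because real client data is non-IID: size-weighted averaging behaves very differently when clients hold skewed, real-world distributions rather than artificial uniform splits.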
