Search Results for author: Wanyun Cui

Found 17 papers, 5 papers with code

Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models

no code implementations • 3 Apr 2024 • Wanyun Cui, Qianle Wang

We find that a small subset of "cherry" parameters exhibit a disproportionately large influence on model performance, while the vast majority of parameters have minimal impact.

Quantization
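The snippet above suggests a mixed-precision scheme: score each parameter's impact, keep the top-scoring "cherry" parameters at full precision, and quantize the rest. The sketch below is only an illustration of that idea under simple assumptions (uniform quantization, weight magnitude as a stand-in impact score); the paper's actual scoring and quantization procedure is not shown here.

```python
import numpy as np

def mixed_precision_quantize(weights, impact, cherry_frac=0.01, bits=4):
    """Quantize all but the highest-impact ("cherry") parameters.

    `impact` is a per-parameter sensitivity score supplied by the caller;
    magnitude is used as a crude proxy in the demo below.
    """
    w = weights.ravel().astype(np.float64)
    k = max(1, int(len(w) * cherry_frac))
    cherry_idx = np.argpartition(impact.ravel(), -k)[-k:]  # top-k by impact

    # Uniform quantization of every parameter onto a (2**bits - 1)-level grid.
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w - lo) / scale) * scale + lo

    q[cherry_idx] = w[cherry_idx]  # cherry parameters kept at full precision
    return q.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
impact = np.abs(w)  # hypothetical impact score, not the paper's metric
wq = mixed_precision_quantize(w, impact)
```

The design point the abstract hints at: because only a tiny fraction of parameters is protected, the storage overhead of keeping them in full precision is negligible.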

Who Said That? Benchmarking Social Media AI Detection

no code implementations • 12 Oct 2023 • Wanyun Cui, Linqiu Zhang, Qianle Wang, Shuyang Cai

Addressing these challenges, this paper introduces SAID (Social media AI Detection), a novel benchmark developed to assess AI-text detection models' capabilities in real social media platforms.

Benchmarking Misinformation +1

Ada-Instruct: Adapting Instruction Generators for Complex Reasoning

1 code implementation • 6 Oct 2023 • Wanyun Cui, Qianle Wang

Generating diverse and sophisticated instructions for downstream tasks by Large Language Models (LLMs) is pivotal for advancing their effectiveness.

Code Completion Mathematical Reasoning

Evade ChatGPT Detectors via A Single Space

no code implementations • 5 Jul 2023 • Shuyang Cai, Wanyun Cui

Existing detectors are built upon the assumption that there are distributional gaps between human-generated and AI-generated text.

Language Modelling

Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction

no code implementations • 24 May 2023 • Wanyun Cui, Xingran Chen

Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving their generalization ability and data efficiency.

Computational Efficiency Relation +1

Free Lunch for Efficient Textual Commonsense Integration in Language Models

no code implementations • 24 May 2023 • Wanyun Cui, Xingran Chen

One key observation is that the upper bound of batch partitioning can be reduced to the classic graph k-cut problem.
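For readers unfamiliar with the objective this abstract reduces to: a graph k-cut splits the nodes into k non-empty groups so as to minimize the total weight of edges crossing between groups. The toy brute-force solver below only illustrates that objective on a four-node cycle; it is not the paper's batch-partitioning algorithm, which would use a scalable approximation rather than enumeration.

```python
import itertools

def min_k_cut_brute(edges, nodes, k):
    """Exact minimum k-cut on a tiny graph by exhaustive labeling.

    edges: list of (u, v, weight); nodes: node names; k: number of parts.
    Returns (minimum cut weight, node -> part assignment).
    """
    best_cut, best_parts = float("inf"), None
    for labels in itertools.product(range(k), repeat=len(nodes)):
        if len(set(labels)) < k:  # every part must be non-empty
            continue
        assign = dict(zip(nodes, labels))
        cut = sum(w for u, v, w in edges if assign[u] != assign[v])
        if cut < best_cut:
            best_cut, best_parts = cut, assign
    return best_cut, best_parts

# A 4-cycle with two heavy edges (a-b, c-d) and two light ones.
edges = [("a", "b", 3), ("b", "c", 1), ("c", "d", 3), ("d", "a", 1)]
cut, parts = min_k_cut_brute(edges, ["a", "b", "c", "d"], k=2)
# Optimal 2-cut keeps the heavy pairs together: {a, b} vs {c, d}, cut weight 2.
```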

Instance-based Learning for Knowledge Base Completion

1 code implementation • 13 Nov 2022 • Wanyun Cui, Xingran Chen

In this paper, we propose a new method for knowledge base completion (KBC): instance-based learning (IBL).

Knowledge Base Completion

Open Rule Induction

2 code implementations • NeurIPS 2021 • Wanyun Cui, Xingran Chen

One weakness of the previous rule induction systems is that they only find rules within a knowledge base (KB) and therefore cannot generalize to more open and complex real-world rules.

Language Modelling Relation Extraction

Isotonic Data Augmentation for Knowledge Distillation

no code implementations • 3 Jul 2021 • Wanyun Cui, Sen Yan

However, we found critical order violations between hard labels and soft labels in augmented samples.

Attribute Data Augmentation +2
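An "order violation" in the sense above arises when an augmented sample's mixed hard label ranks two classes one way (e.g. 70% class i, 30% class j) while the teacher's soft label ranks them the other way. The sketch below detects such a violation and repairs it with a simple isotonic-style projection (averaging the violating pair); this is an assumed illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def fix_order_violations(soft, hard_mix):
    """Make teacher soft labels respect the order implied by mixed hard labels.

    soft: teacher probability vector for the augmented sample.
    hard_mix: mixup-style hard-label vector (e.g. [0.7, 0.3, 0.0]).
    """
    soft = soft.copy()
    # The two classes involved in the mix, ordered by hard-label weight.
    i, j = np.argsort(-hard_mix)[:2]  # hard_mix[i] >= hard_mix[j]
    if soft[i] < soft[j]:  # violation: teacher ranks the pair the other way
        soft[i] = soft[j] = (soft[i] + soft[j]) / 2  # isotonic-style repair
    return soft

hard = np.array([0.7, 0.3, 0.0])
soft = np.array([0.2, 0.5, 0.3])  # teacher disagrees with the hard-label order
fixed = fix_order_violations(soft, hard)
# The repaired vector satisfies fixed[0] >= fixed[1] and still sums to 1.
```

Averaging the two entries is the one-dimensional pool-adjacent-violators step, which preserves the probability mass while removing the rank inversion.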

Attention over Phrases

no code implementations • 25 Sep 2019 • Wanyun Cui

Besides representing the words of the sentence, we introduce hypernodes to represent the candidate phrases in attention.

Inductive Bias Sentence

Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement

no code implementations • 22 Aug 2019 • Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides

To distill diverse knowledge from different trained (teacher) models, we propose an adversarial-based learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features.

Knowledge Distillation Missing Labels

KBQA: Learning Question Answering over QA Corpora and Knowledge Bases

no code implementations • 6 Mar 2019 • Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang

Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.

Question Answering

Transfer Learning for Sequences via Learning to Collocate

no code implementations • ICLR 2019 • Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, Wei Wang

Transfer learning aims to address data sparsity in a target domain by leveraging information from a source domain.

NER POS +5

Verb Pattern: A Probabilistic Semantic Representation on Verbs

no code implementations • 20 Oct 2017 • Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang

In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single semantic of the verb.

Specificity
