Search Results for author: Yibing Liu

Found 10 papers, 7 papers with code

Gradient-Congruity Guided Federated Sparse Training

no code implementations • 2 May 2024 • Chris Xing Tian, Yibing Liu, Haoliang Li, Ray C. C. Cheung, Shiqi Wang

However, FL also faces challenges such as high computational and communication costs on resource-constrained devices, and poor generalization performance due to the heterogeneity of data across edge clients and the presence of out-of-distribution data.
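As a hedged illustration of the "gradient congruity" idea named in the title, the sketch below measures agreement between two clients' gradients via cosine similarity and per-entry sign agreement. This is only a stand-in for the underlying concept, not the paper's actual algorithm; all names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_congruity(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    # Cosine similarity between the two flattened gradients:
    # 1 = fully congruent, -1 = opposing update directions.
    return F.cosine_similarity(grad_a.flatten(), grad_b.flatten(), dim=0)

def congruent_mask(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    # Boolean mask of parameter entries whose gradient signs agree across
    # the two clients; such entries could plausibly be kept in a sparse update.
    return torch.sign(grad_a) == torch.sign(grad_b)
```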

Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization

1 code implementation • 5 Jun 2023 • Yibing Liu, Chris Xing Tian, Haoliang Li, Lei Ma, Shiqi Wang

The out-of-distribution (OOD) problem generally arises when neural networks encounter data that significantly deviates from the training data distribution, i.e., the in-distribution (InD).
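To make the OOD detection setup concrete, here is a minimal sketch using the classic maximum-softmax-probability (MSP) score as a stand-in. The paper's own neuron activation coverage (NAC) criterion is different; the threshold below is an arbitrary assumption.

```python
import torch

def msp_ood_score(logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability: higher score = more in-distribution (InD),
    # lower score = more likely OOD.
    return torch.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(4, 10)           # a batch of 4 samples, 10 classes
is_ood = msp_ood_score(logits) < 0.5  # 0.5 is a hypothetical, tunable threshold
```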

Out-of-Distribution Detection

M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System

1 code implementation • 30 Jan 2023 • Chenqi Kong, Kexin Zheng, Yibing Liu, Shiqi Wang, Anderson Rocha, Haoliang Li

Face presentation attacks (FPA), also known as face spoofing, have raised increasing public concern through various malicious applications, such as financial fraud and privacy leakage.

Face Anti-Spoofing • Face Recognition

Generalization Beyond Feature Alignment: Concept Activation-Guided Contrastive Learning

no code implementations • 13 Nov 2022 • Yibing Liu, Chris Xing Tian, Haoliang Li, Shiqi Wang

Specifically, by treating feature elements as neuron activation states, we show that conventional alignment methods tend to degrade the diversity of learned invariant features, as they indiscriminately minimize all neuron activation differences.
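As a minimal sketch of the behavior being criticized, the toy loss below indiscriminately minimizes every neuron activation difference between two domains' features. It illustrates conventional feature alignment, not the paper's concept activation-guided contrastive method.

```python
import torch

def naive_alignment_loss(feat_src: torch.Tensor, feat_tgt: torch.Tensor) -> torch.Tensor:
    # Mean squared difference over every neuron activation, with no
    # selectivity: exactly the indiscriminate minimization described above,
    # which can collapse the diversity of the learned features.
    return ((feat_src - feat_tgt) ** 2).mean()
```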

Contrastive Learning • Domain Generalization

A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA

1 code implementation • 30 Jun 2022 • Yangyang Guo, Liqiang Nie, Yongkang Wong, Yibing Liu, Zhiyong Cheng, Mohan Kankanhalli

On the other hand, multi-modal implicit knowledge for knowledge-based VQA remains largely unexplored.

Question Answering • Retrieval • +1

Answer Questions with Right Image Regions: A Visual Attention Regularization Approach

1 code implementation • 3 Feb 2021 • Yibing Liu, Yangyang Guo, Jianhua Yin, Xuemeng Song, Weifeng Liu, Liqiang Nie

However, recent studies have pointed out that the image regions highlighted by visual attention are often irrelevant to the given question and answer, undermining correct visual reasoning.
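A hypothetical sketch of attention regularization in this spirit: penalize the divergence between the model's attention over image regions and a reference distribution (e.g., derived from human-annotated relevant regions). The KL form and all names here are assumptions; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def attention_regularizer(model_attn_logits: torch.Tensor,
                          ref_attn: torch.Tensor) -> torch.Tensor:
    # model_attn_logits: (batch, num_regions) unnormalized attention scores
    # ref_attn:          (batch, num_regions) reference weights summing to 1
    # KL(ref || model) pushes the model's attention toward the reference regions.
    log_model = F.log_softmax(model_attn_logits, dim=-1)
    return F.kl_div(log_model, ref_attn, reduction="batchmean")
```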

Question Answering • Visual Grounding • +2

Distilling Knowledge from Pre-trained Language Models via Text Smoothing

no code implementations • 8 May 2020 • Xing Wu, Yibing Liu, Xiangyang Zhou, Dianhai Yu

As an alternative, we propose a new method for BERT distillation, i.e., asking the teacher to generate smoothed word ids, rather than labels, for teaching the student model in knowledge distillation.
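A minimal sketch of the text-smoothing idea as described: the teacher's logits are softened into a distribution over vocabulary ids (the "smoothed word ids"), and the student is trained against that distribution instead of hard labels. The temperature and loss form below are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def text_smoothing_loss(teacher_logits: torch.Tensor,
                        student_logits: torch.Tensor,
                        temperature: float = 2.0) -> torch.Tensor:
    # teacher_logits, student_logits: (batch, seq_len, vocab_size)
    # Soften the teacher's prediction over vocabulary ids...
    smoothed_ids = F.softmax(teacher_logits / temperature, dim=-1)
    # ...and train the student's distribution toward it (soft cross-entropy).
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(smoothed_ids * log_probs).sum(dim=-1).mean()
```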

Knowledge Distillation • Language Modelling

D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension

1 code implementation • WS 2019 • Hongyu Li, Xiyuan Zhang, Yibing Liu, Yiming Zhang, Quan Wang, Xiangyang Zhou, Jing Liu, Hua Wu, Haifeng Wang

In this paper, we introduce the simple system Baidu submitted to the MRQA (Machine Reading for Question Answering) 2019 Shared Task, which focused on the generalization of machine reading comprehension (MRC) models.

Machine Reading Comprehension • Multi-Task Learning • +1

Quantifying and Alleviating the Language Prior Problem in Visual Question Answering

1 code implementation • 13 May 2019 • Yangyang Guo, Zhiyong Cheng, Liqiang Nie, Yibing Liu, Yinglong Wang, Mohan Kankanhalli

Benefiting from advances in computer vision, natural language processing, and information retrieval techniques, visual question answering (VQA), which aims to answer questions about an image or a video, has received considerable attention over the past few years.

Information Retrieval • Question Answering • +2
