Search Results for author: Boxiao Liu

Found 15 papers, 2 papers with code

GeoMIM: Towards Better 3D Knowledge Transfer via Masked Image Modeling for Multi-view 3D Understanding

1 code implementation ICCV 2023 Jihao Liu, Tai Wang, Boxiao Liu, Qihang Zhang, Yu Liu, Hongsheng Li

In this paper, we propose Geometry Enhanced Masked Image Modeling (GeoMIM) to transfer the knowledge of a LiDAR model in a pretrain-finetune paradigm, improving multi-view camera-based 3D detection.

3D Object Detection, object-detection +1

Masked Autoencoders Are Stronger Knowledge Distillers

no code implementations ICCV 2023 Shanshan Lao, Guanglu Song, Boxiao Liu, Yu Liu, Yujiu Yang

In MKD, random patches of the input image are masked, and the corresponding missing features are recovered by forcing them to imitate the output of the teacher.

Knowledge Distillation, object-detection +2
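The masking-and-imitation loss described in the MKD snippet above can be sketched in a few lines. This is a hypothetical NumPy illustration, not the paper's implementation; the `student_fn` callable and the token-level masking granularity are assumptions:

```python
import numpy as np

def masked_distillation_loss(tokens, teacher_feats, student_fn,
                             mask_ratio=0.5, seed=0):
    # Randomly mask a subset of input tokens (rows), run the student on
    # the visible tokens only, and penalize the recovered features at
    # masked positions for deviating from the teacher's features.
    rng = np.random.default_rng(seed)
    mask = rng.random(tokens.shape[0]) < mask_ratio   # True = masked token
    visible = tokens * (~mask)[:, None]               # zero out masked tokens
    recovered = student_fn(visible)                   # student predicts all positions
    if not mask.any():
        return 0.0
    diff = recovered[mask] - teacher_feats[mask]      # imitate teacher where masked
    return float(np.mean(diff ** 2))
```

With an identity student and the teacher's own features as targets, the loss reduces to the energy of the masked tokens, which makes the sketch easy to sanity-check.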

UniKD: Universal Knowledge Distillation for Mimicking Homogeneous or Heterogeneous Object Detectors

no code implementations ICCV 2023 Shanshan Lao, Guanglu Song, Boxiao Liu, Yu Liu, Yujiu Yang

Bridging this semantic gap currently requires case-by-case algorithm design, which is time-consuming and relies heavily on expert tuning.

Knowledge Distillation

Rethinking Robust Representation Learning Under Fine-grained Noisy Faces

no code implementations 8 Aug 2022 Bingqi Ma, Guanglu Song, Boxiao Liu, Yu Liu

To better understand this, we reformulate the noise type of each class in a more fine-grained manner as N-identities|K^C-clusters.

Face Recognition, Representation Learning

TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers

1 code implementation 18 Jul 2022 Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu

In this paper, we propose a novel data augmentation technique TokenMix to improve the performance of vision transformers.

Data Augmentation
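As a rough illustration of patch-level image mixing, the general idea behind token-based mixing augmentations, here is a minimal NumPy sketch. It is not the paper's method: the actual TokenMix also derives soft labels from activation maps, which is omitted, and the patch size and mixing-coefficient choice here are assumptions:

```python
import numpy as np

def token_mix(img_a, img_b, patch=4, ratio=0.5, seed=0):
    # Cut both images into patch x patch tokens, then replace a random
    # subset of img_a's tokens with the corresponding tokens of img_b.
    h, w = img_a.shape[:2]
    rng = np.random.default_rng(seed)
    mask = rng.random((h // patch, w // patch)) < ratio   # True = take from img_b
    pixel_mask = np.kron(mask, np.ones((patch, patch), dtype=bool))
    if img_a.ndim == 3:                                   # broadcast over channels
        pixel_mask = pixel_mask[..., None]
    mixed = np.where(pixel_mask, img_b, img_a)
    lam = float(mask.mean())  # fraction taken from img_b, for mixing the labels
    return mixed, lam
```

The returned `lam` plays the role of the mixing coefficient used to interpolate the two labels, analogous to CutMix.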

Meta Knowledge Distillation

no code implementations 16 Feb 2022 Jihao Liu, Boxiao Liu, Hongsheng Li, Yu Liu

Recent studies pointed out that knowledge distillation (KD) suffers from two degradation problems: the teacher-student gap and incompatibility with strong data augmentations, making it inapplicable to training state-of-the-art models, which rely on advanced augmentations.

Data Augmentation, Image Classification +1

An Efficient Pruning Process with Locality Aware Exploration and Dynamic Graph Editing for Subgraph Matching

no code implementations 22 Dec 2021 Zite Jiang, Boxiao Liu, Shuai Zhang, Xingzhong Hou, Mengting Yuan, Haihang You

Subgraph matching is an NP-complete problem that extracts isomorphic embeddings of a query graph $q$ in a data graph $G$.
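For context, a brute-force subgraph-matching baseline makes the NP-completeness concrete; pruning and exploration-order techniques like the paper's exist to avoid exactly this exponential enumeration. This sketch is illustrative, not the paper's algorithm:

```python
import itertools

def subgraph_matches(q_nodes, q_edges, g_nodes, g_edges):
    # Brute-force baseline: try every injective mapping of query nodes onto
    # data-graph nodes and keep the ones that preserve all query edges.
    # Runtime is exponential in |q_nodes|, hence the need for pruning.
    g_adj = set(g_edges) | {(v, u) for (u, v) in g_edges}  # undirected adjacency
    matches = []
    for perm in itertools.permutations(g_nodes, len(q_nodes)):
        mapping = dict(zip(q_nodes, perm))
        if all((mapping[u], mapping[v]) in g_adj for (u, v) in q_edges):
            matches.append(mapping)
    return matches
```

For example, matching a triangle query against a square with one diagonal finds the two triangles in the data graph, each counted once per ordering of its three nodes.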

Rectifying the Data Bias in Knowledge Distillation

no code implementations ICCV 2021 Boxiao Liu, Shenghan Zhang, Guanglu Song, Haihang You, Yu Liu

In this paper, we first quantitatively define the uniformity of the sampled data for training, providing a unified view for methods that learn from biased data.

 Ranked #1 on Face Verification on IJB-C (training dataset metric)

Face Recognition, Face Verification +3

Exploiting Knowledge Distillation for Few-Shot Image Generation

no code implementations 29 Sep 2021 Xingzhong Hou, Boxiao Liu, Fang Wan, Haihang You

The existing pipeline first pretrains a source model (containing a generator and a discriminator) on a large-scale dataset and then finetunes it on a target domain with limited samples.

Image Generation, Knowledge Distillation +1

FNAS: Uncertainty-Aware Fast Neural Architecture Search

no code implementations 25 May 2021 Jihao Liu, Ming Zhang, Yangting Sun, Boxiao Liu, Guanglu Song, Yu Liu, Hongsheng Li

Further, an architecture knowledge pool together with a block similarity function is proposed to reuse parameter knowledge, reducing the search time by a factor of 2.

Fairness, Neural Architecture Search +1

Switchable K-Class Hyperplanes for Noise-Robust Representation Learning

no code implementations ICCV 2021 Boxiao Liu, Guanglu Song, Manyuan Zhang, Haihang You, Yu Liu

When combined with the popular ArcFace on million-scale data representation learning, we found that the switchable mechanism in SKH effectively eliminates the gradient conflict caused by real-world label noise on a single K-class hyperplane.

Model Optimization, Representation Learning +1

Fast MNAS: Uncertainty-aware Neural Architecture Search with Lifelong Learning

no code implementations 1 Jan 2021 Jihao Liu, Yangting Sun, Ming Zhang, Boxiao Liu, Yu Liu

Further, a lifelong knowledge pool together with a block similarity function is proposed to reuse lifelong parameter knowledge, reducing the search time by a factor of 2.

Fairness, Neural Architecture Search

Utilizing the Instability in Weakly Supervised Object Detection

no code implementations 14 Jun 2019 Yan Gao, Boxiao Liu, Nan Guo, Xiaochun Ye, Fang Wan, Haihang You, Dongrui Fan

Weakly supervised object detection (WSOD) focuses on training an object detector with only image-level annotations, and is challenging due to the gap between the supervision and the objective.

Multiple Instance Learning, Object +2
