no code implementations • ICCV 2023 • Zhipeng Yu, Jiaheng Liu, Haoyu Qin, Yichao Wu, Kun Hu, Jiayi Tian, Ding Liang
Knowledge distillation is an effective model compression method that improves the performance of a lightweight student model by transferring knowledge from a well-performing teacher model; it has been widely adopted in many computer vision tasks, including face recognition (FR).
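A minimal sketch of the generic transfer objective this sentence describes: temperature-softened KL divergence between teacher and student logits, blended with the usual supervised loss. The temperature `T` and weight `alpha` are illustrative hyperparameters, not values from the paper, and FR distillation in practice often matches embeddings rather than logits.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic logit distillation: soften both output distributions with
    temperature T, match them via KL divergence, and blend with the
    cross-entropy on ground-truth labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales gradients so the soft term stays comparable across temperatures.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```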
no code implementations • 12 Apr 2022 • Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, Ke Xu
In this work, we observe that mutual relation knowledge between samples is also important for improving the discriminative ability of the student model's learned representation, and we propose an effective face recognition distillation method called CoupleFace, which additionally introduces Mutual Relation Distillation (MRD) into the existing distillation framework.
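A hedged sketch of what distilling relation knowledge between samples can look like: build pairwise cosine-similarity matrices over the batch from student and teacher embeddings and penalize their disagreement. This is the generic relation-distillation pattern, not the exact CoupleFace/MRD formulation.

```python
import torch
import torch.nn.functional as F

def mutual_relation_loss(student_emb, teacher_emb):
    """Distill sample-to-sample relations: entry (i, j) of each matrix is
    the cosine similarity between embeddings of samples i and j in the batch."""
    s = F.normalize(student_emb, dim=1)   # (B, D) student embeddings
    t = F.normalize(teacher_emb, dim=1)   # (B, D) teacher embeddings
    rel_s = s @ s.t()                     # (B, B) student relation matrix
    rel_t = t @ t.t()                     # (B, B) teacher relation matrix
    # Match the student's relations to the frozen teacher's.
    return F.mse_loss(rel_s, rel_t.detach())
```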
1 code implementation • 6 Feb 2022 • Jiayang Bai, Shuichang Lai, Haoyu Qin, Jie Guo, Yanwen Guo
In this paper, we propose a learning-based method for predicting dense depth values of a scene from a monocular omnidirectional image.
Ranked #7 on Depth Estimation on Stanford2D3D Panoramic
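To illustrate the task setup only (a hypothetical toy network, not the authors' architecture): monocular omnidirectional depth estimation maps a single equirectangular RGB panorama to a dense per-pixel depth map, which any fully convolutional encoder-decoder can express.

```python
import torch
import torch.nn as nn

class TinyPanoDepth(nn.Module):
    """Toy encoder-decoder for monocular omnidirectional depth: takes an
    equirectangular RGB panorama, predicts one depth value per pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                 # x: (B, 3, H, W) equirectangular image
        depth = self.decoder(self.encoder(x))
        return torch.relu(depth)          # depths are non-negative

# e.g. a 512x1024 panorama -> a (1, 1, 512, 1024) depth map:
# pred = TinyPanoDepth()(torch.randn(1, 3, 512, 1024))
```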
no code implementations • 1 Jan 2021 • Bo Wu, Haoyu Qin, Alireza Zareian, Carl Vondrick, Shih-Fu Chang
Children acquire language subconsciously by observing the surrounding world and listening to descriptions.
no code implementations • 9 Feb 2020 • Haoyu Qin
In this paper, we propose an Asymmetric Rejection Loss that makes full use of unlabeled images of under-represented groups to reduce the racial bias of face recognition models.
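The snippet does not give the loss formula, so the following is only a hedged reading of the idea: an unlabeled face has an unknown identity, so it can safely serve as a negative but never as a positive, and one asymmetric way to use it is to push its embedding away from every known class prototype. The function name and the rejection term below are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def rejection_term(unlabeled_emb, class_weights):
    """Illustrative 'rejection' term (assumption, not the paper's exact loss):
    push each unlabeled embedding away from all class prototypes by
    penalizing its highest cosine similarity to any of them."""
    u = F.normalize(unlabeled_emb, dim=1)   # (B, D) unlabeled embeddings
    w = F.normalize(class_weights, dim=1)   # (C, D) one prototype per identity
    sims = u @ w.t()                        # (B, C) cosine similarities
    # Penalize the closest prototype; clamp so well-separated samples cost nothing.
    return sims.max(dim=1).values.clamp(min=0).mean()

# total loss (sketch): cross-entropy on labeled faces + lambda * rejection_term(...)
```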