Search Results for author: Jianhua Xu

Found 6 papers, 1 paper with code

OpenSight: A Simple Open-Vocabulary Framework for LiDAR-Based Object Detection

no code implementations • 12 Dec 2023 Hu Zhang, Jianhua Xu, Tao Tang, Haiyang Sun, Xin Yu, Zi Huang, Kaicheng Yu

OpenSight utilizes 2D-3D geometric priors for the initial discernment and localization of generic objects, followed by a more specific semantic interpretation of the detected objects.

Object Detection

DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation

no code implementations • 6 Sep 2023 Guang Yang, Yin Tang, Zhijian Wu, Jun Li, Jianhua Xu, Xili Wan

Recent mainstream masked distillation methods work by reconstructing selectively masked regions of a student network's feature map under the guidance of its teacher counterpart; a minimal sketch of this idea appears after this entry.

Knowledge Distillation, Object Detection +1
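The masked-reconstruction idea described above fits in a few lines of PyTorch. The sketch below is a generic masked feature distillation loss, not the DMKD method itself: the class name, the two-layer generation head, and the mask_ratio default are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedFeatureDistillation(nn.Module):
    """Illustrative masked feature distillation loss (not the DMKD method).

    Randomly masks spatial positions of the student feature map, then trains
    a light generation head to reconstruct the teacher's features at the
    masked positions.
    """

    def __init__(self, student_dim: int, teacher_dim: int, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Generation head: projects (masked) student features into teacher space.
        self.generator = nn.Sequential(
            nn.Conv2d(student_dim, teacher_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(teacher_dim, teacher_dim, kernel_size=3, padding=1),
        )

    def forward(self, feat_s: torch.Tensor, feat_t: torch.Tensor) -> torch.Tensor:
        # feat_s: (B, C_s, H, W) student features; feat_t: (B, C_t, H, W) teacher features.
        b, _, h, w = feat_s.shape
        # Random spatial mask: 1 keeps a position, 0 masks it out.
        keep = (torch.rand(b, 1, h, w, device=feat_s.device) > self.mask_ratio).float()
        reconstructed = self.generator(feat_s * keep)
        # Penalize reconstruction error only at the masked positions.
        masked = 1.0 - keep
        loss = ((reconstructed - feat_t) ** 2 * masked).sum() / masked.sum().clamp(min=1.0)
        return loss
```

In training, such a loss would typically be added to the detector's task loss with a weighting coefficient, with the teacher's features detached from the graph.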

EfficientSRFace: An Efficient Network with Super-Resolution Enhancement for Accurate Face Detection

no code implementations • 4 Jun 2023 Guangtao Wang, Jun Li, Jie Xie, Jianhua Xu, Bo Yang

In face detection, low-resolution faces, such as the many small faces that appear in a crowded group scene, are common in dense face prediction tasks.

Benchmarking, Face Detection +1

LogoNet: A Fine-Grained Network for Instance-Level Logo Sketch Retrieval

1 code implementation • 5 Apr 2023 Binbin Feng, Jun Li, Jianhua Xu

To our knowledge, this is the first publicly available instance-level logo sketch dataset.

Benchmarking +2

EfficientFace: An Efficient Deep Network with Feature Enhancement for Accurate Face Detection

no code implementations • 23 Feb 2023 Guangtao Wang, Jun Li, Zhijian Wu, Jianhua Xu, Jifeng Shen, Wankou Yang

In addition, this is conducive to estimating the locations of faces and enhancing the descriptive power of face features.

Descriptive, Face Detection

AMD: Adaptive Masked Distillation for Object Detection

no code implementations • 31 Jan 2023 Guang Yang, Yin Tang, Jun Li, Jianhua Xu, Xili Wan

As a general model compression paradigm, feature-based knowledge distillation lets the student model learn expressive features from its teacher counterpart; a minimal sketch of this paradigm appears after this entry.

Knowledge Distillation, Model Compression +3
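The feature-imitation paradigm mentioned above reduces to a very small loss module. The PyTorch sketch below shows the generic pattern, not AMD's adaptive masking; the class name and the 1x1 adaptation layer are illustrative choices.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureImitationLoss(nn.Module):
    """Generic feature-based distillation loss (an illustrative sketch,
    not the AMD method)."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # Adaptation layer: match the student's channel count to the teacher's.
        self.adapt = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, feat_s, feat_t):
        # feat_s: (B, C_s, H, W); feat_t: (B, C_t, H, W) with matching H and W.
        return F.mse_loss(self.adapt(feat_s), feat_t)
```

A typical training step combines it with the detection loss, e.g. total_loss = det_loss + lam * kd_loss(feat_s, feat_t.detach()), so gradients never flow into the frozen teacher.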
