Search Results for author: Jilin Mei

Found 8 papers, 3 papers with code

PA&DA: Jointly Sampling PAth and DAta for Consistent NAS

1 code implementation • CVPR 2023 • Shun Lu, Yu Hu, Longxing Yang, Zihao Sun, Jilin Mei, Jianchao Tan, Chengru Song

Our method requires only negligible computation cost for optimizing the sampling distributions of path and data, yet achieves lower gradient variance during supernet training and better generalization performance for the supernet, resulting in a more consistent NAS.
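The blurb above gives only the high-level idea; as a rough, hypothetical illustration of what jointly sampling a supernet path and a data batch from learned distributions could look like (the names `path_scores`, `to_dist`, and the moving-average score update are purely illustrative assumptions, not the paper's released code):

```python
# Illustrative sketch only (not the paper's implementation): joint importance
# sampling of a supernet path and a data batch, with scores updated from
# observed gradient norms so high-gradient paths/samples are drawn more often.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_samples, batch_size = 10, 1000, 32

path_scores = np.ones(n_paths)      # running importance of each candidate path
data_scores = np.ones(n_samples)    # running importance of each training sample

def to_dist(scores):
    """Turn non-negative scores into a sampling distribution."""
    return scores / scores.sum()

for step in range(100):
    p_path, p_data = to_dist(path_scores), to_dist(data_scores)
    path = rng.choice(n_paths, p=p_path)                       # sampled sub-network
    batch = rng.choice(n_samples, size=batch_size, p=p_data)   # sampled data batch

    # Stand-in for a real training step: fake per-example gradient norms.
    grad_norms = rng.random(batch_size)

    # Importance-sampling weights keep the weighted gradient estimate unbiased.
    is_weights = 1.0 / (n_samples * p_data[batch])
    weighted_grad = (is_weights * grad_norms).mean()           # would drive the update

    # Move the running scores towards the observed gradient magnitudes.
    path_scores[path] = 0.9 * path_scores[path] + 0.1 * grad_norms.mean()
    data_scores[batch] = 0.9 * data_scores[batch] + 0.1 * grad_norms
```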

Few-shot 3D LiDAR Semantic Segmentation for Autonomous Driving

no code implementations • 17 Feb 2023 • Jilin Mei, Junbao Zhou, Yu Hu

We propose a few-shot 3D LiDAR semantic segmentation method that predicts both novel classes and base classes simultaneously.

Autonomous Driving • Generalized Few-Shot Semantic Segmentation • +4
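As a rough sketch of the "base plus novel classes" idea behind generalized few-shot segmentation, the snippet below concatenates a frozen base-class classifier with prototypes pooled from a few support examples; the function names, masked-average-pooling choice, and tensor shapes are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a generalized few-shot segmentation head:
# frozen base-class classifier plus novel-class prototypes from support features.
import torch

feat_dim, n_base, n_novel = 64, 10, 3
base_weights = torch.randn(n_base, feat_dim)         # pretrained base classifier (frozen)

def novel_prototypes(support_feats, support_masks):
    """Average support features inside each novel-class mask (masked average pooling)."""
    protos = []
    for mask in support_masks:                        # one binary HxW mask per novel class
        m = mask.float().unsqueeze(0)                 # 1 x H x W
        proto = (support_feats * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)
        protos.append(proto)
    return torch.stack(protos)                        # n_novel x feat_dim

def predict(query_feats, novel_protos):
    """Score every point/pixel against base weights and novel prototypes jointly."""
    flat = query_feats.flatten(1).t()                 # (H*W) x feat_dim
    logits = torch.cat([flat @ base_weights.t(),      # base-class logits
                        flat @ novel_protos.t()], dim=1)
    return logits.argmax(dim=1)                       # labels over base + novel classes

# Toy usage with random tensors standing in for backbone features.
support_feats = torch.randn(feat_dim, 32, 32)
support_masks = [torch.randint(0, 2, (32, 32)) for _ in range(n_novel)]
query_feats = torch.randn(feat_dim, 32, 32)
pred = predict(query_feats, novel_prototypes(support_feats, support_masks))
```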

AGNAS: Attention-Guided Micro- and Macro-Architecture Search

1 code implementation • International Conference on Machine Learning 2022 • Zihao Sun, Yu Hu, Shun Lu, Longxing Yang, Jilin Mei, Yinhe Han, Xiaowei Li

We utilize the attention weights to represent the importance of the relevant operations for the micro search or the importance of the relevant blocks for the macro search.

Neural Architecture Search
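As a loose illustration of using attention-style scores as operation importance, the sketch below scores each candidate operation from its pooled output, mixes the candidates by those scores, and picks the highest-scoring operation; the `AttentionMixedOp` class, the pooled-feature scoring layer, and the selection rule are stand-ins, not the AGNAS module itself.

```python
# Illustrative stand-in (not the AGNAS module): attention-style scores over
# candidate operations weight the mixed output and rank the operations.
import torch
import torch.nn as nn

class AttentionMixedOp(nn.Module):
    """Scores each candidate operation from its pooled output and mixes them."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.score = nn.Linear(channels, 1)   # attention-style scoring layer
        self.last_scores = None               # cached importance of each operation

    def forward(self, x):
        outs = [op(x) for op in self.ops]                              # candidate outputs
        pooled = torch.stack([o.mean(dim=(0, 2, 3)) for o in outs])    # n_ops x channels
        scores = torch.softmax(self.score(pooled).squeeze(-1), dim=0)  # importance weights
        self.last_scores = scores.detach()
        return sum(w * o for w, o in zip(scores, outs))

    def strongest_op(self):
        """Select the operation with the largest attention weight (derived architecture)."""
        return int(torch.argmax(self.last_scores))

# Toy usage: after (pretend) training, the cached scores rank the candidate operations.
mixed = AttentionMixedOp(channels=8)
_ = mixed(torch.randn(2, 8, 16, 16))
best = mixed.strongest_op()
```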

Incorporating Human Domain Knowledge in 3D LiDAR-based Semantic Segmentation

no code implementations • 23 May 2019 • Jilin Mei, Huijing Zhao

We propose a new method that combines the strengths of traditional methods and deep learning by incorporating human domain knowledge into the neural network model, reducing the demand for large numbers of manual annotations and improving training efficiency.

Semantic Segmentation

Semantic Segmentation of 3D LiDAR Data in Dynamic Scene Using Semi-supervised Learning

no code implementations • 3 Sep 2018 • Jilin Mei, Biao Gao, Donghao Xu, Wen Yao, Xijun Zhao, Huijing Zhao

This work studies the semantic segmentation of 3D LiDAR data in dynamic scenes for autonomous driving applications.

Robotics
