no code implementations • 24 Aug 2023 • Wanyue Zhang, Rishabh Dabral, Thomas Leimkühler, Vladislav Golyanik, Marc Habermann, Christian Theobalt
Given an unseen object and a reference pose-object pair, we optimise for the object-aware pose that is closest in the feature space to the reference pose.
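The retrieval idea in this snippet can be sketched as a feature-space search. Note the paper optimises the pose continuously; this illustrative stand-in (all names hypothetical) simply picks the candidate pose whose embedding is nearest to the reference's:

```python
def closest_pose(candidate_poses, reference_pose, feature_fn):
    """Return the candidate whose feature embedding is nearest the reference's.

    candidate_poses: iterable of pose representations (any hashable structure).
    feature_fn: maps a pose to a flat feature vector (list of floats).
    Illustrative discrete search only -- the paper optimises in this space instead.
    """
    ref_feat = feature_fn(reference_pose)

    def sq_dist(pose):
        # squared Euclidean distance in feature space
        return sum((a - b) ** 2 for a, b in zip(feature_fn(pose), ref_feat))

    return min(candidate_poses, key=sq_dist)
```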
1 code implementation • 30 May 2022 • Wenyu Zhang, Li Shen, Wanyue Zhang, Chuan-Sheng Foo
Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation.
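As a rough illustration of the batch-normalization update such methods perform at test time, here is a plain-Python sketch under simplifying assumptions (per-feature statistics on lists of vectors; not any specific paper's procedure):

```python
import math

def update_bn_stats(running_mean, running_var, batch, momentum=0.1):
    """Blend a test batch's statistics into running BN estimates.

    running_mean / running_var: per-feature lists from the source model.
    batch: list of feature vectors observed in the target environment.
    Illustrative only -- real BN layers do this per channel on tensors.
    """
    n, dim = len(batch), len(running_mean)
    batch_mean = [sum(x[d] for x in batch) / n for d in range(dim)]
    batch_var = [sum((x[d] - batch_mean[d]) ** 2 for x in batch) / n
                 for d in range(dim)]
    new_mean = [(1 - momentum) * m + momentum * bm
                for m, bm in zip(running_mean, batch_mean)]
    new_var = [(1 - momentum) * v + momentum * bv
               for v, bv in zip(running_var, batch_var)]
    return new_mean, new_var

def normalize(x, mean, var, eps=1e-5):
    """Normalize one feature vector with the (adapted) statistics."""
    return [(xi - m) / math.sqrt(v + eps) for xi, m, v in zip(x, mean, var)]
```

Streaming target batches through `update_bn_stats` gradually shifts the normalization statistics toward the target distribution, which is the core of the mitigation described above.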
no code implementations • 6 May 2022 • Xun Xu, Jingyi Liao, Lile Cai, Manh Cuong Nguyen, Kangkang Lu, Wanyue Zhang, Yasin Yazici, Chuan-Sheng Foo
Recent studies combined finetuning (FT) from pretrained weights with semi-supervised learning (SSL) to mitigate the challenges and claimed superior results in the low-label regime.
no code implementations • 2 May 2022 • Xian Shi, Xun Xu, Wanyue Zhang, Xiatian Zhu, Chuan-Sheng Foo, Kui Jia
We also demonstrate the feasibility of a more efficient training strategy.
no code implementations • CVPR 2022 • Hehe Fan, Xiaojun Chang, Wanyue Zhang, Yi Cheng, Ying Sun, Mohan Kankanhalli
In this paper, we propose an unsupervised domain adaptation method for deep point cloud representation learning.
1 code implementation • 11 Dec 2021 • Wanyue Zhang, Xun Xu, Fayao Liu, Chuan-Sheng Foo
Data augmentation is an important technique to reduce overfitting and improve learning performance, but existing works on data augmentation for 3D point cloud data are based on heuristics.
Ranked #1 on 3D Point Cloud Data Augmentation on ModelNet40
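The heuristic augmentations such works typically rely on — random rotation, per-point jitter, and global scaling — can be sketched as follows (an illustrative hand-crafted baseline, not the learned augmentation proposed here):

```python
import math
import random

def augment_point_cloud(points, max_angle=math.pi, sigma=0.01,
                        scale_range=(0.8, 1.25)):
    """Apply common heuristic augmentations to a point cloud.

    points: list of (x, y, z) tuples. Returns a new augmented list.
    These are the hand-tuned transforms that learned-augmentation
    methods aim to replace.
    """
    theta = random.uniform(-max_angle, max_angle)  # rotation about the z (up) axis
    scale = random.uniform(*scale_range)           # global rescaling factor
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for x, y, z in points:
        # rotate in the xy-plane
        rx = cos_t * x - sin_t * y
        ry = sin_t * x + cos_t * y
        # per-point Gaussian jitter, then global scale
        out.append((scale * (rx + random.gauss(0.0, sigma)),
                    scale * (ry + random.gauss(0.0, sigma)),
                    scale * (z + random.gauss(0.0, sigma))))
    return out
```

With jitter and scaling disabled, the transform reduces to a pure rotation, so distances from the rotation axis are preserved — a quick sanity check for implementations like this.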
no code implementations • 29 Sep 2021 • Wenyu Zhang, Li Shen, Chuan-Sheng Foo, Wanyue Zhang
Test-time adaptation of pre-trained source models with streaming unlabelled target data is an attractive setting that protects the privacy of source data, but it imposes mini-batch size and class-distribution requirements on the streaming data that may not hold in practice.
no code implementations • ECCV 2020 • Rui Huang, Wanyue Zhang, Abhijit Kundu, Caroline Pantofaru, David A. Ross, Thomas Funkhouser, Alireza Fathi
We use a U-Net style 3D sparse convolution network to extract features for each frame's LiDAR point-cloud.
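Sparse convolution operates only on occupied voxels rather than a dense grid. A minimal sketch of the voxelization step that produces such a sparse structure (illustrative preprocessing only, not the paper's network):

```python
from collections import defaultdict

def voxelize(points, voxel_size=0.1):
    """Group points into sparse voxels keyed by integer grid index.

    points: iterable of (x, y, z) tuples from one LiDAR frame.
    Returns {voxel_index: centroid} -- a toy per-voxel feature; real
    pipelines feed richer per-voxel features into sparse convolutions.
    """
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size),
               int(y // voxel_size),
               int(z // voxel_size))
        voxels[key].append((x, y, z))
    # per-voxel feature: centroid of the points inside the voxel
    return {k: tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for k, pts in voxels.items()}
```

Because only occupied voxels appear in the dictionary, memory and compute scale with the number of surface points rather than the volume of the scene, which is what makes sparse 3D convolution practical on LiDAR data.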