Search Results for author: Haozhi Cao

Found 15 papers, 3 papers with code

Reliable Spatial-Temporal Voxels For Multi-Modal Test-Time Adaptation

no code implementations · 11 Mar 2024 · Haozhi Cao, Yuecong Xu, Jianfei Yang, Pengyu Yin, Xingyu Ji, Shenghai Yuan, Lihua Xie

Multi-modal test-time adaptation (MM-TTA) is proposed to adapt models to an unlabeled target domain by leveraging the complementary multi-modal inputs in an online manner.

Test-time Adaptation

Video Unsupervised Domain Adaptation with Deep Learning: A Comprehensive Survey

no code implementations · 17 Nov 2022 · Yuecong Xu, Haozhi Cao, Zhenghua Chen, XiaoLi Li, Lihua Xie, Jianfei Yang

To uniformly tackle performance degradation and the high cost of video annotation, video unsupervised domain adaptation (VUDA) adapts video models from a labeled source domain to an unlabeled target domain by alleviating video domain shift, improving the generalizability and portability of video models.

Action Recognition · Unsupervised Domain Adaptation

Leveraging Endo- and Exo-Temporal Regularization for Black-box Video Domain Adaptation

no code implementations · 10 Aug 2022 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.

Action Recognition · Unsupervised Domain Adaptation

Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition

1 code implementation · 9 Mar 2022 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Zhenghua Chen

Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments.

Action Recognition · Source-Free Domain Adaptation · +1

Going Deeper into Recognizing Actions in Dark Environments: A Comprehensive Benchmark Study

no code implementations · 19 Feb 2022 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu

While action recognition (AR) has seen substantial improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models that are robust to the challenging environments of real-world scenarios remain under-explored.

Action Recognition · Autonomous Driving

Self-Supervised Video Representation Learning by Video Incoherence Detection

no code implementations · 26 Sep 2021 · Haozhi Cao, Yuecong Xu, Jianfei Yang, Kezhi Mao, Lihua Xie, Jianxiong Yin, Simon See

This paper introduces a novel self-supervised method that leverages incoherence detection for video representation learning.

Action Recognition · Contrastive Learning · +3

Multi-Source Video Domain Adaptation with Temporal Attentive Moment Alignment

no code implementations · 21 Sep 2021 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Rui Zhao, Zhenghua Chen

Multi-Source Domain Adaptation (MSDA) is a more practical domain adaptation scenario for real-world applications.

Unsupervised Domain Adaptation

Partial Video Domain Adaptation with Partial Adversarial Temporal Attentive Network

no code implementations · ICCV 2021 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Qi Li, Kezhi Mao, Zhenghua Chen

For videos, such negative transfer could be triggered by both spatial and temporal features, which leads to a more challenging Partial Video Domain Adaptation (PVDA) problem.

Partial Domain Adaptation

PNL: Efficient Long-Range Dependencies Extraction with Pyramid Non-Local Module for Action Recognition

no code implementations · 9 Jun 2020 · Yuecong Xu, Haozhi Cao, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See

Empirical results demonstrate the effectiveness and efficiency of our PNL module, which achieves state-of-the-art performance of 83.09% on the Mini-Kinetics dataset with reduced computation cost compared to the non-local block.

Action Recognition

ARID: A New Dataset for Recognizing Action in the Dark

1 code implementation · 6 Jun 2020 · Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin, Simon See

We bridge the data gap for this task by collecting a new dataset: the Action Recognition in the Dark (ARID) dataset.

Action Recognition
