Search Results for author: Chaoping Tu

Found 4 papers, 0 papers with code

Revisiting Multi-modal 3D Semantic Segmentation in Real-world Autonomous Driving

no code implementations • 13 Oct 2023 • Feng Jiang, Chaoping Tu, Gang Zhang, Jun Li, Hanqing Huang, Junyu Lin, Di Feng, Jian Pu

LiDAR and camera are two critical sensors for multi-modal 3D semantic segmentation, and they are expected to be fused efficiently and robustly to ensure safety in various real-world scenarios.

Tasks: 3D Semantic Segmentation · Autonomous Driving · +1
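
None of these entries links to code, so as a rough illustration of the LiDAR-camera fusion the abstract describes, here is a minimal NumPy sketch that gathers camera features at each point's projected pixel and concatenates them with the point's LiDAR features. All names, shapes, and the nearest-neighbour lookup are hypothetical illustration choices, not the paper's method.

    import numpy as np

    def fuse_lidar_camera(point_feats, points_uv, image_feats):
        """Concatenate each point's LiDAR feature with the camera feature
        at its projected pixel location (nearest-neighbour lookup).

        point_feats : (N, C_l)    per-point LiDAR features
        points_uv   : (N, 2)      points projected to pixel coords (u, v)
        image_feats : (H, W, C_c) camera feature map
        """
        H, W, _ = image_feats.shape
        u = np.clip(points_uv[:, 0].round().astype(int), 0, W - 1)
        v = np.clip(points_uv[:, 1].round().astype(int), 0, H - 1)
        cam_feats = image_feats[v, u]  # (N, C_c) pixel features per point
        return np.concatenate([point_feats, cam_feats], axis=1)  # (N, C_l + C_c)

    # Toy usage with random features (shapes are arbitrary assumptions)
    fused = fuse_lidar_camera(
        np.random.randn(1000, 64),             # LiDAR point features
        np.random.rand(1000, 2) * [640, 480],  # projected pixel coordinates
        np.random.randn(480, 640, 32),         # camera feature map
    )
    print(fused.shape)  # (1000, 96)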

Video Affective Impact Prediction with Multimodal Fusion and Long-Short Temporal Context

no code implementations • 25 Sep 2019 • Yin Zhao, Longjun Cai, Chaoping Tu, Jie Zhang, Wu Wei

Feature extraction, multi-modal fusion, and temporal context fusion are crucial stages for predicting valence and arousal values of emotional impact, but they have not yet been fully exploited.
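
As a hedged sketch of the three stages the abstract names (feature extraction, multi-modal fusion, temporal context fusion), the toy PyTorch module below concatenates precomputed visual and audio features and runs a GRU over the segment sequence before regressing valence and arousal. The dimensions, the GRU, and the last-step pooling are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class AffectRegressor(nn.Module):
        """Toy pipeline: per-segment features (assumed precomputed),
        multi-modal fusion (concatenation), temporal context fusion
        (a GRU over segments), then a valence/arousal regression head."""

        def __init__(self, dim_visual=128, dim_audio=64, dim_hidden=128):
            super().__init__()
            self.temporal = nn.GRU(dim_visual + dim_audio, dim_hidden,
                                   batch_first=True)
            self.head = nn.Linear(dim_hidden, 2)  # (valence, arousal)

        def forward(self, visual_feats, audio_feats):
            fused = torch.cat([visual_feats, audio_feats], dim=-1)  # (B, T, Dv+Da)
            context, _ = self.temporal(fused)                       # (B, T, H)
            return self.head(context[:, -1])                        # (B, 2)

    model = AffectRegressor()
    out = model(torch.randn(4, 16, 128), torch.randn(4, 16, 64))
    print(out.shape)  # torch.Size([4, 2])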

Video Affective Effects Prediction with Multi-modal Fusion and Shot-Long Temporal Context

no code implementations • 1 Sep 2019 • Jie Zhang, Yin Zhao, Longjun Cai, Chaoping Tu, Wu Wei

We select the most suitable modalities for the valence and arousal tasks respectively, and each modality's features are extracted with a modality-specific deep model pre-trained on a large generic dataset.
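
To illustrate the "modality-specific pre-trained deep model" step, here is a minimal sketch that uses a frozen ImageNet ResNet-18 from torchvision as a stand-in visual feature extractor; the backbone choice and the frame shapes are assumptions, not necessarily what the authors used.

    import torch
    import torchvision.models as models

    # Stand-in for a modality-specific pre-trained extractor:
    # a frozen ImageNet ResNet-18 with its classifier head removed.
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()  # keep the 512-d pooled features
    backbone.eval()

    with torch.no_grad():
        frames = torch.randn(8, 3, 224, 224)  # a toy batch of video frames
        visual_feats = backbone(frames)       # (8, 512) per-frame features
    print(visual_feats.shape)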
