Search Results for author: Yong-Lu Li

Found 31 papers, 22 papers with code

Primitive-based 3D Human-Object Interaction Modelling and Programming

no code implementations 17 Dec 2023 SiQi Liu, Yong-Lu Li, Zhou Fang, Xinpeng Liu, Yang You, Cewu Lu

To explore an effective embedding of HAOI for the machine, we build a new benchmark on 3D HAOI consisting of primitives together with their images and propose a task requiring machines to recover 3D HAOI using primitives from images.

3D Reconstruction Human-Object Interaction Detection +2

Revisit Human-Scene Interaction via Space Occupancy

no code implementations 5 Dec 2023 Xinpeng Liu, Haowen Hou, Yanchao Yang, Yong-Lu Li, Cewu Lu

Human-scene Interaction (HSI) generation is a challenging task and crucial for various downstream tasks.

Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement

1 code implementation 1 Dec 2023 Ziyu Wang, Yue Xu, Cewu Lu, Yong-Lu Li

It first distills the videos into still images as static memory and then compensates for the dynamic and motion information with a learnable dynamic memory block.

Disentanglement
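The static-dynamic split described above can be sketched as follows — a minimal illustration assuming a distilled clip is reconstructed from one still image plus learnable per-frame residuals; the names and shapes are mine, not the paper's code:

```python
import numpy as np

def compose_video(static_memory, dynamic_memory):
    """Rebuild a distilled video clip from a single still image (static
    memory) plus small per-frame motion residuals (dynamic memory)."""
    # Broadcast the still image across the time axis and add the residuals.
    return static_memory[None, ...] + dynamic_memory

static_memory = np.zeros((4, 4))                         # one distilled frame
dynamic_memory = 0.1 * np.arange(3)[:, None, None] * np.ones((1, 4, 4))
clip = compose_video(static_memory, dynamic_memory)      # shape (3, 4, 4)
```

In this framing only the compact residuals carry motion, which is why storing one image plus a small dynamic memory can be far cheaper than storing every frame.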

Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning

no code implementations NeurIPS 2023 Xiaoqian Wu, Yong-Lu Li, Jianhua Sun, Cewu Lu

One possible path of activity reasoning is building a symbolic system composed of symbols and rules, where one rule connects multiple symbols, implying human knowledge and reasoning abilities.

Bridging the Gap between Human Motion and Action Semantics via Kinematic Phrases

no code implementations 6 Oct 2023 Xinpeng Liu, Yong-Lu Li, Ailing Zeng, Zizheng Zhou, Yang You, Cewu Lu

The goal of motion understanding is to establish a reliable mapping between motion and action semantics, which is a challenging many-to-many problem.

EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding

no code implementations ICCV 2023 Yue Xu, Yong-Lu Li, Zhemin Huang, Michael Xu Liu, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang

With the surge in attention to Egocentric Hand-Object Interaction (Ego-HOI), large-scale datasets such as Ego4D and EPIC-KITCHENS have been proposed.

Action Recognition Temporal Action Localization

Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection

1 code implementation 28 May 2023 Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang

Our method consistently enhances the distillation algorithms, even on much larger-scale and more heterogeneous datasets, e.g., ImageNet-1K and Kinetics-400.

From Isolated Islands to Pangea: Unifying Semantic Space for Human Action Understanding

no code implementations 2 Apr 2023 Yong-Lu Li, Xiaoqian Wu, Xinpeng Liu, Zehao Wang, Yiming Dou, Yikun Ji, Junyi Zhang, Yixing Li, Jingru Tan, Xudong Lu, Cewu Lu

By aligning the classes of previous datasets to our semantic space, we gather (image/video/skeleton/MoCap) datasets into a unified database in a unified label system, i.e., bridging "isolated islands" into a "Pangea".

Action Understanding Transfer Learning

Beyond Object Recognition: A New Benchmark towards Object Concept Learning

no code implementations ICCV 2023 Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Yuan YAO, SiQi Liu, Cewu Lu

To support OCL, we build a densely annotated knowledge base including extensive labels for three levels of object concept (category, attribute, affordance), and the causal relations of three levels.

Attribute Object +1
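The three-level structure described above could be represented along these lines — an illustrative sketch only; the field names and the example entry are assumptions, not the benchmark's actual schema:

```python
# Illustrative OCL-style entry: three concept levels (category, attribute,
# affordance) plus causal links from attributes to affordances.
entry = {
    "category": "knife",
    "attributes": ["sharp", "metal"],
    "affordances": ["cut"],
    "causal": [("sharp", "cut")],  # attribute -> affordance
}

def supported_affordances(entry):
    """Return affordances that have at least one causal attribute present."""
    present = set(entry["attributes"])
    return sorted({aff for attr, aff in entry["causal"] if attr in present})
```

The point of the causal links is that affordances follow from attributes (a knife cuts because it is sharp), so reasoning over the base is more than label lookup.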

Constructing Balance from Imbalance for Long-tailed Image Recognition

1 code implementation 4 Aug 2022 Yue Xu, Yong-Lu Li, Jiefeng Li, Cewu Lu

Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, model design, etc.

Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection

1 code implementation 28 Jul 2022 Xiaoqian Wu, Yong-Lu Li, Xinpeng Liu, Junyi Zhang, Yuzhe Wu, Cewu Lu

Though significant progress has been made, interactiveness learning remains a challenging problem in HOI detection: existing methods usually generate redundant negative H-O pair proposals and fail to effectively extract interactive pairs.

Human-Object Interaction Detection

Learning to Anticipate Future with Dynamic Context Removal

1 code implementation CVPR 2022 Xinyu Xu, Yong-Lu Li, Cewu Lu

Anticipating future events is an essential feature for intelligent systems and embodied AI.

Highlighting Object Category Immunity for the Generalization of Human-Object Interaction Detection

1 code implementation 19 Feb 2022 Xinpeng Liu, Yong-Lu Li, Cewu Lu

To achieve OC-immunity, we propose an OC-immune network that decouples the inputs from OC, extracts OC-immune representations, and leverages uncertainty quantification to generalize to unseen objects.

Human-Object Interaction Detection Object +1

HAKE: A Knowledge Engine Foundation for Human Activity Understanding

3 code implementations 14 Feb 2022 Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu, Yue Xu, Hao-Shu Fang, Cewu Lu

Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.

Action Recognition Human-Object Interaction Detection +2

Human Trajectory Prediction With Momentary Observation

no code implementations CVPR 2022 Jianhua Sun, YuXuan Li, Liang Chai, Hao-Shu Fang, Yong-Lu Li, Cewu Lu

The human trajectory prediction task aims to analyze human future movements given their past status, which is a crucial step for many autonomous systems such as self-driving cars and social robots.

Self-Driving Cars Trajectory Prediction

Localization with Sampling-Argmax

1 code implementation NeurIPS 2021 Jiefeng Li, Tong Chen, Ruiqi Shi, Yujing Lou, Yong-Lu Li, Cewu Lu

In this work, we propose sampling-argmax, a differentiable training method that imposes implicit constraints to the shape of the probability map by minimizing the expectation of the localization error.

3D Human Pose Estimation
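The objective described above can be illustrated with a 1-D toy version — my own sketch of "minimizing the expectation of the localization error", not the paper's implementation:

```python
import numpy as np

def expected_localization_error(logits, target):
    """Treat the softmax of a 1-D heatmap as a probability map p and return
    E_{x~p}[|x - target|], a differentiable surrogate for hard-argmax
    localization."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    coords = np.arange(len(logits), dtype=float)
    return float(np.sum(p * np.abs(coords - target)))

# A map sharply peaked at the target incurs a much smaller expected error
# than a flat map, so minimizing this loss implicitly shapes the map.
sharp = np.array([0.0, 0.0, 8.0, 0.0, 0.0])   # peak at coordinate 2
flat = np.zeros(5)                             # uniform map
```

Because the loss is an expectation over the probability map rather than a hard argmax, gradients flow to every bin, which is what makes the constraint on the map's shape "implicit".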

Learning Single/Multi-Attribute of Object with Symmetry and Group

1 code implementation 9 Oct 2021 Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Cewu Lu

To model the compositional nature of these concepts, it is a good choice to learn them as transformations, e.g., coupling and decoupling.

Attribute Compositional Zero-Shot Learning

Symmetry and Group in Attribute-Object Compositions

1 code implementation CVPR 2020 Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu

To model the compositional nature of these general concepts, it is a good choice to learn them through transformations, such as coupling and decoupling.

Ranked #1 on Compositional Zero-Shot Learning on MIT-States (Top-1 accuracy % metric)

Attribute Compositional Zero-Shot Learning +1

InstaBoost: Boosting Instance Segmentation via Probability Map Guided Copy-Pasting

3 code implementations ICCV 2019 Hao-Shu Fang, Jianhua Sun, Runzhong Wang, Minghao Gou, Yong-Lu Li, Cewu Lu

With the guidance of such map, we boost the performance of R101-Mask R-CNN on instance segmentation from 35.7 mAP to 37.9 mAP without modifying the backbone or network structure.

Data Augmentation Instance Segmentation +3
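The copy-paste step could look roughly like this — a simplified sketch assuming a precomputed paste-probability map; the function name is illustrative and edge handling is naive:

```python
import numpy as np

def guided_paste(image, instance, mask, prob_map, rng):
    """Sample a paste location from a probability map, then copy the masked
    instance pixels into the image at that location."""
    p = (prob_map / prob_map.sum()).ravel()
    y, x = np.unravel_index(rng.choice(p.size, p=p), prob_map.shape)
    out = image.copy()
    h, w = instance.shape[:2]
    region = out[y:y + h, x:x + w]                # may be clipped at edges
    m = mask[:region.shape[0], :region.shape[1]] > 0
    region[m] = instance[:region.shape[0], :region.shape[1]][m]
    return out, (int(y), int(x))
```

With the map concentrated near the object's original location, pasted instances land in plausible neighborhoods — the intuition behind guiding copy-pasting with a probability map rather than pasting uniformly at random.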

HAKE: Human Activity Knowledge Engine

4 code implementations 13 Apr 2019 Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Mingyang Chen, Ze Ma, Shiyi Wang, Hao-Shu Fang, Cewu Lu

To address these issues and promote activity understanding, we build a large-scale Human Activity Knowledge Engine (HAKE) based on human body part states.

Ranked #2 on Human-Object Interaction Detection on HICO (using extra training data)

Action Detection Human-Object Interaction Detection +1

Transferable Interactiveness Knowledge for Human-Object Interaction Detection

3 code implementations CVPR 2019 Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yan-Feng Wang, Cewu Lu

Owing to the generalization of interactiveness, the interactiveness network is a transferable knowledge learner and can cooperate with any HOI detection model to achieve desirable results.

Human-Object Interaction Detection Object
