1 code implementation • ECCV 2020 • Dahlia Urbach, Yizhak Ben-Shabat, Michael Lindenbaum
We introduce a new deep learning method for point cloud comparison.
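As context for the comparison task, a minimal classical baseline is the Chamfer distance between two point sets — this is not the paper's learned method, just a sketch of the kind of point-cloud-to-point-cloud distance it aims to improve on:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3).

    A classical baseline for point cloud comparison; learned distances
    are typically more robust to sampling density and noise.
    """
    # Pairwise squared distances: (N, M)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Each point's distance to its nearest neighbour in the other set
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds are at distance zero; a shifted copy is not.
pts = np.random.rand(128, 3)
assert chamfer_distance(pts, pts) == 0.0
assert chamfer_distance(pts, pts + 1.0) > 0.0
```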
no code implementations • 14 Aug 2023 • Oren Shrout, Ori Nitzan, Yizhak Ben-Shabat, Ayellet Tal
Accurately detecting objects in the environment is a key challenge for autonomous vehicles.
no code implementations • 18 Apr 2023 • Zheyu Zhuang, Yizhak Ben-Shabat, Jiahao Zhang, Stephen Gould, Robert Mahony
The system comprises three modules: a visual servoing module that reaches for and grasps assembly parts in an unstructured, dynamic, multi-instance environment; an action recognition module that predicts human actions for implicit communication; and a visual handover module that uses this perceptual understanding of human behaviour to produce an intuitive and efficient collaborative assembly experience.
1 code implementation • CVPR 2023 • Jiahao Zhang, Anoop Cherian, Yanbin Liu, Yizhak Ben-Shabat, Cristian Rodriguez, Stephen Gould
In this paper, we consider a novel setting where such an alignment is between (i) instruction steps that are depicted as assembly diagrams (commonly seen in IKEA assembly manuals) and (ii) video segments from in-the-wild videos, where these videos comprise an enactment of the assembly actions in the real world.
1 code implementation • 11 Mar 2023 • Yizhak Ben-Shabat, Oren Shrout, Stephen Gould
We propose a novel method for 3D point cloud action recognition.
1 code implementation • CVPR 2023 • Chamin Hewa Koneputugodage, Yizhak Ben-Shabat, Stephen Gould
We propose a two-step approach, OG-INR, where we (1) construct a discrete octree and label which nodes are inside and outside the shape, and (2) optimize for a continuous, high-fidelity shape using an INR that is initially guided by the octree's labelling.
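The first step can be illustrated with a deliberately simplified 2D stand-in — a uniform grid instead of an octree, with a flood fill from the domain boundary doing the inside/outside labelling (the INR guidance step is omitted):

```python
import numpy as np
from collections import deque

def label_inside_outside(points, res=32):
    """Simplified 2D stand-in for the paper's octree labelling step:
    grid cells touched by the point set are 'surface', a flood fill from
    the domain boundary marks 'outside', and the remaining cells are
    'inside'. In OG-INR these discrete labels then guide the sign of the
    implicit network during optimisation (that step is omitted here).
    Assumes points lie in the unit square [0, 1]^2."""
    idx = np.clip((points * res).astype(int), 0, res - 1)
    surface = np.zeros((res, res), dtype=bool)
    surface[idx[:, 0], idx[:, 1]] = True

    outside = np.zeros((res, res), dtype=bool)
    q = deque()
    for i in range(res):
        for j in range(res):
            on_border = i in (0, res - 1) or j in (0, res - 1)
            if on_border and not surface[i, j]:
                outside[i, j] = True
                q.append((i, j))
    while q:  # 4-connected flood fill of the exterior
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < res and 0 <= nj < res \
                    and not surface[ni, nj] and not outside[ni, nj]:
                outside[ni, nj] = True
                q.append((ni, nj))
    return surface, outside, ~surface & ~outside

# Points on the perimeter of an axis-aligned square: the centre cell
# should be labelled inside, the corner cell outside.
t = np.linspace(0.25, 0.75, 200)
lo, hi = np.full_like(t, 0.25), np.full_like(t, 0.75)
pts = np.concatenate([np.c_[t, lo], np.c_[t, hi], np.c_[lo, t], np.c_[hi, t]])
surface, outside, inside = label_inside_outside(pts, res=32)
assert inside[16, 16] and outside[0, 0]
```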
1 code implementation • CVPR 2023 • Oren Shrout, Yizhak Ben-Shabat, Ayellet Tal
3D object detection within large 3D scenes is challenging not only due to the sparsity and irregularity of 3D point clouds, but also due to the extreme foreground-background imbalance and the class imbalance within the scene.
1 code implementation • CVPR 2022 • Yizhak Ben-Shabat, Chamin Hewa Koneputugodage, Stephen Gould
In this paper, we propose a divergence guided shape representation learning approach that does not require normal vectors as input.
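The two regularisers at play can be illustrated on a sampled function rather than a network — an eikonal term that pushes the gradient magnitude toward 1, and a divergence term on the gradient field; this is a finite-difference sketch of the idea, not the paper's training procedure:

```python
import numpy as np

def digs_style_losses(f: np.ndarray, h: float):
    """Grid-based illustration of two regularisers used in
    divergence-guided shape learning: the eikonal term pushes |grad f|
    toward 1, and the divergence term penalises div(grad f), encouraging
    a smooth signed distance field without ground-truth normals."""
    gy, gx = np.gradient(f, h)          # axis 0 is y, axis 1 is x
    grad_norm = np.sqrt(gx**2 + gy**2)
    eikonal = np.mean((grad_norm - 1.0) ** 2)
    # Divergence of the gradient field (the Laplacian of f)
    gyy, _ = np.gradient(gy, h)
    _, gxx = np.gradient(gx, h)
    divergence = np.mean(np.abs(gxx + gyy))
    return eikonal, divergence

# A true signed distance field (circle of radius 0.5) has |grad f| = 1
# almost everywhere, so its eikonal loss is near zero.
n, h = 129, 2.0 / 128
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
f = np.sqrt(xs**2 + ys**2) - 0.5
eik, div = digs_style_losses(f, h)
assert eik < 1e-2
```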
1 code implementation • 2 Dec 2021 • Adi Mesika, Yizhak Ben-Shabat, Ayellet Tal
This work presents a different approach for representing and learning the shape from a given point set.
1 code implementation • 1 Jul 2020 • Yizhak Ben-Shabat, Xin Yu, Fatemeh Sadat Saleh, Dylan Campbell, Cristian Rodriguez-Opazo, Hongdong Li, Stephen Gould
The availability of a large labeled dataset is a key requirement for applying deep learning methods to solve various computer vision tasks.
1 code implementation • ECCV 2020 • Yizhak Ben-Shabat, Stephen Gould
We propose a surface fitting method for unstructured 3D point clouds.
Ranked #6 on Surface Normals Estimation on PCPNet
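The surface-fitting view of normal estimation can be grounded with the classical unweighted baseline: fit a plane to a local neighbourhood and take the direction of least variance as the normal. The paper instead learns per-point weights and fits a higher-order polynomial surface, but this is the baseline such methods generalise:

```python
import numpy as np

def pca_normal(neighbors: np.ndarray) -> np.ndarray:
    """Classical least-squares plane fit for normal estimation: the
    normal is the eigenvector of the neighbourhood covariance matrix
    with the smallest eigenvalue (direction of least variance).
    The sign of the returned normal is arbitrary."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Points sampled from the plane z = 0 should give a normal along z.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (50, 2)), np.zeros(50)]
n = pca_normal(patch)
assert abs(abs(n[2]) - 1.0) < 1e-6
```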
1 code implementation • CVPR 2019 • Yizhak Ben-Shabat, Michael Lindenbaum, Anath Fischer
In this paper, we propose a normal estimation method for unstructured 3D point clouds.
Ranked #8 on Surface Normals Estimation on PCPNet
3 code implementations • 22 Nov 2017 • Yizhak Ben-Shabat, Michael Lindenbaum, Anath Fischer
The point cloud is gaining prominence as a method for representing 3D shapes, but its irregular format poses a challenge for deep learning methods.
Ranked #57 on 3D Part Segmentation on ShapeNet-Part
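One way around the irregular format is to describe the points with soft statistics of a uniform Gaussian grid, yielding a fixed-size, grid-structured input a CNN can consume. The sketch below keeps only zeroth- and first-order statistics; the paper's full representation also uses second-order terms and min/max pooling:

```python
import numpy as np

def grid_fisher_features(points: np.ndarray, k: int = 4, sigma: float = None):
    """Simplified Fisher-vector-style grid representation of a point
    cloud (zeroth- and first-order statistics only).  Gaussians sit on a
    uniform k^3 grid over the unit cube with uniform priors; the output
    size is fixed regardless of the number of input points."""
    if sigma is None:
        sigma = 1.0 / k
    # Gaussian centres on a uniform k^3 grid
    axis = (np.arange(k) + 0.5) / k
    centres = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
    # Soft assignment of each point to each Gaussian
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)  # (N, K)
    logp = -0.5 * d2 / sigma**2
    resp = np.exp(logp - logp.max(1, keepdims=True))
    resp /= resp.sum(1, keepdims=True)
    # Zeroth-order (soft occupancy) and first-order (mean residual) stats
    w = resp.mean(0)                                                  # (K,)
    mu = (resp[:, :, None] * (points[:, None, :] - centres)).mean(0)  # (K, 3)
    return np.concatenate([w, mu.ravel()])

feat = grid_fisher_features(np.random.rand(256, 3), k=4)
assert feat.shape == (4**3 * 4,)  # fixed size regardless of point count
```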
no code implementations • 14 Feb 2017 • Yizhak Ben-Shabat, Tamar Avraham, Michael Lindenbaum, Anath Fischer
This 3D information enables a conceptual change that can improve over-segmentation, which traditionally relies mainly on color information, and can be used to generate clusters of points that we call super-points.