Search Results for author: Shmuel Peleg

Found 22 papers, 5 papers with code

GEFF: Improving Any Clothes-Changing Person ReID Model using Gallery Enrichment with Face Features

1 code implementation 24 Nov 2022 Daniel Arkushin, Bar Cohen, Shmuel Peleg, Ohad Fried

Combined with the latest ReID models, our method achieves new SOTA results on the PRCC, LTCC, CCVID, LaST and VC-Clothes benchmarks and the proposed 42Street dataset.

Person Re-Identification

Deep Audio Waveform Prior

1 code implementation 21 Jul 2022 Arnon Turetzky, Tzvi Michelson, Yossi Adi, Shmuel Peleg

A network with relevant deep priors is likely to generate a cleaner version of the signal before converging on the corrupted signal.

Audio inpainting Audio Source Separation +2
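
The snippet above describes the deep-prior idea: an untrained network fit to a corrupted waveform tends to reproduce the clean structure before it overfits the corruption, so early stopping yields a denoised signal. Below is a minimal PyTorch sketch of that idea; the tiny network, toy signal, and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
# Minimal deep-prior sketch (not the paper's architecture): fit an untrained
# 1-D conv net to a corrupted waveform and stop early, relying on the network
# reproducing the clean structure before it overfits the corruption.
import torch
import torch.nn as nn

torch.manual_seed(0)
clean = torch.sin(torch.linspace(0, 200, 16000)).unsqueeze(0).unsqueeze(0)
noisy = clean + 0.3 * torch.randn_like(clean)            # corrupted observation

net = nn.Sequential(                                      # hypothetical tiny prior net
    nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 1, 9, padding=4),
)
z = torch.randn_like(noisy)                               # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                                   # early stopping is the key knob
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()                 # fit the *corrupted* signal
    loss.backward()
    opt.step()

denoised = net(z).detach()                                # ideally closer to `clean` than `noisy`
```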

A Peek at Peak Emotion Recognition

no code implementations 19 May 2022 Tzvi Michelson, Hillel Aviezer, Shmuel Peleg

Despite much progress in the field of facial expression recognition, little attention has been paid to the recognition of peak emotion.

Emotion Recognition Facial Expression Recognition +1

Membership Inference Attacks are Easier on Difficult Problems

1 code implementation ICCV 2021 Avital Shafran, Shmuel Peleg, Yedid Hoshen

Membership inference attacks (MIA) try to detect whether data samples were used to train a neural network model, e.g., to detect copyright abuses.

Image Segmentation Medical Image Segmentation +4
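
For context on how such attacks work, here is a hedged sketch of the classic loss-threshold membership inference baseline, not necessarily the attack analyzed in the paper: samples on which the target model has unusually low loss are guessed to be training members. The function names and the threshold are illustrative assumptions.

```python
# A minimal loss-threshold membership-inference sketch (the classic baseline):
# samples with unusually low loss under the target model are guessed to be
# training members.
import torch
import torch.nn.functional as F

def membership_scores(model, inputs, labels):
    """Return per-sample cross-entropy losses; lower loss => more likely a member."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        return F.cross_entropy(logits, labels, reduction="none")

def predict_members(model, inputs, labels, threshold):
    """Guess membership by thresholding the loss (threshold tuned, e.g., on shadow models)."""
    return membership_scores(model, inputs, labels) < threshold
```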

Crypto-Oriented Neural Architecture Design

1 code implementation 27 Nov 2019 Avital Shafran, Gil Segev, Shmuel Peleg, Yedid Hoshen

As neural networks revolutionize many applications, significant privacy conflicts between model users and providers emerge.

Dynamic Temporal Alignment of Speech to Lips

1 code implementation 19 Aug 2018 Tavi Halperin, Ariel Ephrat, Shmuel Peleg

This alignment is based on deep audio-visual features, mapping the lips video and the speech signal to a shared representation.

Constrained Lip-synchronization Video Alignment
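
As a rough illustration of the alignment step described above, the sketch below runs dynamic time warping over audio and video feature sequences assumed to already live in a shared embedding space; the embedding networks themselves are not shown, and this is not claimed to be the paper's exact procedure.

```python
# Hedged sketch of the alignment step only: dynamic time warping over
# precomputed audio and video embeddings in a shared feature space.
import numpy as np

def dtw_align(audio_feats: np.ndarray, video_feats: np.ndarray):
    """audio_feats: (T_a, D), video_feats: (T_v, D); returns aligned index pairs."""
    T_a, T_v = len(audio_feats), len(video_feats)
    cost = np.linalg.norm(audio_feats[:, None, :] - video_feats[None, :, :], axis=-1)
    acc = np.full((T_a + 1, T_v + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T_a + 1):
        for j in range(1, T_v + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # audio advances, video frame repeats
                acc[i, j - 1],      # video advances, audio frame repeats
                acc[i - 1, j - 1],  # both advance
            )
    # Backtrack to recover the aligned (audio_index, video_index) path.
    path, i, j = [], T_a, T_v
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```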

Visual Speech Enhancement

no code implementations 23 Nov 2017 Aviv Gabbay, Asaph Shamir, Shmuel Peleg

When video is shot in a noisy environment, the voice of a speaker seen in the video can be enhanced using the visible mouth movements, reducing background noise.

Lipreading Speech Enhancement

Seeing Through Noise: Visually Driven Speaker Separation and Enhancement

no code implementations 22 Aug 2017 Aviv Gabbay, Ariel Ephrat, Tavi Halperin, Shmuel Peleg

Isolating the voice of a specific person while filtering out other voices or background noises is challenging when video is shot in noisy environments.

Speaker Separation

Improved Speech Reconstruction from Silent Video

no code implementations 1 Aug 2017 Ariel Ephrat, Tavi Halperin, Shmuel Peleg

Speechreading is the task of inferring phonetic information from visually observed articulatory facial movements, and is notoriously difficult for humans to perform.

Vid2speech: Speech Reconstruction from Silent Video

no code implementations 2 Jan 2017 Ariel Ephrat, Shmuel Peleg

Speechreading is a notoriously difficult task for humans to perform.

Fundamental Matrices from Moving Objects Using Line Motion Barcodes

no code implementations 26 Jul 2016 Yoni Kasten, Gil Ben-Artzi, Shmuel Peleg, Michael Werman

Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes.
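
To make the barcode idea concrete, here is a hedged sketch assuming per-frame foreground (moving-object) masks are available: a line's motion barcode records, frame by frame, whether a moving object touches the line, and candidate epipolar-line pairs can be ranked by normalized correlation of their barcodes. The function names are illustrative, not the authors' code.

```python
# Minimal motion-barcode sketch, assuming foreground masks are given.
import numpy as np

def line_motion_barcode(foreground_masks, line_pixels):
    """foreground_masks: (num_frames, H, W) boolean; line_pixels: list of (row, col).
    Returns a binary vector over frames: 1 if a moving object touches the line."""
    rows, cols = zip(*line_pixels)
    return foreground_masks[:, rows, cols].any(axis=1).astype(float)

def barcode_similarity(b1, b2):
    """Normalized correlation between two motion barcodes (higher = more similar)."""
    b1 = b1 - b1.mean()
    b2 = b2 - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom > 0 else 0.0
```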

EgoSampling: Wide View Hyperlapse from Egocentric Videos

no code implementations 26 Apr 2016 Tavi Halperin, Yair Poleg, Chetan Arora, Shmuel Peleg

However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast-forwarded video useless.

Epipolar Geometry Based On Line Similarity

no code implementations 17 Apr 2016 Gil Ben-Artzi, Tavi Halperin, Michael Werman, Shmuel Peleg

This paper proposes a similarity measure between lines that indicates whether two lines are corresponding epipolar lines, enabling the epipolar line correspondences needed to compute epipolar geometry.

Stereo Matching Stereo Matching Hand

Camera Calibration from Dynamic Silhouettes Using Motion Barcodes

no code implementations CVPR 2016 Gil Ben-Artzi, Yoni Kasten, Shmuel Peleg, Michael Werman

The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.

Camera Calibration

Visual Learning of Arithmetic Operations

no code implementations 7 Jun 2015 Yedid Hoshen, Shmuel Peleg

This indicates that while some tasks may be easily learnable end-to-end, others may need to be broken into sub-tasks.

Live Video Synopsis for Multiple Cameras

no code implementations 20 May 2015 Yedid Hoshen, Shmuel Peleg

Video surveillance cameras generate most recorded video, and there is far more recorded video than operators can watch.

Decision Making

Compact CNN for Indexing Egocentric Videos

no code implementations 28 Apr 2015 Yair Poleg, Ariel Ephrat, Shmuel Peleg, Chetan Arora

Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from the current state-of-the-art.

Activity Recognition Optical Flow Estimation

EgoSampling: Fast-Forward and Stereo for Egocentric Videos

no code implementations CVPR 2015 Yair Poleg, Tavi Halperin, Chetan Arora, Shmuel Peleg

While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end.

Event Retrieval Using Motion Barcodes

no code implementations 3 Dec 2014 Gil Ben-Artzi, Michael Werman, Shmuel Peleg

We introduce a simple and effective method for retrieval of videos showing a specific event, even when the videos of that event were captured from significantly different viewpoints.

Retrieval

An Egocentric Look at Video Photographer Identity

no code implementations CVPR 2016 Yedid Hoshen, Shmuel Peleg

As head-worn cameras do not capture the photographer, it may seem that the anonymity of the photographer is preserved even when the video is publicly distributed.

Temporal Segmentation of Egocentric Videos

no code implementations CVPR 2014 Yair Poleg, Chetan Arora, Shmuel Peleg

Two sources of information for video segmentation are (i) the motion of the camera wearer, and (ii) the objects and activities recorded in the video.

Segmentation Video Segmentation +1
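
As a hedged illustration of cue (i), the sketch below computes a crude per-frame camera-wearer motion signal from dense optical flow (OpenCV's Farneback method), which a segmenter could smooth and threshold into temporal segments. This is only an assumption-laden sketch of the kind of motion feature involved, not the paper's method.

```python
# Crude camera-wearer motion cue: mean dense optical-flow magnitude per frame.
import cv2
import numpy as np

def wearer_motion_signal(video_path):
    """Return mean optical-flow magnitude per frame as a rough wearer-motion cue."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return np.array([])
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev_gray = gray
    cap.release()
    return np.array(signal)
```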
