Search Results for author: Daniel Harari

Found 11 papers, 2 papers with code

Why Not Use Your Textbook? Knowledge-Enhanced Procedure Planning of Instructional Videos

2 code implementations • 5 Mar 2024 • Kumaranage Ravindu Yasas Nagasinghe, Honglu Zhou, Malitha Gunawardhana, Martin Renqiang Min, Daniel Harari, Muhammad Haris Khan

This knowledge, sourced from training procedure plans and structured as a directed weighted graph, equips the agent to better navigate the complexities of step sequencing and its potential variations.

Tasks: Logical Sequence, Navigate
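
For readers curious how procedure knowledge "structured as a directed weighted graph" could look in practice, here is a minimal sketch, assuming the knowledge base is simply transition counts between step labels learned from training plans. The step names and scoring are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def build_step_graph(plans):
    """Count step-to-step transitions across all training procedure plans."""
    graph = defaultdict(lambda: defaultdict(int))
    for plan in plans:
        for a, b in zip(plan, plan[1:]):
            graph[a][b] += 1  # edge weight = observed transition frequency
    return graph

def next_step_scores(graph, step):
    """Normalize outgoing edge weights into transition probabilities."""
    total = sum(graph[step].values())
    return {s: w / total for s, w in graph[step].items()} if total else {}

# Hypothetical training plans (step names are made up for illustration).
plans = [
    ["crack eggs", "whisk", "heat pan", "pour batter"],
    ["crack eggs", "whisk", "add milk", "heat pan"],
]
graph = build_step_graph(plans)
print(next_step_scores(graph, "whisk"))
# {'heat pan': 0.5, 'add milk': 0.5} -- two plausible continuations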

A model for full local image interpretation

no code implementations • 17 Oct 2021 • Guy Ben-Yosef, Liav Assif, Daniel Harari, Shimon Ullman

We describe a computational model of humans' ability to provide a detailed interpretation of components in a scene.

Emergent Neural Network Mechanisms for Generalization to Objects in Novel Orientations

no code implementations • 28 Sep 2021 • Avi Cooper, Xavier Boix, Daniel Harari, Spandan Madan, Hanspeter Pfister, Tomotake Sasaki, Pawan Sinha

The capability of Deep Neural Networks (DNNs) to recognize objects in orientations outside the distribution of the training data is not well understood.
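
A minimal sketch of the kind of evaluation this question suggests: test a classifier on in-plane rotations held out of the training distribution and track accuracy per angle. The stub `model` and the choice of in-plane rotation are assumptions for illustration; the paper's actual protocol may differ.

```python
import numpy as np
from scipy.ndimage import rotate

def model(batch):
    # Placeholder classifier: random logits over 10 classes (assumption).
    return np.random.rand(len(batch), 10)

def accuracy_vs_orientation(images, labels, angles):
    """Classification accuracy at each rotation angle (degrees)."""
    results = {}
    for angle in angles:
        rotated = np.stack(
            [rotate(im, angle, reshape=False, order=1) for im in images]
        )
        preds = model(rotated).argmax(axis=1)
        results[angle] = float((preds == labels).mean())
    return results

images = np.random.rand(8, 32, 32)           # dummy test set
labels = np.random.randint(0, 10, size=8)
print(accuracy_vs_orientation(images, labels, angles=[0, 45, 90, 135, 180]))
```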

Using Motion and Internal Supervision in Object Recognition

no code implementations • 13 Dec 2018 • Daniel Harari

These two aspects are interrelated in the current study, since image motion is used for internal supervision, via the detection of spatiotemporal active-motion events and the use of tracking.

Tasks: Motion Segmentation, Object (+1 more)
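
As a toy illustration of motion acting as an internal teaching signal, the sketch below derives a pseudo-label bounding box from simple frame differencing, which could then supervise an appearance-based recognizer. The threshold and the event definition are assumptions, not the thesis's method.

```python
import numpy as np

def motion_pseudo_label(prev_frame, frame, thresh=0.1):
    """Return the bounding box of the moving region, or None if no motion."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    mask = diff > thresh * diff.max() if diff.max() > 0 else diff > 0
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no motion event detected in this frame pair
    return (ys.min(), xs.min(), ys.max(), xs.max())  # (top, left, bottom, right)

prev = np.zeros((64, 64))
curr = np.zeros((64, 64))
curr[20:30, 40:50] = 1.0  # a patch "moves" into view
print(motion_pseudo_label(prev, curr))  # (20, 40, 29, 49)
```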

Discovery and usage of joint attention in images

no code implementations • 10 Apr 2018 • Daniel Harari, Joshua B. Tenenbaum, Shimon Ullman

Second, we use a human study to demonstrate the sensitivity of humans to joint attention, suggesting that detecting such a configuration in an image can be useful for understanding the image, including the goals of the agents and their joint activity, and can therefore contribute to image captioning and related tasks.

Tasks: Image Captioning
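
A joint-attention configuration can be reduced to a geometric test: do two gaze rays (eye position plus gaze direction) approximately intersect at a point in front of both agents? The sketch below implements only that geometry; the paper's actual detector operates on image evidence, and the tolerance here is an assumption.

```python
import numpy as np

def gaze_rays_meet(p1, d1, p2, d2, tol=0.1):
    """Return the shared target point if both 2-D rays pass near it, else None."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve p1 + t1*d1 = p2 + t2*d2 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    if t[0] <= 0 or t[1] <= 0:
        return None  # target would lie behind one of the agents
    q1, q2 = p1 + t[0] * d1, p2 + t[1] * d2
    if np.linalg.norm(q1 - q2) > tol:
        return None  # rays pass too far apart to share a target
    return (q1 + q2) / 2

p1, d1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
p2, d2 = np.array([4.0, 0.0]), np.array([-1.0, 1.0])
print(gaze_rays_meet(p1, d1, p2, d2))  # approx [2. 2.]
```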

Large Field and High Resolution: Detecting Needle in Haystack

no code implementations • 10 Apr 2018 • Hadar Gorodissky, Daniel Harari, Shimon Ullman

The growing use of convolutional neural networks (CNNs) for a broad range of visual tasks, including tasks involving fine details, raises the problem of applying such networks to a large field of view, since the amount of computation increases significantly with the number of pixels.

Tasks: Vocal Bursts Intensity Prediction
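
A back-of-the-envelope sketch of why field of view is expensive for a CNN: per-layer multiply-accumulates grow with the number of output pixels, so cost scales roughly quadratically in image side length. The layer shape below is illustrative, not from the paper.

```python
def conv_flops(side, c_in=3, c_out=64, k=3):
    """Approximate MACs for one stride-1 conv layer on a side x side image."""
    return side * side * c_out * c_in * k * k

for side in (224, 448, 896, 1792):
    print(f"{side:>5} px -> {conv_flops(side) / 1e9:.2f} GMACs")
# Each doubling of the side length quadruples the cost.
```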

Measuring and modeling the perception of natural and unconstrained gaze in humans and machines

no code implementations • 29 Nov 2016 • Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman

How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?
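
Measuring gaze perception accuracy implies a metric like the one sketched below: the angle between an estimated and a ground-truth gaze direction, with directions built from yaw (head/eye left-right) and pitch (up-down). The parameterization and example values are assumptions for illustration.

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angle in degrees between two gaze direction vectors."""
    est, gt = est / np.linalg.norm(est), gt / np.linalg.norm(gt)
    cos = np.clip(np.dot(est, gt), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def gaze_from_angles(yaw_deg, pitch_deg):
    """Unit gaze vector from yaw (left/right) and pitch (up/down)."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    return np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     np.cos(yaw) * np.cos(pitch)])

est = gaze_from_angles(12.0, -4.0)   # hypothetical estimate
gt = gaze_from_angles(10.0, -5.0)    # hypothetical ground truth
print(f"angular error: {angular_error_deg(est, gt):.2f} deg")
```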

Discovering containment: from infants to machines

no code implementations • 30 Oct 2016 • Shimon Ullman, Nimrod Dorfman, Daniel Harari

Current artificial learning systems can recognize thousands of visual categories, or play Go at a champion's level, but cannot explain infants' learning, in particular the ability to learn complex concepts without guidance, in a specific order.

Do You See What I Mean? Visual Resolution of Linguistic Ambiguities

no code implementations • EMNLP 2015 • Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, Shimon Ullman

Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.

Tasks: Sentence

When Computer Vision Gazes at Cognition

1 code implementation • 8 Dec 2014 • Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman

(1) Human accuracy in discriminating targets 8°–10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans significantly outperform the current model.

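
A toy sketch of the discrimination score behind result (1): an estimate counts as correct when the angularly nearest candidate target is the true one. The 1-D target layout and noise level are synthetic assumptions, not the paper's data.

```python
import numpy as np

def discriminate(gaze_angles, target_angles, true_idx):
    """Fraction of trials where the angularly nearest target is the true one."""
    correct = 0
    for gaze, truth in zip(gaze_angles, true_idx):
        picked = int(np.argmin(np.abs(np.asarray(target_angles) - gaze)))
        correct += picked == truth
    return correct / len(gaze_angles)

targets = [0.0, 9.0, 18.0, 27.0]   # candidate targets ~9 deg apart (1-D toy)
rng = np.random.default_rng(0)
true_idx = rng.integers(0, 4, size=200)
noisy = np.array(targets)[true_idx] + rng.normal(0, 8.0, size=200)  # noisy gaze
print(f"accuracy: {discriminate(noisy, targets, true_idx):.0%}")
```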
