2 code implementations • 5 Mar 2024 • Kumaranage Ravindu Yasas Nagasinghe, Honglu Zhou, Malitha Gunawardhana, Martin Renqiang Min, Daniel Harari, Muhammad Haris Khan
This knowledge, sourced from training procedure plans and structured as a directed weighted graph, equips the agent to better navigate the complexities of step sequencing and its potential variations.
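The abstract describes procedural knowledge stored as a directed weighted graph over steps. As a minimal illustrative sketch (the class and plan strings below are hypothetical, not from the paper), such a graph can be built by counting observed step-to-step transitions across training plans and ranking candidate next steps by frequency:

```python
# Hypothetical sketch: a procedural-knowledge graph where nodes are steps
# and edge weights count how often one step follows another in training plans.
from collections import defaultdict

class StepGraph:
    def __init__(self):
        # adjacency: step -> {next_step: observed transition count}
        self.edges = defaultdict(lambda: defaultdict(int))

    def add_plan(self, steps):
        """Record each consecutive step transition from one plan."""
        for a, b in zip(steps, steps[1:]):
            self.edges[a][b] += 1

    def likely_next(self, step):
        """Return candidate next steps, most frequently observed first."""
        nxt = self.edges[step]
        return sorted(nxt, key=nxt.get, reverse=True)

g = StepGraph()
g.add_plan(["crack eggs", "whisk", "fry"])
g.add_plan(["crack eggs", "whisk", "season", "fry"])
print(g.likely_next("whisk"))  # "fry" observed twice, "season" once
```

The weights make step-order variations explicit: several outgoing edges from one step mean the procedure admits more than one valid continuation.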
no code implementations • 17 Oct 2021 • Guy Ben-Yosef, Liav Assif, Daniel Harari, Shimon Ullman
We describe a computational model of humans' ability to provide a detailed interpretation of components in a scene.
no code implementations • 28 Sep 2021 • Avi Cooper, Xavier Boix, Daniel Harari, Spandan Madan, Hanspeter Pfister, Tomotake Sasaki, Pawan Sinha
The capability of Deep Neural Networks (DNNs) to recognize objects in orientations outside the distribution of the training data is not well understood.
no code implementations • 9 Jun 2020 • Hanna Benoni, Daniel Harari, Shimon Ullman
Subjects were assigned to one of nine exposure conditions: 200, 500, 1000, or 2000 ms, each with or without masking, as well as unlimited viewing time.
no code implementations • 13 Dec 2018 • Daniel Harari
These two aspects are inter-related in the current study, since image motion is used for internal supervision, via the detection of spatiotemporal events of active-motion and the use of tracking.
no code implementations • 10 Apr 2018 • Daniel Harari, Joshua B. Tenenbaum, Shimon Ullman
Second, we use a human study to demonstrate the sensitivity of humans to joint attention, suggesting that the detection of such a configuration in an image can be useful for understanding the image, including the goals of the agents and their joint activity, and therefore can contribute to image captioning and related tasks.
no code implementations • 10 Apr 2018 • Hadar Gorodissky, Daniel Harari, Shimon Ullman
The growing use of convolutional neural networks (CNN) for a broad range of visual tasks, including tasks involving fine details, raises the problem of applying such networks to a large field of view, since the amount of computations increases significantly with the number of pixels.
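The scaling problem the abstract points to can be made concrete with back-of-the-envelope arithmetic (an illustrative sketch, not a calculation from the paper): for a stride-1 convolutional layer, compute grows linearly with pixel count and hence quadratically with the side length of the field of view.

```python
# Illustrative arithmetic: approximate multiply-accumulate count for one
# stride-1 "same"-padded convolutional layer (layer shape is hypothetical).
def conv_flops(height, width, in_ch, out_ch, kernel=3):
    return height * width * in_ch * out_ch * kernel * kernel

small = conv_flops(224, 224, 3, 64)   # typical CNN input resolution
large = conv_flops(896, 896, 3, 64)   # 4x larger field of view per side
print(large / small)  # 16.0 -- 4x the side length costs 16x the compute
```

This quadratic growth is what motivates alternatives to processing the whole high-resolution field of view at once.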
no code implementations • 29 Nov 2016 • Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman
How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?
no code implementations • 30 Oct 2016 • Shimon Ullman, Nimrod Dorfman, Daniel Harari
Current artificial learning systems can recognize thousands of visual categories, or play Go at a champion's level, but cannot explain infants' learning, in particular the ability to learn complex concepts without guidance, in a specific order.
no code implementations • EMNLP 2015 • Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, Shimon Ullman
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.
1 code implementation • 8 Dec 2014 • Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman
(1) Human accuracy in discriminating targets 8°–10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans significantly outperform the current model.