1 code implementation • CVPR 2023 • S. Mahdi H. Miangoleh, Zoya Bylinskii, Eric Kee, Eli Shechtman, Yağız Aksoy
We thus offer a viable solution for automating image enhancement and photo cleanup operations.
no code implementations • 23 Jun 2022 • Zoya Bylinskii, Laura Herman, Aaron Hertzmann, Stefanie Hutka, Yile Zhang
We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction.
no code implementations • NeurIPS Workshop SVRHM 2021 • Ard Kastrati, Zoya Bylinskii, Eli Shechtman
Dozens of saliency models have been designed over the last few decades, targeted at diverse applications ranging from image compression and retargeting to robot navigation, surveillance, and distractor detection.
no code implementations • 1 Apr 2021 • Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, Aude Oliva
The pixels in an image, and the objects, scenes, and actions that they compose, determine whether an image will be memorable or forgettable.
no code implementations • 21 Aug 2020 • Xi Wang, Zoya Bylinskii, Aaron Hertzmann, Robert Pepperell
It has long been hypothesized that perceptual ambiguities play an important role in aesthetic experience: a work with some ambiguity engages a viewer more than one that does not.
no code implementations • ECCV 2020 • Youssef Alami Mejjati, Celso F. Gomez, Kwang In Kim, Eli Shechtman, Zoya Bylinskii
Extensions of our model allow for multi-style edits and the ability to both increase and attenuate attention in an image region.
no code implementations • 7 Aug 2020 • Camilo Fosco, Vincent Casser, Amish Kumar Bedi, Peter O'Donovan, Aaron Hertzmann, Zoya Bylinskii
This paper introduces a Unified Model of Saliency and Importance (UMSI), which learns to predict visual importance in input graphic designs, and saliency in natural images, along with a new dataset and applications.
no code implementations • 11 Oct 2018 • Ali Borji, Hamed R.-Tavakoli, Zoya Bylinskii
In this review, we examine the recent progress in saliency prediction and propose several avenues for future research.
1 code implementation • 27 Jul 2018 • Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand
While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or `icons'.
1 code implementation • 26 Sep 2017 • Zoya Bylinskii, Sami Alsheikh, Spandan Madan, Adria Recasens, Kimberli Zhong, Hanspeter Pfister, Fredo Durand, Aude Oliva
And second, we use these predicted text tags as a supervisory signal to localize the most diagnostic visual elements from within the infographic, i.e., visual hashtags.
1 code implementation • 8 Aug 2017 • Zoya Bylinskii, Nam Wook Kim, Peter O'Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, Aaron Hertzmann
Our models are neural networks trained on human clicks and importance annotations on hundreds of designs.
no code implementations • 16 Feb 2017 • Nam Wook Kim, Zoya Bylinskii, Michelle A. Borkin, Krzysztof Z. Gajos, Aude Oliva, Fredo Durand, Hanspeter Pfister
In this paper, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine.
1 code implementation • 12 Apr 2016 • Zoya Bylinskii, Tilke Judd, Aude Oliva, Antonio Torralba, Frédo Durand
How best to evaluate a saliency model's ability to predict where humans look in images is an open research question.
no code implementations • 25 Nov 2013 • Agata Lapedriza, Hamed Pirsiavash, Zoya Bylinskii, Antonio Torralba
When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others.