Search Results for author: Zoya Bylinskii

Found 14 papers, 5 papers with code

Towards Better User Studies in Computer Graphics and Vision

no code implementations23 Jun 2022 Zoya Bylinskii, Laura Herman, Aaron Hertzmann, Stefanie Hutka, Yile Zhang

We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction.

KDSalBox: A toolbox of efficient knowledge-distilled saliency models

no code implementations NeurIPS Workshop SVRHM 2021 Ard Kastrati, Zoya Bylinskii, Eli Shechtman

Dozens of saliency models have been designed over the last few decades, targeted at diverse applications ranging from image compression and retargeting to robot navigation, surveillance, and distractor detection.

Image Compression · Robot Navigation

Memorability: An image-computable measure of information utility

no code implementations1 Apr 2021 Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, Aude Oliva

The pixels in an image, and the objects, scenes, and actions that they compose, determine whether an image will be memorable or forgettable.

Toward Quantifying Ambiguities in Artistic Images

no code implementations21 Aug 2020 Xi Wang, Zoya Bylinskii, Aaron Hertzmann, Robert Pepperell

It has long been hypothesized that perceptual ambiguities play an important role in aesthetic experience: a work with some ambiguity engages a viewer more than one that does not.

Look here! A parametric learning based approach to redirect visual attention

no code implementations ECCV 2020 Youssef Alami Mejjati, Celso F. Gomez, Kwang In Kim, Eli Shechtman, Zoya Bylinskii

Extensions of our model allow for multi-style edits and the ability to both increase and attenuate attention in an image region.

Marketing

Predicting Visual Importance Across Graphic Design Types

no code implementations7 Aug 2020 Camilo Fosco, Vincent Casser, Amish Kumar Bedi, Peter O'Donovan, Aaron Hertzmann, Zoya Bylinskii

This paper introduces a Unified Model of Saliency and Importance (UMSI), which learns to predict visual importance in input graphic designs, and saliency in natural images, along with a new dataset and applications.

Bottom-up Attention, Models of

no code implementations11 Oct 2018 Ali Borji, Hamed R.-Tavakoli, Zoya Bylinskii

In this review, we examine recent progress in saliency prediction and propose several avenues for future research.

Saliency Prediction

Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics

1 code implementation27 Jul 2018 Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Frédo Durand

While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'.

Synthetic Data Generation

Understanding Infographics through Textual and Visual Tag Prediction

1 code implementation26 Sep 2017 Zoya Bylinskii, Sami Alsheikh, Spandan Madan, Adrià Recasens, Kimberli Zhong, Hanspeter Pfister, Frédo Durand, Aude Oliva

Second, we use these predicted text tags as a supervisory signal to localize the most diagnostic visual elements within the infographic, i.e., visual hashtags.

TAG

BubbleView: an interface for crowdsourcing image importance maps and tracking visual attention

no code implementations16 Feb 2017 Nam Wook Kim, Zoya Bylinskii, Michelle A. Borkin, Krzysztof Z. Gajos, Aude Oliva, Fredo Durand, Hanspeter Pfister

In this paper, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine.

What do different evaluation metrics tell us about saliency models?

1 code implementation12 Apr 2016 Zoya Bylinskii, Tilke Judd, Aude Oliva, Antonio Torralba, Frédo Durand

How best to evaluate a saliency model's ability to predict where humans look in images is an open research question.
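One of the metrics commonly compared in this line of work is Normalized Scanpath Saliency (NSS). As an illustrative sketch only (not code from the paper), NSS standardizes the predicted saliency map and averages its values at human fixation locations; the function name and array conventions below are assumptions for illustration:

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency (illustrative sketch).

    saliency_map: 2D float array of predicted saliency values.
    fixation_map: 2D binary array, 1 at human fixation locations.
    Returns the mean standardized saliency value at fixations;
    higher is better, 0 corresponds to chance.
    """
    # Standardize the saliency map to zero mean, unit variance.
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    # Average the standardized values at the fixated pixels.
    return s[fixation_map.astype(bool)].mean()
```

A model that concentrates saliency mass exactly where people fixate scores well above zero, while a map that misses the fixations scores near or below zero.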

Are all training examples equally valuable?

no code implementations25 Nov 2013 Agata Lapedriza, Hamed Pirsiavash, Zoya Bylinskii, Antonio Torralba

When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others.
