no code implementations • ECCV 2020 • Tsu-Jui Fu, Xin Eric Wang, Matthew F. Peterson, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang
In particular, we present a model-agnostic adversarial path sampler (APS) that learns, based on navigation performance, to sample challenging paths that force the navigator to improve.
no code implementations • 28 Jan 2022 • Weimin Zhou, Miguel P. Eckstein
We demonstrate that the search strategy corresponding to the Q-network is consistent with the IS search strategy.
no code implementations • 29 May 2021 • Aditya Jonnalagadda, William Yang Wang, B. S. Manjunath, Miguel P. Eckstein
We propose the Foveated Transformer (FoveaTer) model, which uses pooling regions and eye movements to perform object classification tasks with a Vision Transformer architecture.
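The core idea of foveated pooling can be illustrated with a minimal numpy sketch: average-pooling windows that grow with eccentricity from the fixation point, so the representation is fine near fixation and coarse in the periphery. The function name, stride, and growth constants below are hypothetical, not the actual FoveaTer layout.

```python
import numpy as np

def foveated_pool(img, fix_y, fix_x, stride=8, base_r=2, growth=0.25):
    """Pool a 2D grayscale image with windows whose radius grows
    linearly with eccentricity from the fixation point.
    All parameters are illustrative, not FoveaTer's actual values."""
    H, W = img.shape
    ys = np.arange(stride // 2, H, stride)
    xs = np.arange(stride // 2, W, stride)
    out = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            ecc = np.hypot(y - fix_y, x - fix_x)   # eccentricity in pixels
            r = int(base_r + growth * ecc)         # window radius grows with ecc
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            out[i, j] = img[y0:y1, x0:x1].mean()   # average pooling
    return out
```

Because the windows near fixation are small and the peripheral ones large, the pooled grid spends most of its capacity around the fixated location, which is the property the model exploits across eye movements.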
no code implementations • 29 Apr 2021 • Shravan Murlidaran, William Yang Wang, Miguel P. Eckstein
Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes.
no code implementations • 17 Apr 2021 • Nicole X. Han, William Yang Wang, Miguel P. Eckstein
Making accurate inferences about other individuals' locus of attention is essential for human social interactions and will be important for AI to effectively interact with humans.
no code implementations • CVPR 2022 • Tsu-Jui Fu, Xin Eric Wang, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang
LBVE has two key properties: 1) the scenario of the source video is preserved rather than generating a completely different video; 2) the semantics are presented differently in the target video, and all changes are controlled by the given instruction.
no code implementations • 9 Feb 2021 • Miguel A. Lago, Craig K. Abbey, Miguel P. Eckstein
We show that an index of detectability across eccentricities, weighted by observers' eye movement patterns, best predicted human 2D vs. 3D search performance for a small microcalcification-like signal and a larger mass-like signal.
no code implementations • 3 Nov 2020 • Miguel A. Lago, Craig K. Abbey, Miguel P. Eckstein
Here, we compared standard linear model observers (ideal observers, a non-pre-whitening matched filter with eye filter, and various versions of Channelized Hotelling models) to human performance searching in 3D 1/f$^{2.8}$ filtered noise images, and assessed their relationship to the more traditional location-known-exactly detection tasks and 2D search.
no code implementations • CVPR 2019 • Arturo Deza, Amit Surana, Miguel P. Eckstein
With the advent of modern expert systems driven by deep learning that supplement human experts (e.g., radiologists, dermatologists, surveillance scanners), we analyze how and when such expert systems enhance human performance in a fine-grained small-target visual search task.
1 code implementation • NeurIPS 2016 • Arturo Deza, Miguel P. Eckstein
Here, we introduce a new foveated clutter model to predict the detrimental effects of clutter on target search, using a forced-fixation search task.
1 code implementation • 4 Aug 2014 • Emre Akbas, Miguel P. Eckstein
Similar to the human visual system, the FOD has higher resolution at the fovea and lower resolution at the visual periphery.
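The fovea/periphery resolution falloff described here can be sketched in numpy by blending progressively coarser copies of an image, picking a coarser copy at larger eccentricity. The ring width and downsampling factors below are illustrative assumptions, not the FOD's actual parameters, and the helper assumes each factor divides the image size.

```python
import numpy as np

def downup(img, f):
    """Block-average by factor f, then nearest-neighbor upsample back.
    Assumes f divides both image dimensions."""
    H, W = img.shape
    small = img.reshape(H // f, f, W // f, f).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

def foveate(img, fix, factors=(1, 2, 4, 8), ring=32):
    """Simulate a foveated input: full resolution near the fixation
    point, coarser resolution in concentric rings farther away.
    Ring width and factors are illustrative, not the FOD's values."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    ecc = np.hypot(yy - fix[0], xx - fix[1])            # per-pixel eccentricity
    levels = [img if f == 1 else downup(img, f) for f in factors]
    idx = np.clip((ecc // ring).astype(int), 0, len(factors) - 1)
    return np.choose(idx, levels)                       # pick level per ring
```

A detector run on such an input sees the fixated region at full resolution while peripheral content is summarized coarsely, mirroring the retina-like sampling the FOD exploits.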