no code implementations • 29 Jan 2024 • Serena Bono, Spandan Madan, Ishaan Grover, Mao Yasueda, Cynthia Breazeal, Hanspeter Pfister, Gabriel Kreiman
Here we present a new methodology to evaluate such generalization of RL agents under small shifts in the transition probabilities.
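As an illustrative sketch of this kind of evaluation (not the paper's actual protocol — the perturbation scheme, MDP, and all names below are assumptions), one can perturb a tabular MDP's transition probabilities slightly and measure how much a fixed policy's value changes:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_transitions(P, epsilon, rng):
    """Add small noise to a transition tensor P[s, a, s'] and renormalize.

    Illustrative only: the paper's exact perturbation scheme may differ.
    """
    noise = rng.uniform(0, epsilon, size=P.shape)
    P_shifted = P + noise
    return P_shifted / P_shifted.sum(axis=-1, keepdims=True)

def evaluate_policy(P, R, policy, gamma=0.9, n_iter=200):
    """Tabular policy evaluation by iterative Bellman backups."""
    n_states = P.shape[0]
    V = np.zeros(n_states)
    for _ in range(n_iter):
        # V(s) = R(s, pi(s)) + gamma * sum_s' P(s'|s, pi(s)) V(s')
        V = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                      for s in range(n_states)])
    return V

# Tiny random MDP: 3 states, 2 actions
P = rng.dirichlet(np.ones(3), size=(3, 2))   # P[s, a] is a distribution over s'
R = rng.uniform(size=(3, 2))
policy = np.array([0, 1, 0])

v_train = evaluate_policy(P, R, policy)
v_test = evaluate_policy(perturb_transitions(P, 0.05, rng), R, policy)
gap = np.abs(v_train - v_test).max()  # value gap under the small shift
```

The gap between the two value estimates serves as a simple proxy for how sensitive the agent is to the transition-probability shift.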
1 code implementation • 20 Mar 2023 • Trenton Bricken, Xander Davies, Deepak Singh, Dmitry Krotov, Gabriel Kreiman
Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving.
1 code implementation • 28 Feb 2023 • Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
We create a reusable Transformer, BrainBERT, for intracranial recordings bringing modern representation learning approaches to neuroscience.
no code implementations • 10 Feb 2023 • Ravi Srinivasan, Francesca Mignacco, Martino Sorbaro, Maria Refinetti, Avi Cooper, Gabriel Kreiman, Giorgia Dellaferrera
"Forward-only" algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of solving the biologically unrealistic aspects of backpropagation.
1 code implementation • ICCV 2023 • Parantak Singh, You Li, Ankur Sikarwar, Weixian Lei, Daniel Gao, Morgan Bruce Talbot, Ying Sun, Mike Zheng Shou, Gabriel Kreiman, Mengmi Zhang
For example, when we learn mathematics at school, we build upon our knowledge of addition to learn multiplication.
no code implementations • 24 Nov 2022 • Zhiwei Ding, Xuezhe Ren, Erwan David, Melissa Vo, Gabriel Kreiman, Mengmi Zhang
Target modulation is computed as patch-wise local relevance between the target and search images, whereas contextual modulation is applied in a global fashion.
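A minimal sketch of patch-wise local relevance (the feature dimensions, the cosine-similarity choice, and all names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def target_modulation(target_feat, search_feats, eps=1e-8):
    """Patch-wise local relevance: cosine similarity between the target's
    feature vector and each spatial patch of the search image's feature map.

    target_feat: (C,) vector; search_feats: (H, W, C) map. Returns (H, W).
    """
    t = target_feat / (np.linalg.norm(target_feat) + eps)
    s = search_feats / (np.linalg.norm(search_feats, axis=-1, keepdims=True) + eps)
    return s @ t

target = rng.normal(size=32)
search = rng.normal(size=(7, 7, 32))
search[3, 4] = 5.0 * target              # plant the target at one location
relevance = target_modulation(target, search)
peak = np.unravel_index(relevance.argmax(), relevance.shape)
```

The relevance map peaks at the planted target location, which is the behavior a local (patch-wise) target modulation is meant to capture; a global contextual modulation would instead reweight the whole map.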
no code implementations • 23 Nov 2022 • Mengmi Zhang, Giorgia Dellaferrera, Ankur Sikarwar, Marcelo Armendariz, Noga Mudrik, Prachi Agrawal, Spandan Madan, Andrei Barbu, Haochen Yang, Tanishq Kumar, Meghna Sadwani, Stella Dellaferrera, Michele Pizzochero, Hanspeter Pfister, Gabriel Kreiman
To address this question, we turn to the Turing test and systematically benchmark current AIs in their abilities to imitate humans.
no code implementations • 23 Nov 2022 • Xiao Liu, Ankur Sikarwar, Gabriel Kreiman, Zenglin Shi, Mengmi Zhang
To better accommodate the object-centric nature of current downstream tasks such as object recognition and detection, various methods have been proposed to suppress contextual biases or disentangle objects from contexts.
2 code implementations • 5 Sep 2022 • Stephen Casper, Taylor Killian, Gabriel Kreiman, Dylan Hadfield-Menell
In this work, we study white-box adversarial policies and show that having access to a target agent's internal state can be useful for identifying its vulnerabilities.
1 code implementation • 15 Jun 2022 • Spandan Madan, You Li, Mengmi Zhang, Hanspeter Pfister, Gabriel Kreiman
We present a new perspective on bridging the generalization gap between biological and computer vision -- mimicking the human visual diet.
1 code implementation • 27 Jan 2022 • Giorgia Dellaferrera, Gabriel Kreiman
Supervised learning in artificial neural networks typically relies on backpropagation, where weight updates are computed from the gradients of an error function, and those gradients are propagated sequentially from the output layer back to the input layer.
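The sequential output-to-input gradient flow described above can be made concrete with a minimal two-layer network trained by hand-written backpropagation (sizes, learning rate, and the squared-error loss are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 8))          # batch of 4 inputs
y = rng.normal(size=(4, 2))          # targets
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))
lr = 0.1

loss0 = 0.5 * np.mean((np.maximum(0.0, x @ W1) @ W2 - y) ** 2)

for _ in range(100):
    # Forward pass
    h = np.maximum(0.0, x @ W1)      # ReLU hidden layer
    y_hat = h @ W2
    err = y_hat - y                  # gradient of 0.5*||y_hat - y||^2 at the output

    # Backward pass: the error signal travels from the output layer ...
    dW2 = h.T @ err
    dh = err @ W2.T                  # ... through the hidden layer ...
    dW1 = x.T @ (dh * (h > 0))       # ... back to the input-layer weights.

    W2 -= lr * dW2
    W1 -= lr * dW1

loss = 0.5 * np.mean((np.maximum(0.0, x @ W1) @ W2 - y) ** 2)
```

This two-stage structure (forward pass, then a backward pass reusing the forward activations) is exactly the biologically questionable aspect that forward-only alternatives try to remove.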
no code implementations • 11 Jan 2022 • Ankur Sikarwar, Gabriel Kreiman
In recent years, multi-modal transformers have shown significant progress in Vision-Language tasks, such as Visual Question Answering (VQA), outperforming previous architectures by a considerable margin.
2 code implementations • 7 Oct 2021 • Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks at the ImageNet scale.
1 code implementation • NeurIPS 2021 • Shashi Kant Gupta, Mengmi Zhang, Chia-Chien Wu, Jeremy M. Wolfe, Gabriel Kreiman
To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found.
1 code implementation • 19 Apr 2021 • Guy Ben-Yosef, Gabriel Kreiman, Shimon Ullman
In human vision, objects and their parts can be recognized from purely spatial or purely temporal information, but the mechanisms that integrate space and time are poorly understood.
1 code implementation • 6 Apr 2021 • Morgan B. Talbot, Rushikesh Zawar, Rohil Badkundri, Mengmi Zhang, Gabriel Kreiman
To address the limited number of existing online stream learning datasets, we introduce two new benchmarks by adapting existing datasets for stream learning.
1 code implementation • ICCV 2021 • Philipp Bomatter, Mengmi Zhang, Dimitar Karev, Spandan Madan, Claire Tseng, Gabriel Kreiman
Our model captures useful information for contextual reasoning, enabling human-level performance and better robustness in out-of-context conditions compared to baseline models across OCD and other out-of-context datasets.
1 code implementation • 5 Jan 2021 • Mengmi Zhang, Marcelo Armendariz, Will Xiao, Olivia Rose, Katarina Bendtz, Margaret Livingstone, Carlos Ponce, Gabriel Kreiman
Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution.
no code implementations • 11 Nov 2020 • Li Yuan, Will Xiao, Giorgia Dellaferrera, Gabriel Kreiman, Francis E. H. Tay, Jiashi Feng, Margaret S. Livingstone
Here we propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
1 code implementation • 25 May 2020 • Mengmi Zhang, Gabriel Kreiman
Using those error fixations, we developed a model (InferNet) to infer what the target was.
1 code implementation • CVPR 2020 • Vincent Jacquot, Zhuofan Ying, Gabriel Kreiman
Deep Learning has driven recent and exciting progress in computer vision, instilling the belief that these algorithms could solve any visual task.
1 code implementation • 10 Dec 2019 • Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman
We identify two distinct types of "frivolous" units that proliferate when the network's width is increased: prunable units which can be dropped out of the network without significant change to the output and redundant units whose activities can be expressed as a linear combination of others.
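A simple diagnostic in the spirit of the redundancy criterion (an illustrative sketch; the paper's exact criteria may differ) regresses each unit's activity on all other units and inspects the relative reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

def redundancy_residuals(acts):
    """For each unit, the relative error of reconstructing its activity
    as a linear combination of the other units' activities.

    acts: (n_samples, n_units) activation matrix. Returns (n_units,).
    """
    n_units = acts.shape[1]
    res = np.empty(n_units)
    for i in range(n_units):
        others = np.delete(acts, i, axis=1)
        coef, *_ = np.linalg.lstsq(others, acts[:, i], rcond=None)
        recon = others @ coef
        res[i] = np.linalg.norm(acts[:, i] - recon) / np.linalg.norm(acts[:, i])
    return res

acts = rng.normal(size=(100, 5))
# Append a perfectly redundant unit: unit 5 = unit 0 + unit 1
acts = np.hstack([acts, acts[:, :2].sum(axis=1)[:, None]])
res = redundancy_residuals(acts)
```

A residual near zero flags a redundant unit, while units carrying independent signal keep a large residual.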
1 code implementation • CVPR 2020 • Mengmi Zhang, Claire Tseng, Gabriel Kreiman
To model the role of contextual information in visual recognition, we systematically investigated ten critical properties of where, when, and how context modulates recognition, including the amount of context, context and object resolution, geometrical structure of context, context congruence, and temporal dynamics of contextual modulation.
1 code implementation • 23 May 2019 • Mengmi Zhang, Tao Wang, Joo Hwee Lim, Gabriel Kreiman, Jiashi Feng
In each classification task, our method learns a set of variational prototypes with their means and variances, such that embeddings of samples from the same class are captured by a prototypical distribution while class-representative prototypes remain well separated.
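A stripped-down sketch of class prototypes with means and variances (the diagonal-Gaussian scoring rule and all names are assumptions; the paper's variational formulation is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_prototypes(embeddings, labels):
    """One (mean, variance) prototype per class, from its embeddings."""
    protos = {}
    for c in np.unique(labels):
        e = embeddings[labels == c]
        protos[c] = (e.mean(axis=0), e.var(axis=0) + 1e-6)
    return protos

def classify(x, protos):
    # Score each class by a diagonal-Gaussian log-density at x
    scores = {c: -0.5 * np.sum((x - m) ** 2 / v + np.log(v))
              for c, (m, v) in protos.items()}
    return max(scores, key=scores.get)

# Two well-separated toy classes in an 8-d embedding space
emb = np.concatenate([rng.normal(0, 1, size=(20, 8)),
                      rng.normal(4, 1, size=(20, 8))])
lab = np.array([0] * 20 + [1] * 20)
protos = fit_prototypes(emb, lab)
pred = classify(np.full(8, 4.0), protos)   # a point near class 1's mean
```

Representing each prototype as a distribution rather than a point lets the classifier weigh per-dimension uncertainty when assigning new samples.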
1 code implementation • 1 May 2019 • Will Xiao, Gabriel Kreiman
To circumvent this problem, we developed a method for gradient-free activation maximization by combining a generative neural network with a genetic algorithm.
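The core loop can be sketched with toy stand-ins for the generator and the neuron (everything below is an illustrative assumption, not the paper's models): a genetic algorithm evolves latent codes using only the neuron's responses, never its gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a linear "generator" from latent codes to images, and a
# linear "neuron" whose activation we want to maximize.
G = rng.normal(size=(16, 64))          # latent dim 16 -> image dim 64
w_neuron = rng.normal(size=64)

def generate(codes):
    return codes @ G                    # (pop, 64) "images"

def activation(images):
    return images @ w_neuron            # scalar response per image

pop = rng.normal(size=(32, 16))        # initial population of latent codes
fit0 = activation(generate(pop)).max()

for _ in range(50):
    fitness = activation(generate(pop))      # fitness = neuron response only
    order = np.argsort(fitness)[::-1]
    parents = pop[order[:8]]                 # selection: keep the top 8 codes
    # Recombination: average random parent pairs; mutation: Gaussian noise
    idx = rng.integers(0, 8, size=(32, 2))
    pop = parents[idx].mean(axis=1) + 0.1 * rng.normal(size=(32, 16))
    pop[:8] = parents                        # elitism: preserve the best codes

best_activation = activation(generate(pop)).max()
```

Because the search operates on the generator's latent codes and uses only scalar responses as fitness, it needs no gradient access to the "neuron", which is what makes the approach applicable to biological neurons.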
no code implementations • 1 Feb 2019 • Mengmi Zhang, Claire Tseng, Karla Montejo, Joseph Kwon, Gabriel Kreiman
Context reasoning is critical in a wide variety of applications where current inputs need to be interpreted in the light of previous experience and knowledge.
1 code implementation • 31 Jul 2018 • Mengmi Zhang, Gabriel Kreiman
Using those error fixations, we developed a model (InferNet) to infer what the target was.
no code implementations • 28 May 2018 • William Lotter, Gabriel Kreiman, David Cox
Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial.
no code implementations • 6 Mar 2018 • Kevin Wu, Eric Wu, Gabriel Kreiman
We use a biologically inspired two-part convolutional neural network ('GistNet') that models the fovea and periphery to provide a proof-of-principle demonstration that computational object recognition can significantly benefit from the gist of the scene as contextual information.
1 code implementation • 7 Jun 2017 • Hanlin Tang, Martin Schrimpf, Bill Lotter, Charlotte Moerman, Ana Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, Gabriel Kreiman
First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking.
1 code implementation • 23 Mar 2017 • Nicholas Cheney, Martin Schrimpf, Gabriel Kreiman
We show that convolutional networks are surprisingly robust to a range of internal perturbations in the higher convolutional layers, whereas the bottom convolutional layers are much more fragile.
17 code implementations • 25 May 2016 • William Lotter, Gabriel Kreiman, David Cox
Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world.
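A minimal sketch of next-frame prediction as an unsupervised objective (a linear predictor on a toy shifting-pattern "video"; all specifics are illustrative, and the paper's PredNet architecture is far richer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": a fixed pattern that translates (circularly) by one step
# per frame, so the next frame is perfectly predictable from the current one.
T, D = 200, 16
base = rng.normal(size=D)
frames = np.stack([np.roll(base, t % D) for t in range(T)])

W = np.zeros((D, D))                          # linear next-frame predictor
lr = 0.05
mse0 = np.mean(frames[1:] ** 2)               # error of predicting all zeros

for _ in range(300):
    pred = frames[:-1] @ W                    # predict frame t+1 from frame t
    err = pred - frames[1:]
    W -= lr * frames[:-1].T @ err / (T - 1)   # gradient step on the MSE

mse = np.mean((frames[:-1] @ W - frames[1:]) ** 2)
```

The only training signal is the future frame itself, so the predictor is forced to discover the structure of the "world" (here, the translation) without any labels — the core idea behind using prediction as an unsupervised learning rule.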
2 code implementations • 19 Nov 2015 • William Lotter, Gabriel Kreiman, David Cox
The ability to predict future states of the environment is a central pillar of intelligence.