Search Results for author: Aida Nematzadeh

Found 27 papers, 13 papers with code

Vision-Language Pretraining: Current Trends and the Future

no code implementations • ACL 2022 • Aishwarya Agrawal, Damien Teney, Aida Nematzadeh

In addition to the larger pretraining datasets, the transformer architecture (Vaswani et al., 2017) and in particular self-attention applied to two modalities are responsible for the impressive performance of the recent pretrained models on downstream tasks (Hendricks et al., 2021).

Question Answering • Representation Learning +1

How FaR Are Large Language Models From Agents with Theory-of-Mind?

no code implementations • 4 Oct 2023 • Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, Manaal Faruqui

We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios.

In-Context Learning • Question Answering

Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining

1 code implementation • 23 May 2023 • Emanuele Bugliarello, Aida Nematzadeh, Lisa Anne Hendricks

Recent work in vision-and-language pretraining has investigated supervised signals from object detection data to learn better, fine-grained multimodal representations.

Object Detection +2

Measuring Progress in Fine-grained Vision-and-Language Understanding

2 code implementations • 12 May 2023 • Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh

While pretraining on large-scale image-text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images.

Visual Reasoning

Evaluating Visual Number Discrimination in Deep Neural Networks

no code implementations • 13 Mar 2023 • Ivana Kajić, Aida Nematzadeh

The ability to discriminate between large and small quantities is a core aspect of basic numerical competence in both humans and animals.

Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches

no code implementations • 15 Nov 2022 • Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh

People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.

Grounded language learning

Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization

no code implementations • 24 May 2022 • Aishwarya Agrawal, Ivana Kajić, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, Aida Nematzadeh

Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks such as image captioning and visual question answering (VQA).

Image Captioning • Out-of-Distribution Generalization +3

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

2 code implementations • 2021 • Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Abstract Algebra • Anachronisms +133

A Systematic Investigation of Commonsense Knowledge in Large Language Models

no code implementations • 31 Oct 2021 • Xiang Lorraine Li, Adhiguna Kuncoro, Jordan Hoffmann, Cyprien de Masson d'Autume, Phil Blunsom, Aida Nematzadeh

Language models (LMs) trained on large amounts of data have shown impressive performance on many NLP tasks under the zero-shot and few-shot setup.

Probing Image-Language Transformers for Verb Understanding

1 code implementation • Findings (ACL) 2021 • Lisa Anne Hendricks, Aida Nematzadeh

Multimodal image-language transformers have achieved impressive results on a variety of tasks that rely on fine-tuning (e.g., visual question answering and image retrieval).

Image Retrieval • Question Answering +3

Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers

1 code implementation • 31 Jan 2021 • Lisa Anne Hendricks, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, Aida Nematzadeh

Recently, multimodal transformer models have gained popularity because their performance on language and vision tasks suggests they learn rich visual-linguistic representations.

Image Retrieval • Retrieval +2

Competition in Cross-situational Word Learning: A Computational Study

no code implementations • 6 Dec 2020 • Aida Nematzadeh, Zahra Shekarchi, Thomas L. Griffiths, Suzanne Stevenson

Children learn word meanings by tapping into the commonalities across different situations in which words are used and overcome the high level of uncertainty involved in early word learning experiences.

Visual Grounding in Video for Unsupervised Word Translation

1 code implementation • CVPR 2020 • Gunnar A. Sigurdsson, Jean-Baptiste Alayrac, Aida Nematzadeh, Lucas Smaira, Mateusz Malinowski, João Carreira, Phil Blunsom, Andrew Zisserman

Given this shared embedding we demonstrate that (i) we can map words between the languages, particularly the 'visual' words; (ii) that the shared embedding provides a good initialization for existing unsupervised text-based word translation techniques, forming the basis for our proposed hybrid visual-text mapping algorithm, MUVE; and (iii) our approach achieves superior performance by addressing the shortcomings of text-based methods -- it is more robust, handles datasets with less commonality, and is applicable to low-resource languages.

Translation • Visual Grounding +1

Language Learning and Processing in People and Machines

no code implementations • NAACL 2019 • Aida Nematzadeh, Richard Futrell, Roger Levy

We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications.

Language Acquisition • Machine Translation +2

Exploiting Attention to Reveal Shortcomings in Memory Models

no code implementations • WS 2018 • Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Tom Griffiths

The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity.

BIG-bench Machine Learning • Decision Making +2

Evaluating Theory of Mind in Question Answering

2 code implementations • EMNLP 2018 • Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, Thomas L. Griffiths

We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs.

Question Answering

Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels

no code implementations • 19 May 2018 • Joshua C. Peterson, Paul Soulos, Aida Nematzadeh, Thomas L. Griffiths

Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations.

Predicting and Explaining Human Semantic Search in a Cognitive Model

2 code implementations • WS 2018 • Filip Miscevic, Aida Nematzadeh, Suzanne Stevenson

Recent work has attempted to characterize the structure of semantic memory and the search algorithms which, together, best approximate human patterns of search revealed in a semantic fluency task.

Language Acquisition

Calculating Probabilities Simplifies Word Learning

no code implementations • 22 Feb 2017 • Aida Nematzadeh, Barend Beekhuizen, Shanshan Huang, Suzanne Stevenson

Children can use the statistical regularities of their environment to learn word meanings, a mechanism known as cross-situational learning.

The Interaction of Memory and Attention in Novel Word Generalization: A Computational Investigation

1 code implementation • 18 Feb 2016 • Erin Grant, Aida Nematzadeh, Suzanne Stevenson

People exhibit a tendency to generalize a novel noun to the basic-level in a hierarchical taxonomy -- a cognitively salient category such as "dog" -- with the degree of generalization depending on the number and type of exemplars.

Simple Search Algorithms on Semantic Networks Learned from Language Use

1 code implementation • 10 Feb 2016 • Aida Nematzadeh, Filip Miscevic, Suzanne Stevenson

Recent empirical and modeling research has focused on the semantic fluency task because it is informative about semantic memory.

Open-Ended Question Answering • Retrieval
