no code implementations • 26 Mar 2024 • Oscar Mañas, Pietro Astolfi, Melissa Hall, Candace Ross, Jack Urbanek, Adina Williams, Aishwarya Agrawal, Adriana Romero-Soriano, Michal Drozdzal
In this paper, we address these challenges and introduce a T2I optimization-by-prompting framework, OPT2I, which leverages a large language model (LLM) to improve prompt-image consistency in T2I models.
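The snippet above describes an optimization-by-prompting loop: an LLM proposes revised prompts, each candidate is scored for prompt-image consistency, and the best-scoring prompt seeds the next round. A minimal sketch of that generic loop is below; `consistency_score` and `llm_revise` are hypothetical stubs standing in for the paper's actual scorer and LLM, not its implementation.

```python
def consistency_score(prompt: str) -> float:
    # Stub for a prompt-image consistency metric (in the paper's
    # setting this would involve generating images and scoring them).
    # Toy heuristic: reward longer, more detailed prompts.
    return min(len(prompt) / 100.0, 1.0)


def llm_revise(prompt: str, history: list) -> list[str]:
    # Stub for the LLM that proposes revised prompts, conditioned on
    # past (prompt, score) pairs. Here: two fixed elaborations.
    return [prompt + " in sharp detail", prompt + ", highly detailed"]


def optimize_prompt(seed_prompt: str, iterations: int = 3) -> str:
    """Propose, score, keep the best -- repeated for a few rounds."""
    history = [(seed_prompt, consistency_score(seed_prompt))]
    for _ in range(iterations):
        best_prompt, _ = max(history, key=lambda pair: pair[1])
        for candidate in llm_revise(best_prompt, history):
            history.append((candidate, consistency_score(candidate)))
    return max(history, key=lambda pair: pair[1])[0]
```

Because the seed prompt stays in the candidate pool, the returned prompt never scores worse than the seed under the chosen metric.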
no code implementations • 21 Mar 2024 • Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero-Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo
Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of images that they are trained on, which may be undesirable.

1 code implementation • 3 Jan 2024 • Aarash Feizi, Randall Balestriero, Adriana Romero-Soriano, Reihaneh Rabbany
Any prior knowledge can now be embedded into that metric space independently of the employed DA.
1 code implementation • 14 Dec 2023 • Jack Urbanek, Florian Bordes, Pietro Astolfi, Mary Williamson, Vasu Sharma, Adriana Romero-Soriano
Curation methods for massive vision-language datasets trade off between dataset size and quality.
no code implementations • 29 Sep 2023 • Reyhane Askari Hemmat, Mohammad Pezeshki, Florian Bordes, Michal Drozdzal, Adriana Romero-Soriano
In this work, we introduce a framework for augmenting static datasets with useful synthetic samples, which leverages one-shot feedback from the classifier to drive the sampling of the generative model.
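The entry above describes using classifier feedback to decide which synthetic samples are worth adding to a static dataset. Below is a minimal sketch of that idea under simplifying assumptions: a toy 1-D "generator" and a stub confidence function stand in for the paper's generative model and classifier, and "useful" is approximated here as "low classifier confidence", which need not match the paper's actual criterion.

```python
import random


def classifier_confidence(x: float) -> float:
    # Stub classifier feedback: confident near 0, uncertain far away.
    return max(0.0, 1.0 - abs(x) / 10.0)


def generator(n: int) -> list[float]:
    # Stub generative model: deterministic toy samples in [-10, 10].
    rng = random.Random(0)
    return [rng.uniform(-10, 10) for _ in range(n)]


def augment(dataset: list[float], n_candidates: int = 100, k: int = 10) -> list[float]:
    """Draw candidates, then keep the k samples the classifier is
    least confident on -- feedback driving the selection."""
    candidates = sorted(generator(n_candidates), key=classifier_confidence)
    return dataset + candidates[:k]
```

The design choice sketched here (keep the hardest samples) is one common instantiation of feedback-guided augmentation; the paper's framework specifies its own feedback signal and sampling mechanism.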
1 code implementation • 27 May 2023 • Liheng Ma, Chen Lin, Derek Lim, Adriana Romero-Soriano, Puneet K. Dokania, Mark Coates, Philip Torr, Ser-Nam Lim
Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings.
Ranked #1 on Node Classification on PATTERN
1 code implementation • 15 May 2023 • Enrico Fini, Pietro Astolfi, Adriana Romero-Soriano, Jakob Verbeek, Michal Drozdzal
Indeed, we find that a simple CLIP baseline can also be improved substantially, up to a 25% relative improvement on downstream zero-shot tasks, by using well-known training techniques that are popular in other subfields.
no code implementations • 26 Apr 2023 • Arantxa Casanova, Marlène Careil, Adriana Romero-Soriano, Christopher J. Pal, Jakob Verbeek, Michal Drozdzal
Our experiments on the OI dataset show that M&Ms outperforms baselines in terms of fine-grained scene controllability while being very competitive in terms of image quality and sample diversity.
no code implementations • 16 Mar 2023 • Pietro Astolfi, Arantxa Casanova, Jakob Verbeek, Pascal Vincent, Adriana Romero-Soriano, Michal Drozdzal
We showcase the benefits of DA_IC-GAN by plugging it out-of-the-box into the supervised training of ResNets and DeiT models on the ImageNet dataset, achieving accuracy boosts of 1 to 2 percentage points with the highest-capacity models.
1 code implementation • 15 Feb 2023 • Bahare Fatemi, Quentin Duval, Rohit Girdhar, Michal Drozdzal, Adriana Romero-Soriano
Recipe personalization through ingredient substitution has the potential to help people meet their dietary needs and preferences, avoid potential allergens, and ease culinary exploration in everyone's kitchen.
1 code implementation • 2 Jan 2023 • Sumana Basu, Marc-André Legault, Adriana Romero-Soriano, Doina Precup
Drug dosing is an important application of AI, which can be formulated as a Reinforcement Learning (RL) problem.
1 code implementation • 3 Oct 2022 • Edward J. Smith, Michal Drozdzal, Derek Nowrouzezahrai, David Meger, Adriana Romero-Soriano
We evaluate our proposed approach on the ABC dataset and the in-the-wild CO3D dataset, and show that: (1) we are able to obtain high-quality, state-of-the-art occupancy reconstructions; (2) our perspective-conditioned uncertainty definition is effective in driving improvements in next best view selection and outperforms strong baseline approaches; and (3) we can further improve shape understanding by performing a gradient-based search on the view selection candidates.
1 code implementation • 20 Jul 2022 • Aarash Feizi, Arantxa Casanova, Adriana Romero-Soriano, Reihaneh Rabbany
In this paper, we propose revised versions of two recent hotel recognition datasets: Hotels50K and Hotel-ID.
1 code implementation • 30 Mar 2022 • Tim Bakker, Matthew Muckley, Adriana Romero-Soriano, Michal Drozdzal, Luis Pineda
Most current approaches to undersampled multi-coil MRI reconstruction focus on learning the reconstruction model for a fixed, equidistant acquisition trajectory.
1 code implementation • NeurIPS 2021 • Boris Knyazev, Michal Drozdzal, Graham W. Taylor, Adriana Romero-Soriano
We introduce a large-scale dataset of diverse computational graphs of neural architectures, DeepNets-1M, and use it to explore parameter prediction on CIFAR-10 and ImageNet.
Ranked #1 on Parameter Prediction on CIFAR10
1 code implementation • NeurIPS 2021 • Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michal Drozdzal, Adriana Romero-Soriano
Generative Adversarial Networks (GANs) can generate near-photorealistic images in narrow domains such as human faces.
Ranked #1 on Conditional Image Generation on ImageNet 64x64
no code implementations • 9 May 2021 • Liheng Ma, Reihaneh Rabbany, Adriana Romero-Soriano
In this framework, the positional embeddings are learned by a model that predicts the graph context, and are plugged into an enhanced GAT architecture able to leverage both the positional and content information of each node.
no code implementations • 7 Dec 2020 • Arantxa Casanova, Michal Drozdzal, Adriana Romero-Soriano
In this paper, we propose a methodology to compare complex scene conditional generation models, and provide an in-depth analysis that assesses the ability of each model to (1) fit the training distribution and hence perform well on seen conditionings, (2) generalize to unseen conditionings composed of seen object combinations, and (3) generalize to unseen conditionings composed of unseen object combinations.