Search Results for author: Deqing Fu

Found 8 papers, 1 paper with code

IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations

no code implementations • 1 Apr 2024 • Deqing Fu, Ghazal Khalighinejad, Ollie Liu, Bhuwan Dhingra, Dani Yogatama, Robin Jia, Willie Neiswanger

Current foundation models exhibit impressive capabilities when prompted either with text only or with both image and text inputs.

Benchmarking, Math

Simplicity Bias of Transformers to Learn Low Sensitivity Functions

no code implementations • 11 Mar 2024 • Bhavya Vasudeva, Deqing Fu, Tianyi Zhou, Elliott Kau, Youqi Huang, Vatsal Sharan

Transformers achieve state-of-the-art accuracy and robustness across many tasks, but an understanding of the inductive biases they have, and of how those biases differ from those of other neural network architectures, remains elusive.
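For context on the "low sensitivity functions" in the title: the standard notion of sensitivity for Boolean functions is given below. That this is the exact notion used in the paper is an assumption based on the title, not something stated in this listing.

```latex
% Standard sensitivity of a Boolean function f: {0,1}^n -> {0,1}
% (assumed to be the notion referenced in the paper title).
\[
  s(f, x) \;=\; \bigl|\{\, i \in [n] : f(x) \neq f(x^{\oplus i}) \,\}\bigr|,
  \qquad
  s(f) \;=\; \max_{x \in \{0,1\}^n} s(f, x),
\]
% where x^{\oplus i} is x with its i-th bit flipped. Low-sensitivity functions
% change their output on few single-bit perturbations of the input.
```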

DeLLMa: A Framework for Decision Making Under Uncertainty with Large Language Models

no code implementations • 4 Feb 2024 • Ollie Liu, Deqing Fu, Dani Yogatama, Willie Neiswanger

Large language models (LLMs) are increasingly used across society, including in domains like business, engineering, and medicine.

Decision Making, Decision Making Under Uncertainty, +2

DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback

no code implementations • 29 Nov 2023 • Jiao Sun, Deqing Fu, Yushi Hu, Su Wang, Royi Rassin, Da-Cheng Juan, Dana Alon, Charles Herrmann, Sjoerd van Steenkiste, Ranjay Krishna, Cyrus Rashtchian

Then, it uses two VLMs to select the best generation: a Visual Question Answering model that measures the alignment of generated images to the text, and another that measures the generation's aesthetic quality.

Question Answering, Text-to-Image Generation, +1
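The DreamSync excerpt describes a candidate-reranking step: generate several images for a prompt, score each with two VLMs (text faithfulness and aesthetic quality), and keep the best. A minimal Python sketch of that selection logic, with hypothetical `faithfulness_score` and `aesthetic_score` functions standing in for the actual VLMs, and an illustrative threshold not taken from the paper:

```python
from typing import Callable, Sequence


def select_best_image(
    images: Sequence,                 # candidate generations for one prompt
    prompt: str,
    faithfulness_score: Callable,     # hypothetical VQA-based text-image alignment scorer
    aesthetic_score: Callable,        # hypothetical aesthetic-quality scorer
    min_faithfulness: float = 0.5,    # illustrative threshold, not from the paper
):
    """Pick the candidate that best matches the prompt, breaking ties by aesthetics.

    A sketch of the two-VLM selection described in the abstract excerpt,
    not DreamSync's actual implementation.
    """
    scored = [
        (faithfulness_score(img, prompt), aesthetic_score(img), img)
        for img in images
    ]
    # Prefer candidates that are faithful enough to the prompt; fall back to all.
    faithful = [s for s in scored if s[0] >= min_faithfulness] or scored
    return max(faithful, key=lambda s: (s[0], s[1]))[2]
```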

Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models

no code implementations • 26 Oct 2023 • Deqing Fu, Tian-Qi Chen, Robin Jia, Vatsal Sharan

In this paper, we instead demonstrate that Transformers learn to implement higher-order optimization methods to perform ICL.

In-Context Learning
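To make the first-order versus higher-order contrast concrete on the paper's linear-model setting, the Newton-style update below is one standard higher-order method for in-context linear regression; it is shown as an illustration, not as the paper's exact construction.

```latex
% In-context linear regression from prompt examples (X, y).
% First-order (gradient-descent) update:
%   w_{t+1} = w_t - \eta X^\top (X w_t - y).
% Newton-style (second-order) update, which also uses the curvature X^\top X:
\[
  w_{t+1} \;=\; w_t \;-\; \bigl(X^\top X\bigr)^{-1} X^\top \bigl(X w_t - y\bigr).
\]
% For this quadratic objective, a single exact Newton step already reaches the
% least-squares solution; approximating the inverse iteratively yields a family
% of higher-order methods that converge much faster than gradient descent.
```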

SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples

1 code implementation • 13 May 2023 • Deqing Fu, Ameya Godbole, Robin Jia

In this work, we propose Self-labeled Counterfactuals for Extrapolating to Negative Examples (SCENE), an automatic method for synthesizing training data that greatly improves models' ability to detect challenging negative examples.

Data Augmentation, Natural Language Inference, +2

Topological Regularization for Dense Prediction

no code implementations • 22 Nov 2021 • Deqing Fu, Bradley J. Nelson

Dense prediction tasks such as depth perception and semantic segmentation are important applications in computer vision with a concrete topological description: partitioning an image into connected components, or estimating a function with a small number of local extrema corresponding to objects in the image.

Semantic Segmentation
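Topological descriptions of this kind are typically made differentiable via persistent homology. The penalty below is a generic regularizer of that family, written as an illustrative assumption rather than the paper's exact formulation:

```latex
% Illustrative topological regularizer (an assumption, not necessarily the
% paper's exact loss): given the persistence diagram of the predicted map with
% birth/death pairs (b_j, d_j) sorted by persistence, and a target topology
% allowing k features (e.g. k connected components), penalize everything else:
\[
  \mathcal{L}_{\mathrm{topo}} \;=\; \sum_{j > k} \bigl(d_j - b_j\bigr)^2 .
\]
```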

Harnessing the Conditioning Sensorium for Improved Image Translation

no code implementations • ICCV 2021 • Cooper Nederhood, Nicholas Kolkin, Deqing Fu, Jason Salavon

Multi-modal domain translation typically refers to synthesizing a novel image that inherits certain localized attributes from a 'content' image (e.g. layout, semantics, or geometry), and inherits everything else (e.g. texture, lighting, sometimes even semantics) from a 'style' image.

Translation
