Search Results for author: Dina Katabi

Found 35 papers, 18 papers with code

Learning Vision from Models Rivals Learning Vision from Data

1 code implementation28 Dec 2023 Yonglong Tian, Lijie Fan, KaiFeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola

We introduce SynCLR, a novel approach for learning visual representations exclusively from synthetic images and synthetic captions, without any real data.

Contrastive Learning Image Captioning +3

The Limits of Fair Medical Imaging AI In The Wild

1 code implementation11 Dec 2023 Yuzhe Yang, Haoran Zhang, Judy W Gichoya, Dina Katabi, Marzyeh Ghassemi

As artificial intelligence (AI) rapidly approaches human-level performance in medical imaging, it is crucial that it does not exacerbate or propagate healthcare disparities.

Fairness

Scaling Laws of Synthetic Images for Model Training ... for Now

1 code implementation7 Dec 2023 Lijie Fan, KaiFeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian

Our findings also suggest that scaling synthetic data can be particularly effective in scenarios such as: (1) when there is a limited supply of real images for a supervised problem (e.g., fewer than 0.5 million images in ImageNet), (2) when the evaluation dataset diverges significantly from the training data, indicating the out-of-distribution scenario, or (3) when synthetic data is used in conjunction with real images, as demonstrated in the training of CLIP models.

Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency

no code implementations5 Oct 2023 Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan

We demonstrate image generation and captioning performance on par with state-of-the-art text-to-image and image-to-text models with orders of magnitude fewer (only 3M) paired image-text data.

Text-to-Image Generation

Unsupervised Object Localization with Representer Point Selection

1 code implementation ICCV 2023 Yeonghwan Song, Seokwoo Jang, Dina Katabi, Jeany Son

We propose a novel unsupervised object localization method that allows us to explain the predictions of the model by utilizing self-supervised pre-trained models without additional finetuning.

Object Unsupervised Object Localization

Improving CLIP Training with Language Rewrites

1 code implementation NeurIPS 2023 Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, Yonglong Tian

During training, LaCLIP randomly selects either the original texts or the rewritten versions as text augmentations for each image.

In-Context Learning Sentence
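
The augmentation step described in the excerpt above is simple to illustrate. Below is a minimal sketch, not the released LaCLIP code: for each image, one caption is drawn uniformly from the original text and its rewritten variants before the contrastive loss is computed. The captions here are invented for illustration.

```python
import random

def sample_caption(original: str, rewrites: list[str]) -> str:
    """Pick one caption uniformly from the original and its rewritten variants."""
    return random.choice([original] + rewrites)

# Hypothetical usage with invented captions:
caption = sample_caption(
    "a dog running on the beach",
    ["a brown dog sprints along the shoreline", "a puppy races across wet sand"],
)
print(caption)
```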

Change is Hard: A Closer Look at Subpopulation Shift

1 code implementation23 Feb 2023 Yuzhe Yang, Haoran Zhang, Dina Katabi, Marzyeh Ghassemi

Machine learning models often perform poorly on subgroups that are underrepresented in the training data.

Model Selection

Contactless Oxygen Monitoring with Gated Transformer

no code implementations6 Dec 2022 Hao He, Yuan Yuan, Ying-Cong Chen, Peng Cao, Dina Katabi

With the increasing popularity of telehealth, it becomes critical to ensure that basic physiological signals can be monitored accurately at home, with minimal patient overhead.

SimPer: Simple Self-Supervised Learning of Periodic Targets

1 code implementation6 Oct 2022 Yuzhe Yang, Xin Liu, Jiang Wu, Silviu Borac, Dina Katabi, Ming-Zher Poh, Daniel McDuff

From human physiology to environmental evolution, important processes in nature often exhibit meaningful and strong periodic or quasi-periodic changes.

Inductive Bias Self-Supervised Learning

Unsupervised Learning for Human Sensing Using Radio Signals

no code implementations6 Jul 2022 Tianhong Li, Lijie Fan, Yuan Yuan, Dina Katabi

Thus, in this paper, we explore the feasibility of adapting RGB-based unsupervised representation learning to RF signals.

Action Recognition Contrastive Learning +3

On Multi-Domain Long-Tailed Recognition, Imbalanced Domain Generalization and Beyond

1 code implementation17 Mar 2022 Yuzhe Yang, Hao Wang, Dina Katabi

We first develop the domain-class transferability graph, and show that such transferability governs the success of learning in MDLT.

Domain Generalization

Unsupervised Domain Generalization by Learning a Bridge Across Domains

1 code implementation CVPR 2022 Sivan Harary, Eli Schwartz, Assaf Arbelle, Peter Staar, Shady Abu-Hussein, Elad Amrani, Roei Herzig, Amit Alfassy, Raja Giryes, Hilde Kuehne, Dina Katabi, Kate Saenko, Rogerio Feris, Leonid Karlinsky

The ability to generalize learned representations across significantly different visual domains, such as between real photos, clipart, paintings, and sketches, is a fundamental capacity of the human visual system.

Domain Generalization Self-Supervised Learning

Targeted Supervised Contrastive Learning for Long-Tailed Recognition

1 code implementation CVPR 2022 Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio Feris, Piotr Indyk, Dina Katabi

This forces all classes, including minority classes, to maintain a uniform distribution in the feature space, improves class boundaries, and provides better generalization even in the presence of long-tail data.

Contrastive Learning Long-tail Learning
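
As a rough sketch of the property the excerpt above refers to (my own simplification under stated assumptions, not the paper's released code): class targets can be pre-optimized to sit as far apart as possible on the unit hypersphere, so minority classes get the same amount of feature-space room as majority classes.

```python
import torch
import torch.nn.functional as F

def uniform_class_targets(num_classes: int, dim: int, steps: int = 500) -> torch.Tensor:
    """Spread `num_classes` unit vectors apart by penalizing the largest pairwise similarity."""
    targets = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([targets], lr=0.1)
    for _ in range(steps):
        t = F.normalize(targets, dim=1)
        sim = t @ t.T - torch.eye(num_classes)  # pairwise cosine similarity, diagonal removed
        loss = sim.max(dim=1).values.mean()     # push apart the closest pairs
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(targets, dim=1).detach()

targets = uniform_class_targets(num_classes=10, dim=128)
```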

Delving into Deep Imbalanced Regression

1 code implementation18 Feb 2021 Yuzhe Yang, Kaiwen Zha, Ying-Cong Chen, Hao Wang, Dina Katabi

We define Deep Imbalanced Regression (DIR) as learning from such imbalanced data with continuous targets, dealing with potential missing data for certain target values, and generalizing to the entire target range.

regression

Learning Blood Oxygen from Respiration Signals

no code implementations1 Jan 2021 Hao He, Ying-Cong Chen, Yuan Yuan, Dina Katabi

Further, since breathing can be monitored without body contact by analyzing the radio signal in the environment, we show that oxygen too can be monitored without any wearable devices.

Addressing Feature Suppression in Unsupervised Visual Representations

no code implementations17 Dec 2020 Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, Dina Katabi

However, contrastive learning is susceptible to feature suppression, i.e., it may discard important information relevant to the task of interest and learn irrelevant features.

Attribute Contrastive Learning +1

In-Home Daily-Life Captioning Using Radio Signals

no code implementations ECCV 2020 Lijie Fan, Tianhong Li, Yuan Yuan, Dina Katabi

This paper aims to caption daily life, i.e., to create a textual description of people's activities and interactions with objects in their homes.

Privacy Preserving Video Captioning

Continuously Indexed Domain Adaptation

1 code implementation ICML 2020 Hao Wang, Hao He, Dina Katabi

Our empirical results show that our approach outperforms state-of-the-art domain adaptation methods on both synthetic and real-world medical datasets.

Continuously Indexed Domain Adaptation

Self-Supervised Learning of Appliance Usage

no code implementations ICLR 2020 Chen-Yu Hsu, Abbas Zeitoun, Guang-He Lee, Dina Katabi, Tommi Jaakkola

We show that this cross-modal prediction task allows us to detect when a particular appliance is used, and the location of the appliance in the home, all in a self-supervised manner, without any labeled data.

Event Detection Self-Supervised Learning +1

Learning Longterm Representations for Person Re-Identification Using Radio Signals

no code implementations CVPR 2020 Lijie Fan, Tianhong Li, Rongyao Fang, Rumen Hristov, Yuan Yuan, Dina Katabi

RF signals traverse clothes and reflect off the human body; thus they can be used to extract more persistent human-identifying features like body size and shape.

Person Re-Identification Privacy Preserving

Learning Compositional Koopman Operators for Model-Based Control

no code implementations ICLR 2020 Yunzhu Li, Hao He, Jiajun Wu, Dina Katabi, Antonio Torralba

Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis.

Harnessing Structures for Value-Based Planning and Reinforcement Learning

1 code implementation ICLR 2020 Yuzhe Yang, Guo Zhang, Zhi Xu, Dina Katabi

In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., the Q function, for both planning and deep RL.

Atari Games reinforcement-learning +1
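
As a hedged illustration of the structure mentioned in the excerpt above (my own simplification, not the paper's pipeline): if Q(state, action) is approximately low-rank, a truncated SVD of a noisy copy can recover a cleaner estimate to plan with.

```python
import numpy as np

def low_rank_q(q_table: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of a |S| x |A| Q table via truncated SVD."""
    u, s, vt = np.linalg.svd(q_table, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(0)
true_q = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 6))  # rank-2 "ground truth"
noisy_q = true_q + 0.1 * rng.standard_normal(true_q.shape)
q_hat = low_rank_q(noisy_q, rank=2)
print(np.abs(noisy_q - true_q).mean(), np.abs(q_hat - true_q).mean())  # denoised error is typically smaller
```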

ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation

1 code implementation28 May 2019 Yuzhe Yang, Guo Zhang, Dina Katabi, Zhi Xu

We show that this process destroys the adversarial structure of the noise, while re-enforcing the global structure in the original image.

Adversarial Robustness
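
The "process" in the excerpt above is a preprocessing step. The sketch below is my own simplification, not the released ME-Net code: randomly drop pixels, then reconstruct the image with a low-rank matrix estimate, which tends to preserve global structure while discarding fine-grained (adversarial) perturbations.

```python
import numpy as np

def mask_and_reconstruct(img: np.ndarray, keep_prob: float = 0.5, rank: int = 20) -> np.ndarray:
    """Randomly keep a fraction of pixels, then reconstruct via a truncated-SVD (USVT-style) estimate."""
    rng = np.random.default_rng()
    mask = rng.random(img.shape) < keep_prob
    observed = np.where(mask, img, 0.0) / keep_prob  # zero out dropped pixels, rescale the rest
    u, s, vt = np.linalg.svd(observed, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Example on a random 32x32 grayscale array; a real pipeline would feed the output to the classifier.
recon = mask_and_reconstruct(np.random.default_rng(0).random((32, 32)))
```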

Learning-Based Frequency Estimation Algorithms

no code implementations ICLR 2019 Chen-Yu Hsu, Piotr Indyk, Dina Katabi, Ali Vakilian

Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning.

BIG-bench Machine Learning
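
For context on the task in the excerpt above: the classical baseline is a Count-Min sketch, and learning-based algorithms in this line of work typically combine such a sketch with a learned model that routes predicted heavy hitters to exact counters. The snippet below is only the generic Count-Min baseline, not the paper's algorithm.

```python
import random

class CountMinSketch:
    """Classic Count-Min sketch: frequency estimates that can only overestimate."""

    def __init__(self, width: int = 2048, depth: int = 4, seed: int = 0):
        rng = random.Random(seed)
        self.width = width
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.tables = [[0] * width for _ in range(depth)]

    def _index(self, item: str, seed: int) -> int:
        return hash((seed, item)) % self.width

    def add(self, item: str, count: int = 1) -> None:
        for seed, table in zip(self.seeds, self.tables):
            table[self._index(item, seed)] += count

    def estimate(self, item: str) -> int:
        return min(table[self._index(item, seed)] for seed, table in zip(self.seeds, self.tables))

cms = CountMinSketch()
for token in "a b a c a".split():
    cms.add(token)
print(cms.estimate("a"))  # at least 3
```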

Bidirectional Inference Networks: A Class of Deep Bayesian Networks for Health Profiling

no code implementations6 Feb 2019 Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi

We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).

Computational Efficiency EEG +2

Through-Wall Human Pose Estimation Using Radio Signals

no code implementations CVPR 2018 Ming-Min Zhao, Tianhong Li, Mohammad Abu Alsheikh, Yonglong Tian, Hang Zhao, Antonio Torralba, Dina Katabi

Yet, unlike vision-based pose estimation, the radio-based system can estimate 2D poses through walls, despite never being trained on such scenarios.

RF-based Pose Estimation
