no code implementations • 26 Mar 2024 • Mohammad Shahab Sepehri, Zalan Fabian, Mahdi Soltanolkotabi
The landscape of computational building blocks of efficient image restoration architectures is dominated by a combination of convolutional processing and various attention mechanisms.
no code implementations • 2 Nov 2023 • Zalan Fabian, Zhongqi Miao, Chunyuan Li, Yuanhan Zhang, Ziwei Liu, Andrés Hernández, Andrés Montes-Rojas, Rafael Escucha, Laura Siabatto, Andrés Link, Pablo Arbeláez, Rahul Dodhia, Juan Lavista Ferres
In particular, we instruction tune vision-language models to generate detailed visual descriptions of camera trap images using similar terminology to experts.
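As a rough illustration of what an instruction-tuning record for this kind of visual-description task could look like (the field names and example text below are hypothetical, not the dataset format used in the paper):

```python
# Hypothetical instruction-tuning record for a vision-language model.
# Field names and the example description are illustrative only; they are
# not taken from the paper's actual data format.
sample = {
    "image": "camera_trap/site_12/IMG_0042.jpg",
    "instruction": "Describe the animal in this camera trap image using "
                   "the terminology an expert ecologist would use.",
    "response": "A medium-sized procyonid with a ringed tail and dense "
                "brownish pelage, photographed at night near the forest floor.",
}
```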
no code implementations • 12 Sep 2023 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi
Our framework acts as a wrapper that can be combined with any latent diffusion-based baseline solver, imbuing it with sample-adaptivity and acceleration.
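The wrapper idea can be pictured roughly as follows; the solver interface (init_latent/step/residual/decode), the residual-based stopping rule, and the tolerance are hypothetical placeholders, not the paper's actual adaptivity mechanism:

```python
# Illustrative sketch of a sample-adaptive wrapper around a latent
# diffusion-based solver. The solver interface and stopping rule are
# hypothetical stand-ins for whatever the wrapped baseline provides.
class AdaptiveWrapper:
    def __init__(self, base_solver, max_steps=1000, tol=1e-3):
        self.base_solver = base_solver   # any latent diffusion-based baseline solver
        self.max_steps = max_steps
        self.tol = tol

    def solve(self, observation):
        latent = self.base_solver.init_latent(observation)
        for step in range(self.max_steps):
            latent = self.base_solver.step(latent, observation, step)
            # Per-sample adaptivity: easy samples satisfy the stopping rule
            # early, so fewer reverse-diffusion steps are executed overall.
            if self.base_solver.residual(latent, observation) < self.tol:
                break
        return self.base_solver.decode(latent)
```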
no code implementations • 25 Jul 2023 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr
Quasi-Newton methods still face significant challenges in training large-scale neural networks due to the additional compute cost of Hessian-related computations and instability issues in stochastic training.
no code implementations • 2 Jul 2023 • Sara Babakniya, Zalan Fabian, Chaoyang He, Mahdi Soltanolkotabi, Salman Avestimehr
Deep learning models are prone to forgetting information learned in the past when trained on new data.
no code implementations • 25 Mar 2023 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi
In this work, we propose a novel framework for inverse problem solving, in which we assume that the observation comes from a stochastic degradation process that gradually degrades and adds noise to the original clean image.
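As a rough illustration of such a forward process (not the paper's exact degradation operator or noise schedule), one could interleave a gradual corruption, here Gaussian blur as a stand-in, with additive Gaussian noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(clean, num_steps=10, blur_per_step=0.5, noise_per_step=0.02, seed=0):
    """Toy forward process: gradually blur and add Gaussian noise.

    The blur operator and linear noise schedule are illustrative stand-ins;
    the paper's actual stochastic degradation process may differ.
    """
    rng = np.random.default_rng(seed)
    x = clean.astype(np.float64)
    trajectory = [x.copy()]
    for _ in range(num_steps):
        x = gaussian_filter(x, sigma=blur_per_step)             # gradual degradation
        x = x + noise_per_step * rng.standard_normal(x.shape)   # stochastic noise
        trajectory.append(x.copy())
    return trajectory  # trajectory[-1] plays the role of the observation

# Example: degrade a random 64x64 "image"
obs = degrade(np.random.rand(64, 64))[-1]
```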
2 code implementations • 15 Mar 2022 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi
These models split input images into non-overlapping patches, embed the patches into lower-dimensional tokens and utilize a self-attention mechanism that does not suffer from the aforementioned weaknesses of convolutional architectures.
Ranked #1 on MRI Reconstruction on fastMRI Knee 8x (using extra training data)
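A minimal PyTorch sketch of the patching and self-attention steps described in this entry; the patch size, embedding width, and head count are arbitrary illustrative choices rather than the paper's actual architecture:

```python
import torch
import torch.nn as nn

class PatchSelfAttention(nn.Module):
    """Toy illustration: split an image into non-overlapping patches,
    embed them as lower-dimensional tokens, and apply self-attention.
    Hyperparameters are arbitrary, not those of the paper's model."""

    def __init__(self, in_ch=1, patch=8, dim=64, heads=4):
        super().__init__()
        # Non-overlapping patches via a strided convolution (kernel == stride).
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                              # x: (B, C, H, W)
        tokens = self.embed(x)                         # (B, dim, H/patch, W/patch)
        tokens = tokens.flatten(2).transpose(1, 2)     # (B, num_patches, dim)
        out, _ = self.attn(tokens, tokens, tokens)     # global self-attention
        return out

# Example usage on a dummy 64x64 single-channel image
y = PatchSelfAttention()(torch.randn(1, 1, 64, 64))
```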
no code implementations • 29 Sep 2021 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr
SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse in every iteration (e.g., KFAC); 2) convergence instability due to stochastic training (e.g., L-BFGS).
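To make the first barrier concrete: a full Hessian for a network with n parameters has O(n^2) entries, which is why limited-memory quasi-Newton methods reconstruct a descent direction from a handful of stored curvature pairs instead. Below is the generic L-BFGS two-loop recursion as a sketch of that idea; it is the textbook algorithm, not the SLIM-QN update from the paper:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Generic L-BFGS two-loop recursion: approximates -H^{-1} @ grad using
    stored parameter differences s_k and gradient differences y_k (ordered
    oldest to newest, lists assumed non-empty). Textbook version only."""
    q = grad.copy()
    alphas = []
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)                 # stored newest-to-oldest
    # Initial Hessian scaling from the most recent curvature pair.
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r  # quasi-Newton descent direction

# Toy usage with two stored curvature pairs
g = np.array([1.0, -2.0, 0.5])
s_hist = [np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.2, 0.1])]
y_hist = [np.array([0.2, 0.1, 0.0]), np.array([0.1, 0.3, 0.2])]
direction = lbfgs_direction(g, s_hist, y_hist)
```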
2 code implementations • 28 Jun 2021 • Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi
Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks.
no code implementations • 1 Jan 2021 • Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi
Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a data augmentation pipeline for image reconstruction tasks arising in medical imaging and explore its effectiveness at reducing the required training data in a variety of settings.
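One way such a pipeline can be organized, sketched here with a hypothetical forward operator rather than the paper's exact procedure: augment the clean target image, then re-simulate the measurement from the augmented target so the training pair stays physically consistent.

```python
import numpy as np

def random_flip(image, rng):
    """Illustrative augmentation: random horizontal/vertical flips."""
    if rng.random() < 0.5:
        image = image[:, ::-1]
    if rng.random() < 0.5:
        image = image[::-1, :]
    return image

def augment_pair(target, forward_op, rng):
    """Augment the clean target, then re-simulate the measurement so the
    (measurement, target) pair remains consistent. `forward_op` is a
    hypothetical stand-in for the imaging model (e.g., subsampled FFT in MRI)."""
    aug_target = random_flip(target, rng)
    return forward_op(aug_target), aug_target

# Toy forward operator: 4x row-subsampled Fourier measurements (illustrative only)
def toy_forward(img):
    kspace = np.fft.fft2(img)
    mask = np.zeros_like(kspace, dtype=bool)
    mask[::4, :] = True          # keep every 4th row of k-space
    return kspace * mask

rng = np.random.default_rng(0)
meas, tgt = augment_pair(np.random.rand(64, 64), toy_forward, rng)
```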
2 code implementations • NeurIPS 2020 • Seyed Mohammadreza Mousavi Kalan, Zalan Fabian, A. Salman Avestimehr, Mahdi Soltanolkotabi
In this approach, a model trained for a source task, where plenty of labeled training data is available, is used as a starting point for training a model on a related target task with only a small amount of labeled training data.
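The approach in a nutshell, as a hedged PyTorch-style sketch with made-up layer sizes: load the feature extractor trained on the source task, freeze it, and fit only a small head on the scarce target data.

```python
import torch
import torch.nn as nn

# Toy model: a feature extractor plus a task-specific head.
# Sizes and architecture are illustrative, not from the paper.
class SmallNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

source_model = SmallNet(num_classes=10)   # assume this was trained on the source task
target_model = SmallNet(num_classes=3)    # related target task with few labels
target_model.features.load_state_dict(source_model.features.state_dict())

# Freeze the transferred features; only the new head is trained.
for p in target_model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(target_model.head.parameters(), lr=1e-2)
```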
no code implementations • 25 Sep 2019 • Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi
We show that, over the information space, learning is fast and one can quickly train a model with zero training loss that also generalizes well.