Search Results for author: Mohammad Rostami

Found 49 papers, 16 papers with code

An Intermediate Fusion ViT Enables Efficient Text-Image Alignment in Diffusion Models

no code implementations 25 Mar 2024 Zizhao Hu, Shaochong Jia, Mohammad Rostami

Diffusion models have been widely used for conditional cross-modal data generation tasks such as text-to-image and text-to-video.

Text-to-Image Generation

Cross-domain Multi-modal Few-shot Object Detection via Rich Text

1 code implementation 24 Mar 2024 Zeyu Shangguan, Daniel Seita, Mohammad Rostami

Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks by generating richer features.

Cross-Domain Few-Shot Domain Adaptation +3

CRISPR: Ensemble Model

no code implementations 5 Mar 2024 Mohammad Rostami, Amin Ghariyazi, Hamed Dashti, Mohammad Hossein Rohban, Hamid R. Rabiee

Most existing methods are trained on separate datasets with different genes and cells, which limits their generalizability.

Ensemble Learning Specificity

Meta-Tasks: An alternative view on Meta-Learning Regularization

no code implementations 27 Feb 2024 Mohammad Rostami, Atik Faysal, Huaxia Wang, Avimanyu Sahoo, Ryan Antle

Generalizing effectively to both novel and training tasks remains a significant challenge in FSL.

Few-Shot Learning

Continuous Unsupervised Domain Adaptation Using Stabilized Representations and Experience Replay

1 code implementation 31 Jan 2024 Mohammad Rostami

Our solution is based on stabilizing the learned internal distribution to enhance the model's generalization on new domains.

Continual Learning Unsupervised Domain Adaptation

Dynamic Transformer Architecture for Continual Learning of Multimodal Tasks

no code implementations 27 Jan 2024 Yuliang Cai, Mohammad Rostami

We propose a transformer-based CL framework focusing on learning tasks that involve both vision and language, known as Vision-and-Language (VaL) tasks.

Continual Learning Edge-computing +1

Unsupervised Domain Adaptation Using Compact Internal Representations

no code implementations 14 Jan 2024 Mohammad Rostami

To further enhance the performance of unsupervised domain adaptation (UDA), we develop an additional technique which makes the internal distribution of the source domain more compact, thereby improving the model's ability to generalize in the target domain. We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.

Unsupervised Domain Adaptation
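
The margin idea above can be made concrete with a small loss term. What follows is a minimal sketch, not the paper's code: it pulls embeddings toward their class centers and pushes the centers apart up to a hypothetical margin; the margin value and the assumption that every class appears in the batch are illustrative choices.

    import torch

    def compact_margin_loss(z, y, num_classes, margin=5.0):
        """z: (N, d) embeddings; y: (N,) integer labels.

        Assumes every class appears in the batch; the margin is illustrative."""
        centers = torch.stack([z[y == c].mean(dim=0) for c in range(num_classes)])
        # Compactness: pull each embedding toward its class center.
        compact = ((z - centers[y]) ** 2).sum(dim=1).mean()
        # Separation: penalize center pairs that are closer than the margin.
        dists = torch.cdist(centers, centers)
        off_diag = ~torch.eye(num_classes, dtype=torch.bool)
        separation = torch.relu(margin - dists[off_diag]).mean()
        return compact + separation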

Relating Events and Frames Based on Self-Supervised Learning and Uncorrelated Conditioning for Unsupervised Domain Adaptation

no code implementations 2 Jan 2024 Mohammad Rostami, Dayuan Jian

By applying self-supervised learning, the algorithm learns to align the representations of event-based data with those from frame-based camera data, thereby facilitating knowledge transfer. Furthermore, the inclusion of uncorrelated conditioning ensures that the adapted model effectively distinguishes between event-based and conventional data, enhancing its ability to classify event-based images accurately. Through empirical experimentation and evaluation, we demonstrate that our algorithm surpasses existing approaches designed for the same purpose using two benchmarks.

Event-based vision Self-Supervised Learning +1
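
One way to read "uncorrelated conditioning" is as a penalty that drives the cross-covariance between two representation blocks toward zero, so the conditioning features carry information the main features do not. A minimal sketch of such a penalty, an assumed formulation rather than the authors' exact one:

    import torch

    def decorrelation_penalty(f, g):
        """f: (N, d1), g: (N, d2) feature batches from the two branches."""
        f = f - f.mean(dim=0, keepdim=True)   # center each feature block
        g = g - g.mean(dim=0, keepdim=True)
        cov = f.T @ g / (f.shape[0] - 1)      # (d1, d2) cross-covariance
        return (cov ** 2).sum()               # squared Frobenius norm -> 0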

Unsupervised Federated Domain Adaptation for Segmentation of MRI Images

no code implementations 2 Jan 2024 Navapat Nananukul, Hamid Soltanian-Zadeh, Mohammad Rostami

Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for effective use in an unannotated target domain.

Domain Adaptation Semantic Segmentation

Online Continual Domain Adaptation for Semantic Image Segmentation Using Internal Representations

1 code implementation 2 Jan 2024 Serban Stan, Mohammad Rostami

Semantic segmentation models trained on annotated data fail to generalize well when the input data distribution changes over an extended time period, requiring re-training to maintain performance.

Image Segmentation Segmentation +2

Efficient Multimodal Diffusion Models Using Joint Data Infilling with Partially Shared U-Net

no code implementations 28 Nov 2023 Zizhao Hu, Shaochong Jia, Mohammad Rostami

Recently, diffusion models have been used successfully to fit distributions for cross-modal data translation and multimodal data generation.

Image Inpainting

Robust Internal Representations for Domain Generalization

no code implementations 27 Sep 2023 Mohammad Rostami

This paper, part of the New Faculty Highlights Invited Speaker Program of AAAI'23, serves as a comprehensive survey of my research on transfer learning using embedding spaces.

Continual Learning Domain Generalization +3

Improved Region Proposal Network for Enhanced Few-Shot Object Detection

1 code implementation 15 Aug 2023 Zeyu Shangguan, Mohammad Rostami

Specifically, we develop a hierarchical ternary classification region proposal network (HTRPN) to localize the potential unlabeled novel objects and assign them new objectness labels to distinguish these objects from the base training dataset classes.

Few-Shot Object Detection Object +3

Cognitively Inspired Cross-Modal Data Generation Using Diffusion Models

no code implementations 28 May 2023 Zizhao Hu, Mohammad Rostami

Most existing cross-modal generative methods based on diffusion models use guidance to provide control over the latent space to enable conditional generation across different modalities.

Low-Shot Learning for Fictional Claim Verification

1 code implementation 5 Apr 2023 Viswanath Chadalapaka, Derek Nguyen, Joonwon Choi, Shaunak Joshi, Mohammad Rostami

In this paper, we study the problem of claim verification in the context of claims about fictional stories in a low-shot learning setting.

Claim Verification

I2I: Initializing Adapters with Improvised Knowledge

1 code implementation 4 Apr 2023 Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason

We propose Improvise to Initialize (I2I), a continual learning algorithm that initializes Adapters for incoming tasks by distilling knowledge from previously-learned tasks' Adapters.

Continual Learning Question Answering +2
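
A minimal sketch of the distillation step, with assumed interfaces (the Adapter module, optimizer settings, and teacher averaging are illustrative, not the released I2I code): before regular training begins, the fresh adapter is fit to mimic the mean output of previously learned adapters on the incoming task's inputs.

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        def __init__(self, dim, bottleneck=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                                     nn.Linear(bottleneck, dim))

        def forward(self, h):
            return h + self.net(h)  # residual adapter

    def improvise_initialization(new_adapter, old_adapters, hidden_states, steps=100):
        opt = torch.optim.Adam(new_adapter.parameters(), lr=1e-3)
        with torch.no_grad():  # ensemble of earlier adapters as the teacher
            target = torch.stack([a(hidden_states) for a in old_adapters]).mean(dim=0)
        for _ in range(steps):
            loss = nn.functional.mse_loss(new_adapter(hidden_states), target)
            opt.zero_grad()
            loss.backward()
            opt.step()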

Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation

no code implementations 25 Mar 2023 Yuliang Cai, Jesse Thomason, Mohammad Rostami

The size and the computational load of fine-tuning large-scale pre-trained neural networks are becoming two major obstacles to adopting machine learning in many applications.

Continual Learning Knowledge Distillation +1

Encoding Binary Concepts in the Latent Space of Generative Models for Enhancing Data Representation

1 code implementation 22 Mar 2023 Zizhao Hu, Mohammad Rostami

We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders.

Continual Learning Disentanglement
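
Such a regularizer can be sketched in a couple of lines; the exact form used in the paper may differ, so treat this as an assumed variant: sigmoid-activated latent dimensions are pushed toward {0, 1} by a penalty that vanishes only at those two values.

    import torch

    def binary_concept_penalty(latent_logits):
        z = torch.sigmoid(latent_logits)  # dims reserved for binary concepts
        return (z * (1.0 - z)).mean()     # zero only at z = 0 or z = 1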

Unsupervised Domain Adaptation for Training Event-Based Networks Using Contrastive Learning and Uncorrelated Conditioning

no code implementations ICCV 2023 Dayuan Jian, Mohammad Rostami

Event-based cameras offer reliable measurements for performing computer vision tasks in high-dynamic-range environments and during fast motion maneuvers.

Contrastive Learning Event-based vision +2

Identification of Novel Classes for Improving Few-Shot Object Detection

1 code implementation 18 Mar 2023 Zeyu Shangguan, Mohammad Rostami

Our improved hierarchical sampling strategy for the region proposal network (RPN) also boosts the perception ability of the object detection model for large objects.

Few-Shot Object Detection Object +3

Preserving Fairness in AI under Domain Shift

no code implementations 29 Jan 2023 Serban Stan, Mohammad Rostami

Our algorithm is based on updating the model such that the internal representation of data remains unbiased despite distributional shifts in the input space.

Fairness Unsupervised Domain Adaptation

Unsupervised Model Adaptation for Source-free Segmentation of Medical Images

no code implementations 2 Nov 2022 Serban Stan, Mohammad Rostami

We rely on an approximation of the source latent features at adaptation time, and create a joint source/target embedding space by minimizing a distributional distance metric based on optimal transport.

Image Segmentation Medical Image Segmentation +3
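
One optimal-transport metric that fits this description is the sliced Wasserstein distance; treating it as the paper's exact choice is an assumption. A minimal sketch, comparing samples drawn from the stored source-latent approximation with target embeddings (equal batch sizes assumed):

    import torch

    def sliced_wasserstein(x, y, num_projections=128):
        """x: (N, d) source-like samples; y: (N, d) target embeddings."""
        theta = torch.randn(num_projections, x.shape[1])
        theta = theta / theta.norm(dim=1, keepdim=True)  # random 1-D directions
        px = torch.sort(x @ theta.T, dim=0).values       # sorted projections
        py = torch.sort(y @ theta.T, dim=0).values
        return ((px - py) ** 2).mean()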

Increasing Model Generalizability for Unsupervised Domain Adaptation

no code implementations 29 Sep 2022 Mohammad Rostami

A dominant approach for addressing unsupervised domain adaptation is to map data points for the source and the target domains into an embedding space which is modeled as the output space of a shared deep encoder.

Image Classification Unsupervised Domain Adaptation

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

1 code implementation 18 Jun 2022 Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason

Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks.

Continual Learning Transfer Learning

Cognitively Inspired Learning of Incremental Drifting Concepts

no code implementations 9 Oct 2021 Mohammad Rostami, Aram Galstyan

Humans continually expand their learned knowledge to new domains and learn new concepts without any interference with past learned experiences.

Continual Learning

Detection and Continual Learning of Novel Face Presentation Attacks

no code implementations ICCV 2021 Mohammad Rostami, Leonidas Spinoulas, Mohamed Hussein, Joe Mathai, Wael Abd-Almageed

Advances in deep learning, combined with availability of large datasets, have led to impressive improvements in face presentation attack detection research.

Continual Learning Face Presentation Attack Detection

Domain Adaptation for Sentiment Analysis Using Increased Intraclass Separation

no code implementations 4 Jul 2021 Mohammad Rostami, Aram Galstyan

We introduce a new domain adaptation method which induces large margins between different classes in an embedding space.

Domain Adaptation Marketing +1

Secure Domain Adaptation with Multiple Sources

1 code implementation 23 Jun 2021 Serban Stan, Mohammad Rostami

Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains.

Multi-Source Unsupervised Domain Adaptation Unsupervised Domain Adaptation

Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning

1 code implementation Findings (EMNLP) 2021 Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren

The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence.

Continual Learning Few-Shot Learning +2

Domain Adaptation for the Segmentation of Confidential Medical Images

1 code implementation 2 Jan 2021 Serban Stan, Mohammad Rostami

In this work, we develop an algorithm for UDA where the source domain data is inaccessible during target adaptation.

Image Segmentation Privacy Preserving +3

Learning a Max-Margin Classifier for Cross-Domain Sentiment Analysis

no code implementations 1 Jan 2021 Mohammad Rostami, Aram Galstyan

Large margins in the source domain help to reduce the effect of "domain shift" on the performance of a trained classifier in the target domain.

Domain Adaptation Marketing +1

One-shot Learning for Temporal Knowledge Graphs

no code implementations AKBC 2021 Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan

Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.

Knowledge Graphs Link Prediction +2

Unsupervised Model Adaptation for Continual Semantic Segmentation

1 code implementation 26 Sep 2020 Serban Stan, Mohammad Rostami

We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.

Continual Semantic Segmentation Semantic Segmentation +1

Overcoming Concept Shift in Domain-Aware Settings through Consolidated Internal Distributions

1 code implementation 1 Jul 2020 Mohammad Rostami, Aram Galstyan

We develop an algorithm to improve the performance of a pre-trained model under concept shift without retraining the model from scratch when only unannotated samples of initial concepts are accessible.

Transfer Learning Unsupervised Domain Adaptation

Learning a Domain-Invariant Embedding for Unsupervised Domain Adaptation Using Class-Conditioned Distribution Alignment

no code implementations 4 Jul 2019 Alex Gabourie, Mohammad Rostami, Philip Pope, Soheil Kolouri, Kyungnam Kim

We address the problem of unsupervised domain adaptation (UDA) by learning a cross-domain agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized.

Unsupervised Domain Adaptation

Generative Continual Concept Learning

no code implementations 10 Jun 2019 Mohammad Rostami, Soheil Kolouri, James McClelland, Praveen Pilly

After learning a concept, humans are also able to continually generalize their learned concepts to new domains by observing only a few labeled instances without any interference with the past learned knowledge.

Continual Learning

Zero-Shot Image Classification Using Coupled Dictionary Embedding

no code implementations 10 Jun 2019 Mohammad Rostami, Soheil Kolouri, Zak Murez, Yuri Owechko, Eric Eaton, Kyungnam Kim

Zero-shot learning (ZSL) is a framework to classify images belonging to unseen classes based solely on semantic information about these unseen classes.

Attribute Classification +5
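
A minimal sketch of the coupled-dictionary mechanism under stated assumptions (random placeholder dictionaries, illustrative ISTA settings): visual features and class attributes share one sparse code, x ≈ D_x a and s ≈ D_s a, so the code inferred from an image predicts attributes that can be matched against unseen classes.

    import torch

    def ista_sparse_code(x, D, lam=0.1, steps=50):
        """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA."""
        step = 1.0 / torch.linalg.matrix_norm(D, ord=2) ** 2  # 1/L step size
        a = torch.zeros(D.shape[1])
        for _ in range(steps):
            grad = D.T @ (D @ a - x)
            a = torch.nn.functional.softshrink(a - step * grad, lam * step.item())
        return a

    d_feat, d_attr, k = 512, 85, 128
    D_x, D_s = torch.randn(d_feat, k), torch.randn(d_attr, k)  # stand-ins for jointly learned dictionaries
    x = torch.randn(d_feat)                 # test image feature
    a = ista_sparse_code(x, D_x)            # shared sparse code
    predicted_attributes = D_s @ a          # compare to unseen-class attributes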

Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay

no code implementations 11 Mar 2019 Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly

We sample from this distribution and utilize experience replay to avoid forgetting while simultaneously accumulating new knowledge into the abstract distribution, in order to couple the current task with past experience.
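
A minimal sketch of the replay mechanism, with an assumed single Gaussian per class standing in for the abstract distribution (the paper's parametrization may differ): fit the distribution in the embedding space after each task, then sample pseudo-embeddings of past classes to mix into training on the current task.

    import torch

    class EmbeddingReplay:
        def __init__(self):
            self.stats = {}  # class -> (mean, std) in the embedding space

        def update(self, z, y):
            # Assumes several samples per class so std is well defined.
            for c in y.unique().tolist():
                zc = z[y == c]
                self.stats[c] = (zc.mean(dim=0), zc.std(dim=0) + 1e-6)

        def sample(self, per_class):
            zs, ys = [], []
            for c, (mu, sigma) in self.stats.items():
                zs.append(mu + sigma * torch.randn(per_class, mu.shape[0]))
                ys.append(torch.full((per_class,), c, dtype=torch.long))
            return torch.cat(zs), torch.cat(ys)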

Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer

no code implementations 10 Oct 2017 David Isele, Mohammad Rostami, Eric Eaton

Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer.

BIG-bench Machine Learning Dictionary Learning +2

Multi-Agent Distributed Lifelong Learning for Collective Knowledge Acquisition

no code implementations 15 Sep 2017 Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, Eric Eaton

Lifelong machine learning methods acquire knowledge over a series of consecutive tasks, continually building upon their experience.

Multi-Task Learning

Joint Dictionaries for Zero-Shot Learning

no code implementations 12 Sep 2017 Soheil Kolouri, Mohammad Rostami, Yuri Owechko, Kyungnam Kim

A classic approach toward zero-shot learning (ZSL) is to map the input domain to a set of semantically meaningful attributes that could be used later on to classify unseen classes of data (e.g., visual data).

Attribute Dictionary Learning +1

Image Super-Resolution Based on Sparsity Prior via Smoothed $l_0$ Norm

no code implementations 22 Mar 2016 Mohammad Rostami, Zhou Wang

However, sparse representation of a signal over a known dictionary is an ill-posed, combinatorial optimization problem.

Combinatorial Optimization Image Super-Resolution +1
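
The smoothed l0 surrogate is simple enough to state directly. A minimal sketch (the annealing schedule is illustrative): replace the nonzero count with sum(1 - exp(-x^2 / (2*sigma^2))) and shrink sigma so the surrogate approaches the true l0 norm.

    import torch

    def smoothed_l0(x, sigma):
        # Differentiable surrogate for ||x||_0; exact in the limit sigma -> 0.
        return (1.0 - torch.exp(-x ** 2 / (2.0 * sigma ** 2))).sum()

    x = torch.tensor([0.0, 0.5, 2.0, -3.0])
    for sigma in [1.0, 0.1, 0.01]:  # anneal sigma toward zero
        print(sigma, smoothed_l0(x, sigma).item())  # tends to ||x||_0 = 3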
