no code implementations • 21 Oct 2023 • Mohammadreza Salehi, Mehrdad Farajtabar, Maxwell Horton, Fartash Faghri, Hadi Pouransari, Raviteja Vemulapalli, Oncel Tuzel, Ali Farhadi, Mohammad Rastegari, Sachin Mehta
While CLIP is scalable, promptable, and robust to distribution shifts on image classification tasks, it lacks object localization capabilities.
no code implementations • 18 Oct 2023 • Mohammadreza Salehi, Sachin Mehta, Aditya Kusupati, Ali Farhadi, Hannaneh Hajishirzi
We introduce SHARCS, an adaptive-inference method that accounts for the hardness of input samples.
1 code implementation • ICCV 2023 • Mohammadreza Salehi, Efstratios Gavves, Cees G. M. Snoek, Yuki M. Asano
Our paper aims to address this gap by proposing a novel approach that incorporates temporal consistency in dense self-supervised learning.
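The temporal-consistency idea above can be illustrated with a generic dense objective: patch features from consecutive frames are pulled toward their best-matching counterparts. This is a minimal, hedged sketch in that spirit, not the paper's actual loss; the function name and matching scheme are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(feat_t, feat_tp1):
    """Illustrative dense temporal-consistency loss (not the paper's exact
    formulation): each patch feature at time t is encouraged to be similar
    to its best-matching patch at time t+1."""
    # feat_t, feat_tp1: (num_patches, dim) dense features of two frames.
    a = F.normalize(feat_t, dim=-1)
    b = F.normalize(feat_tp1, dim=-1)
    sim = a @ b.t()                       # (P, P) patch-to-patch cosine similarity
    # Maximize the best match per patch, i.e. minimize its negation.
    return -sim.max(dim=1).values.mean()

loss = temporal_consistency_loss(torch.randn(8, 32), torch.randn(8, 32))
```

Because the similarities are cosine values, the loss is bounded in [-1, 1]; in practice such an objective is applied on top of a shared frame encoder.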
1 code implementation • 25 Oct 2022 • Ali Garjani, Atoosa Malemir Chegini, Mohammadreza Salehi, Alireza Tabibzadeh, Parastoo Yousefi, Mohammad Hossein Razizadeh, Moein Esghaei, Maryam Esghaei, Mohammad Hossein Rohban
This encourages the model to learn a representation shared across normal training samples, improving the discernibility and detectability of mutated samples relative to unmutated ones at test time.
1 code implementation • 9 Jun 2022 • Sina Taslimi, Soroush Taslimi, Nima Fathi, Mohammadreza Salehi, Mohammad Hossein Rohban
Our model has been tested with several numbers of MLP layers in the head, each achieving a competitive AUC score on all classes.
1 code implementation • 28 May 2022 • Hossein Mirzaei, Mohammadreza Salehi, Sajjad Shahabi, Efstratios Gavves, Cees G. M. Snoek, Mohammad Sabokrou, Mohammad Hossein Rohban
The effectiveness of our method for both near-distribution and standard novelty detection is assessed through extensive experiments on datasets from diverse applications such as medical imaging, object classification, and quality control.
Ranked #2 on Anomaly Detection on One-class CIFAR-10 (using extra training data)
1 code implementation • 24 May 2022 • Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task.
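The mechanism described above can be sketched as a small module: an attention network scores a set of frozen source prompts against an instance representation and interpolates them with a newly initialized target prompt. This is a hedged illustration of the idea, not the authors' implementation; all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentionalPromptMixture(nn.Module):
    """Illustrative ATTEMPT-style prompt mixing: per-instance attention
    weights form a convex combination of source prompts and a target
    prompt. Not the paper's code."""

    def __init__(self, num_source, prompt_len, dim):
        super().__init__()
        # Frozen prompts assumed pretrained on large-scale source tasks.
        self.source_prompts = nn.Parameter(
            torch.randn(num_source, prompt_len, dim), requires_grad=False)
        # Newly initialized target prompt, trained on the target task.
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, dim))
        self.attn = nn.Linear(dim, num_source + 1)  # one score per prompt

    def forward(self, instance_repr):
        # instance_repr: (batch, dim) pooled representation of the input.
        weights = self.attn(instance_repr).softmax(dim=-1)      # (B, S+1)
        prompts = torch.cat(
            [self.source_prompts, self.target_prompt.unsqueeze(0)],
            dim=0)                                              # (S+1, L, D)
        # Per-instance convex combination of all prompts.
        return torch.einsum('bs,sld->bld', weights, prompts)

mix = AttentionalPromptMixture(num_source=6, prompt_len=10, dim=32)
out = mix(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 10, 32])
```

The interpolated prompt would then be prepended to the input embeddings of a frozen language model, so only the attention module and target prompt carry trainable parameters.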
no code implementations • CVPR 2022 • Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, Yejin Choi
Given a video, we replace snippets of text and audio with a MASK token; the model learns by choosing the correct masked-out snippet.
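The masked-snippet objective above amounts to a selection task: the representation at a MASK position is scored against candidate snippet encodings, and the model is trained to pick the snippet that was actually masked out. The following is a hedged, self-contained sketch of such a loss; the function, temperature, and argument names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def masked_snippet_loss(mask_repr, candidate_reprs, target_idx):
    """Illustrative masked-snippet selection loss: cosine-similarity
    logits between the MASK-position encoding and each candidate
    snippet, trained with cross-entropy on the true snippet's index.

    mask_repr:       (batch, dim)    encoding at the MASK position
    candidate_reprs: (batch, n, dim) encodings of candidate snippets
    target_idx:      (batch,)        index of the masked-out snippet
    """
    context = F.normalize(mask_repr, dim=-1).unsqueeze(1)   # (B, 1, D)
    cands = F.normalize(candidate_reprs, dim=-1)            # (B, N, D)
    logits = (context * cands).sum(-1) / 0.07               # (B, N), temperature assumed
    return F.cross_entropy(logits, target_idx)

loss = masked_snippet_loss(torch.randn(2, 16),
                           torch.randn(2, 5, 16),
                           torch.tensor([3, 0]))
```

Framing the prediction as choosing among encoded candidates, rather than generating raw audio or text, keeps the objective contrastive and cheap to evaluate.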
Ranked #6 on Action Classification on Kinetics-600 (using extra training data)
1 code implementation • 26 Oct 2021 • Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, Mohammad Sabokrou
To date, several research domains tackle the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection.
no code implementations • 18 Mar 2021 • Masoud Pourreza, Mohammadreza Salehi, Mohammad Sabokrou
Video anomaly detection has proved to be a challenging task owing to its unsupervised training procedure and the high spatio-temporal complexity of real-world scenarios.
3 code implementations • CVPR 2021 • Mohammadreza Salehi, Niousha Sadjadi, Soroosh Baselizadeh, Mohammad Hossein Rohban, Hamid R. Rabiee
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images.
1 code implementation • 29 Aug 2020 • Mohammadreza Salehi, Ainaz Eftekhar, Niousha Sadjadi, Mohammad Hossein Rohban, Hamid R. Rabiee
Puzzle-solving, as a pretext task of self-supervised learning (SSL) methods, has previously proven its ability to learn semantically meaningful features.
no code implementations • ACL 2020 • Amirhossein Kazemnejad, Mohammadreza Salehi, Mahdieh Soleymani Baghshah
With its novel editor module, the model then paraphrases the input sequence by editing it according to the relations extracted from the retrieved pair of sentences.
1 code implementation • 12 Mar 2020 • Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi, Amirreza Shaeiri, Mohammad Hossein Rohban, Hamid R. Rabiee
To address this problem, we propose a novel AE that can learn more semantically meaningful features.