Search Results for author: Mohamad H. Danesh

Found 7 papers, 5 papers with code

Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions

1 code implementation • 26 Feb 2024 • Saeed Khorram, Mingqi Jiang, Mohamad Shahbazi, Mohamad H. Danesh, Li Fuxin

In the presence of imbalanced multi-class training data, GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.

Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning

1 code implementation • 11 Jul 2023 • Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren

Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes.

LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty

1 code implementation • 23 Sep 2022 • Mohamad H. Danesh, Panpan Cai, David Hsu

To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning.

Autonomous Driving

Stochastic Block-ADMM for Training Deep Networks

no code implementations • 1 May 2021 • Saeed Khorram, Xiao Fu, Mohamad H. Danesh, Zhongang Qi, Li Fuxin

We prove the convergence of our proposed method and justify its capabilities through experiments in supervised and weakly-supervised settings.

Reducing Neural Network Parameter Initialization Into an SMT Problem

no code implementations • 2 Nov 2020 • Mohamad H. Danesh

Training a neural network (NN) depends on multiple factors, including but not limited to the initial weights.

Re-understanding Finite-State Representations of Recurrent Policy Networks

1 code implementation • 6 Jun 2020 • Mohamad H. Danesh, Anurag Koul, Alan Fern, Saeed Khorram

We introduce an approach for understanding control policies represented as recurrent neural networks.

Atari Games
