1 code implementation • 26 Feb 2024 • Saeed Khorram, Mingqi Jiang, Mohamad Shahbazi, Mohamad H. Danesh, Li Fuxin
In the presence of imbalanced multi-class training data, GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.
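The head-class bias described above stems from the long-tailed label distribution itself. A minimal sketch (not the paper's method; the class counts are hypothetical) showing how inverse-frequency sampling weights, a common baseline remedy, upweight tail classes:

```python
from collections import Counter

# Hypothetical long-tailed label distribution: head, mid, and tail classes.
labels = [0] * 100 + [1] * 10 + [2] * 2

counts = Counter(labels)
n = len(labels)

# Weight each class inversely to its frequency so that, in expectation,
# every class contributes equally to a resampled training batch.
weights = {c: n / (len(counts) * cnt) for c, cnt in counts.items()}

print(counts)   # per-class sample counts
print(weights)  # per-class sampling weights (tail class gets the largest)
```

Under such weighted resampling the tail class is drawn far more often than its raw frequency would allow, which is one standard way to keep a class-conditional generator from collapsing on head classes.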
1 code implementation • 5 Feb 2024 • Shengyi Huang, Quentin Gallouédec, Florian Felten, Antonin Raffin, Rousslan Fernand Julien Dossa, Yanxiao Zhao, Ryan Sullivan, Viktor Makoviychuk, Denys Makoviichuk, Mohamad H. Danesh, Cyril Roumégous, Jiayi Weng, Chufan Chen, Md Masudur Rahman, João G. M. Araújo, Guorui Quan, Daniel Tan, Timo Klein, Rujikorn Charakorn, Mark Towers, Yann Berthelot, Kinal Mehta, Dipam Chakraborty, Arjun KG, Valentin Charraut, Chang Ye, Zichen Liu, Lucas N. Alegre, Alexander Nikulin, Xiao Hu, Tianlin Liu, Jongwook Choi, Brent Yi
As a result, it is usually necessary to reproduce the experiments from scratch, which can be time-consuming and error-prone.
1 code implementation • 11 Jul 2023 • Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes.
1 code implementation • 23 Sep 2022 • Mohamad H. Danesh, Panpan Cai, David Hsu
To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning.
no code implementations • 1 May 2021 • Saeed Khorram, Xiao Fu, Mohamad H. Danesh, Zhongang Qi, Li Fuxin
We prove the convergence of our proposed method and justify its capabilities through experiments in supervised and weakly-supervised settings.
no code implementations • 2 Nov 2020 • Mohamad H. Danesh
Training a neural network (NN) depends on multiple factors, including but not limited to the initial weights.
1 code implementation • 6 Jun 2020 • Mohamad H. Danesh, Anurag Koul, Alan Fern, Saeed Khorram
We introduce an approach for understanding control policies represented as recurrent neural networks.