no code implementations • 17 Nov 2023 • Mauricio Diaz-Ortiz Jr, Benjamin Kempinski, Daphne Cornelisse, Yoram Bachrach, Tal Kachman
We show how solution concepts from cooperative game theory can be used to tackle the problem of pruning neural networks.
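As a rough illustration of how a cooperative-game view of pruning can work (a sketch, not the authors' algorithm), the snippet below treats neurons as players, estimates their Shapley values by Monte Carlo sampling over permutations, and prunes the least valuable ones; the characteristic function here is an invented stand-in for, e.g., validation accuracy with only the coalition's neurons active:

```python
import random

# Toy characteristic function: the "performance" a coalition of neurons
# achieves when kept. In practice this would be the network's validation
# accuracy with only the coalition active (hypothetical stand-in here).
NEURON_WORTH = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}

def coalition_value(coalition):
    return sum(NEURON_WORTH[i] for i in coalition)

def monte_carlo_shapley(players, value_fn, n_permutations=2000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over random orderings of the players."""
    rng = random.Random(seed)
    players = list(players)
    shapley = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        rng.shuffle(players)
        coalition, prev = [], 0.0
        for p in players:
            coalition.append(p)
            curr = value_fn(coalition)
            shapley[p] += curr - prev  # marginal contribution of p
            prev = curr
    return {p: s / n_permutations for p, s in shapley.items()}

values = monte_carlo_shapley(NEURON_WORTH, coalition_value)
to_prune = sorted(values, key=values.get)[:2]  # drop least valuable neurons
print(values, "-> prune:", to_prune)
```

For this additive toy game the estimates recover each neuron's individual worth exactly; with a real network the value function is the expensive part, which is why sampling-based approximations like this are used.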
2 code implementations • 18 Sep 2023 • Alexia Jolicoeur-Martineau, Kilian Fatras, Tal Kachman
Through empirical evaluation across benchmarks, we demonstrate that our approach outperforms deep-learning generation methods on data generation tasks and remains competitive on data imputation.
1 code implementation • 25 May 2023 • Stefan Hödl, William Robinson, Yoram Bachrach, Wilhelm Huck, Tal Kachman
Explainability techniques are crucial for gaining insight into the reasons behind the predictions of deep learning models, but they have not yet been applied to chemical language models.
no code implementations • 12 Apr 2023 • Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman
Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it.
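As generic background on that setup (not this paper's contribution), here is a minimal sketch of the DDPM-style forward noising step and the noise-prediction target a denoiser is regressed onto; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar_t):
    """Sample q(x_t | x_0): scale the clean data and add Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return x_t, eps

# The denoiser eps_hat(x_t, t) is trained to minimize ||eps_hat - eps||^2,
# i.e. to predict (and thereby learn to remove) the injected noise.
x0 = rng.standard_normal((4, 8))          # a toy data batch
x_t, eps = forward_noise(x0, alpha_bar_t=0.7)
print(x_t.shape, eps.shape)
```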
no code implementations • 3 Apr 2023 • Theo Jules, Gal Brener, Tal Kachman, Noam Levi, Yohai Bar-Sinai
The training of neural networks is a complex, high-dimensional, non-convex, and noisy optimization problem whose theoretical understanding is interesting both from an applied perspective and for fundamental reasons.
1 code implementation • 2 Oct 2022 • Yannick Hogewind, Thiago D. Simão, Tal Kachman, Nils Jansen
We address the problem of safe reinforcement learning from pixel observations.
no code implementations • 18 Aug 2022 • Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman
Cooperative game theory offers solution concepts that identify payoff distribution schemes, such as the Shapley value, which fairly reflects each individual's contribution to the team's performance, or the Core, which reduces the incentive of agents to abandon their team.
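To make the Shapley value concrete, here is a small exact computation for a hypothetical three-agent game (the payoffs are invented for illustration):

```python
from itertools import combinations
from math import factorial

# A toy 3-agent cooperative game: v[S] is the team payoff of coalition S.
v = {(): 0, (1,): 1, (2,): 1, (3,): 2,
     (1, 2): 3, (1, 3): 4, (2, 3): 4, (1, 2, 3): 6}

def shapley(player, players=(1, 2, 3)):
    """Exact Shapley value: weighted average of marginal contributions."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            with_p = tuple(sorted(S + (player,)))
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v[with_p] - v[S])
    return total

print({p: shapley(p) for p in (1, 2, 3)})
```

The resulting split (5/3, 5/3, 8/3) sums to v((1, 2, 3)) = 6, so the full team value is distributed, and each agent's share reflects its average marginal contribution.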
no code implementations • 24 Dec 2021 • Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob Foerster
We generalize this idea to non-conservative, multi-agent gradient systems by proposing a method - denoted Generalized Ridge Rider (GRR) - for finding arbitrary bifurcation points.
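Ridge Rider, which GRR generalizes, branches the search along eigenvectors of the Hessian at a critical point. The sketch below shows only that underlying eigendecomposition step for a gradient field, not the GRR procedure itself; the toy field and all names are hypothetical. For a non-conservative system the Jacobian of the gradient field is non-symmetric, so its eigenvalues can be complex:

```python
import numpy as np

def grad_jacobian(grad_fn, x, eps=1e-5):
    """Finite-difference Jacobian of a gradient field (the Hessian when
    the system is conservative; non-symmetric otherwise)."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (grad_fn(x + dx) - grad_fn(x - dx)) / (2 * eps)
    return J

def branch_directions(grad_fn, x):
    """Eigenpairs of the local Jacobian; each eigenvector is a candidate
    'ridge' to follow away from the critical point."""
    return np.linalg.eig(grad_jacobian(grad_fn, x))

# Toy non-conservative two-player gradient field with a rotational part.
grad = lambda x: np.array([2 * x[0] + x[1], -x[0] + 2 * x[1]])
eigvals, eigvecs = branch_directions(grad, np.zeros(2))
print(eigvals)  # complex eigenvalues signal the non-conservative dynamics
```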
1 code implementation • 10 Nov 2021 • Luke Metz, C. Daniel Freeman, Samuel S. Schoenholz, Tal Kachman
Differentiable programming techniques are widely used in the community and are responsible for the machine learning renaissance of the past several decades.
no code implementations • NeurIPS Workshop DLDE 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
Score-based (denoising diffusion) generative models have recently achieved considerable success in generating realistic and diverse data.
1 code implementation • 28 May 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
For high-resolution images, our method leads to significantly higher quality samples than all other methods tested.
Ranked #8 on Image Generation on CIFAR-10 (Inception score metric)
no code implementations • 9 Apr 2019 • Tal Kachman, Michal Moshkovitz, Michal Rosen-Zvi
Deep neural networks have become the default choice for many machine learning tasks, such as classification and regression.
no code implementations • 30 May 2018 • Vadim Ratner, Yoel Shoshan, Tal Kachman
Medical image classification involves thresholding of labels that represent malignancy risk levels.
no code implementations • 28 Apr 2014 • Sagi Eppel, Tal Kachman
The method then compares each candidate curve to the image, rating its correspondence with the outline of the real liquid surface by examining various image properties in the area surrounding each point of the curve.
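A conceptual stand-in for such a correspondence rating (not the paper's actual scoring, which examines several image properties) might measure edge strength in a small window around each curve point:

```python
import numpy as np

def curve_score(image, curve_pts, radius=2):
    """Average the strongest gradient magnitude found in a small window
    around each (row, col) point of a candidate curve."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    h, w = image.shape
    scores = []
    for y, x in curve_pts:
        window = grad_mag[max(0, y - radius):min(h, y + radius + 1),
                          max(0, x - radius):min(w, x + radius + 1)]
        scores.append(window.max())
    return float(np.mean(scores))  # higher = better fit to a real boundary

img = np.zeros((20, 20))
img[10:, :] = 1.0                       # toy image: horizontal boundary
on_edge = [(10, x) for x in range(20)]  # candidate curve on the boundary
off_edge = [(4, x) for x in range(20)]  # candidate curve in a flat region
print(curve_score(img, on_edge), ">", curve_score(img, off_edge))
```

The curve lying on the boundary scores higher than the one in the flat region, which is the behavior a candidate-ranking step of this kind relies on.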