Search Results for author: Tal Kachman

Found 14 papers, 5 papers with code

Using Cooperative Game Theory to Prune Neural Networks

no code implementations17 Nov 2023 Mauricio Diaz-Ortiz Jr, Benjamin Kempinski, Daphne Cornelisse, Yoram Bachrach, Tal Kachman

We show how solution concepts from cooperative game theory can be used to tackle the problem of pruning neural networks.
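
As general background on this line of work, the sketch below estimates a Monte Carlo Shapley-style importance score for each hidden unit and prunes the lowest-scoring ones. The coalition value function, unit count, and sampling budget are hypothetical placeholders; this is a minimal illustration of using a cooperative-game solution concept for pruning, not the paper's algorithm.

```python
# Minimal sketch (not the paper's method): Monte Carlo Shapley-style scores for
# hidden units, followed by pruning of the lowest-scoring half.
import numpy as np

rng = np.random.default_rng(0)
n_units = 8

def coalition_value(active):
    # Placeholder characteristic function: value of a coalition of units.
    # In practice this could be, e.g., validation accuracy with only `active` units kept.
    weights = np.array([0.05, 0.30, 0.02, 0.20, 0.01, 0.25, 0.03, 0.14])
    return float(weights[active].sum())

def monte_carlo_shapley(n, value_fn, n_permutations=200):
    shapley = np.zeros(n)
    for _ in range(n_permutations):
        perm = rng.permutation(n)
        prev, members = 0.0, []
        for unit in perm:
            members.append(unit)
            cur = value_fn(np.array(members))
            shapley[unit] += cur - prev   # marginal contribution of `unit`
            prev = cur
    return shapley / n_permutations

scores = monte_carlo_shapley(n_units, coalition_value)
keep = np.argsort(scores)[n_units // 2:]      # prune the lower half by Shapley score
print("Shapley scores:", np.round(scores, 3))
print("units kept:", sorted(keep.tolist()))
```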

Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees

2 code implementations18 Sep 2023 Alexia Jolicoeur-Martineau, Kilian Fatras, Tal Kachman

Through empirical evaluation across benchmark datasets, we demonstrate that our approach outperforms deep-learning generation methods in data generation tasks and remains competitive in data imputation.

Imputation

Explainability Techniques for Chemical Language Models

1 code implementation25 May 2023 Stefan Hödl, William Robinson, Yoram Bachrach, Wilhelm Huck, Tal Kachman

Explainability techniques are crucial for gaining insight into the reasons behind the predictions of deep learning models, but they have not yet been applied to chemical language models.

Diffusion models with location-scale noise

no code implementations12 Apr 2023 Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman

Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it.
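
For context, the standard Gaussian forward process and noise-prediction objective that a location-scale variant would generalize can be written in DDPM-style notation (standard background, not this paper's formulation):

```latex
% Standard Gaussian diffusion (DDPM-style) forward process and training loss.
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1 - \bar\alpha_t)\, I\right),
\qquad
\mathcal{L}(\theta) = \mathbb{E}_{x_0,\ \epsilon \sim \mathcal{N}(0, I),\ t}
\left[ \bigl\| \epsilon - \epsilon_\theta\bigl(\sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon,\ t\bigr) \bigr\|^2 \right]
```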

Charting the Topography of the Neural Network Landscape with Thermal-Like Noise

no code implementations3 Apr 2023 Theo Jules, Gal Brener, Tal Kachman, Noam Levi, Yohai Bar-Sinai

The training of neural networks is a complex, high-dimensional, non-convex, and noisy optimization problem whose theoretical understanding is of interest both from an applied perspective and for fundamental reasons.

Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members

no code implementations18 Aug 2022 Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman

Cooperative game theory offers solution concepts that identify payoff distribution schemes, such as the Shapley value, which fairly reflects each individual's contribution to the team's performance, or the Core, which reduces the incentive of agents to abandon their team.
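
The two solution concepts named in the abstract have standard textbook definitions; for a characteristic-function game $(N, v)$:

```latex
% Shapley value of player i: average marginal contribution over all orderings.
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr)

% The Core: payoff vectors no coalition can profitably deviate from.
\mathrm{Core}(v) = \Bigl\{ x \in \mathbb{R}^{N} \ \Bigm|\
  \textstyle\sum_{i \in N} x_i = v(N), \quad
  \sum_{i \in S} x_i \ge v(S) \ \ \forall S \subseteq N \Bigr\}
```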

Lyapunov Exponents for Diversity in Differentiable Games

no code implementations24 Dec 2021 Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob Foerster

We generalize this idea to non-conservative, multi-agent gradient systems by proposing a method - denoted Generalized Ridge Rider (GRR) - for finding arbitrary bifurcation points.
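
As a toy illustration of the kind of object such methods operate on (invented here, not the paper's GRR algorithm), the joint gradient field of a simple two-player zero-sum game is non-conservative, and the eigenstructure of its Jacobian characterizes the local dynamics around a fixed point:

```python
# Hypothetical toy example: for the zero-sum game f(x, y) = x * y, simultaneous
# gradient descent/ascent follows the vector field v(x, y) = (-y, x), which is
# not the gradient of any single potential (non-conservative).
import numpy as np

def game_field(z):
    x, y = z
    return np.array([-y, x])          # player 1 descends on f, player 2 ascends

def game_jacobian(z, eps=1e-6):
    # Finite-difference Jacobian of the joint vector field at z.
    n = len(z)
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (game_field(z + dz) - game_field(z - dz)) / (2 * eps)
    return J

J = game_jacobian(np.zeros(2))
eigvals, eigvecs = np.linalg.eig(J)
# Purely imaginary eigenvalues indicate rotational dynamics around the fixed point;
# Ridge-Rider-style methods branch the search along such eigendirections.
print("Jacobian:\n", np.round(J, 3))
print("eigenvalues:", np.round(eigvals, 3))
```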

Gradients are Not All You Need

1 code implementation10 Nov 2021 Luke Metz, C. Daniel Freeman, Samuel S. Schoenholz, Tal Kachman

Differentiable programming techniques are widely used in the community and are responsible for the machine learning renaissance of the past several decades.

Gotta Go Fast with Score-Based Generative Models

no code implementations NeurIPS Workshop DLDE 2021 Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas

Score-based (denoising diffusion) generative models have recently achieved considerable success in generating realistic and diverse data.

Denoising

Gotta Go Fast When Generating Data with Score-Based Models

1 code implementation28 May 2021 Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas

For high-resolution images, our method leads to significantly higher quality samples than all other methods tested.

Ranked #8 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation

Novel Uncertainty Framework for Deep Learning Ensembles

no code implementations9 Apr 2019 Tal Kachman, Michal Moshkovitz, Michal Rosen-Zvi

Deep neural networks have become the default choice for many machine learning tasks, such as classification and regression.

BIG-bench Machine Learning, Gaussian Processes +2

Computer vision-based recognition of liquid surfaces and phase boundaries in transparent vessels, with emphasis on chemistry applications

no code implementations28 Apr 2014 Sagi Eppel, Tal Kachman

The method then compares each curve to the image to rate its correspondence with the outline of the real liquid surface by examining various image properties in the area surrounding each point of the curve.
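
A minimal sketch of this general idea, with all details (synthetic image, window size, contrast-based score) invented here rather than taken from the paper: a candidate boundary curve is rated by how strongly image intensity changes across a small neighbourhood of each of its points.

```python
# Hypothetical sketch (not the paper's exact scoring function): rate a candidate
# liquid-surface curve by the mean vertical contrast across it.
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[32:, :] = 1.0                        # synthetic "liquid" filling the lower half
img += 0.05 * rng.standard_normal(img.shape)

def curve_score(img, ys, xs, half_window=2):
    """Mean |intensity above - intensity below| sampled around each curve point."""
    scores = []
    for y, x in zip(ys, xs):
        above = img[max(y - half_window, 0):y, x]
        below = img[y + 1:y + 1 + half_window, x]
        if len(above) and len(below):
            scores.append(abs(above.mean() - below.mean()))
    return float(np.mean(scores))

xs = np.arange(64)
on_boundary = np.full(64, 32)            # candidate lying on the real boundary
off_boundary = np.full(64, 20)           # candidate away from the boundary
print("on boundary :", round(curve_score(img, on_boundary, xs), 3))
print("off boundary:", round(curve_score(img, off_boundary, xs), 3))
```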
