Search Results for author: Max Dabagia

Found 8 papers, 5 papers with code

Computation with Sequences in a Model of the Brain

no code implementations • 6 Jun 2023 Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala

Here we show that, in the same model, time can be captured naturally as precedence through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out.

Mathematical Proofs, Memorization, +2
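
To make the precedence-through-plasticity idea concrete, here is a minimal numpy sketch of assembly-calculus-style dynamics; the network size, cap size, and plasticity rate are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch (not the paper's code): a brain area with n neurons
# where, at each step, the k neurons with the highest synaptic input fire
# (k-cap) and Hebbian plasticity scales weights on edges that fired in
# order by (1 + beta). Firing assembly A before assembly B strengthens
# A -> B edges, so temporal precedence is stored in the weights themselves.
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.1          # assumed toy parameters

W = (rng.random((n, n)) < p).astype(float)    # sparse random recurrent weights

def k_cap(inputs, k):
    """Indices of the k neurons receiving the largest total input."""
    return np.argsort(inputs)[-k:]

def step(active, W):
    """One round: fire the k-cap of the input, then apply Hebbian plasticity."""
    inputs = W[:, active].sum(axis=1)          # input from currently active neurons
    new_active = k_cap(inputs, k)
    W[np.ix_(new_active, active)] *= 1 + beta  # strengthen edges that fired in order
    return new_active

# Repeatedly presenting a sequence biases the weights so that activating
# the first assembly tends to recall its successors in order.
active = rng.choice(n, size=k, replace=False)
for _ in range(10):
    active = step(active, W)
```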

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

1 code implementation • 10 Jun 2022 Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer

By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.

Time Series, Time Series Analysis
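
The key architectural idea, treating each neuron's activity trace as one token so the transformer is agnostic to the number and ordering of recorded neurons, can be sketched as below; the `NeuronTokenTransformer` name and all dimensions are assumed simplifications, not the released implementation.

```python
# A minimal sketch of a neuron-as-token transformer (assumed architecture).
import torch
import torch.nn as nn

class NeuronTokenTransformer(nn.Module):
    def __init__(self, trace_len=100, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(trace_len, d_model)   # one token per neuron
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.readout = nn.Linear(d_model, d_model)

    def forward(self, x):                 # x: (batch, n_neurons, trace_len)
        tokens = self.embed(x)            # permutation-equivariant over neurons
        h = self.encoder(tokens)
        return self.readout(h.mean(dim=1))  # population-level summary

# The token axis is the neuron axis, so a model pre-trained on one recording
# can be applied to a recording with a different neuron count.
model = NeuronTokenTransformer()
out = model(torch.randn(8, 120, 100))     # 120 neurons here; any count works
```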

Comparing high-dimensional neural recordings by aligning their low-dimensional latent representations

no code implementations • 17 May 2022 Max Dabagia, Konrad P. Kording, Eva L. Dyer

One major challenge we face in modern neuroscience is that of correspondence: e.g., we do not record the exact same neurons at the exact same times.
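
One common way to sidestep the correspondence problem, shown here as a hedged sketch rather than the paper's exact method, is to project each recording to a shared low-dimensional latent space and then solve an orthogonal Procrustes problem to align the two latent trajectories.

```python
# Sketch: PCA per recording, then a rotation aligning the latents.
# The choice of PCA + Procrustes is an assumption for illustration.
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes

def align_latents(X_a, X_b, dim=10):
    """X_a, X_b: (timepoints, neurons) recordings of *different* neurons."""
    Z_a = PCA(n_components=dim).fit_transform(X_a)
    Z_b = PCA(n_components=dim).fit_transform(X_b)
    R, _ = orthogonal_procrustes(Z_b, Z_a)   # rotation mapping Z_b onto Z_a
    return Z_a, Z_b @ R

rng = np.random.default_rng(1)
Z_a, Z_b_aligned = align_latents(rng.standard_normal((500, 80)),
                                 rng.standard_normal((500, 120)))
```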

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

1 code implementation • NeurIPS 2021 Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer

Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
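A rough sketch of how such an objective could be assembled; the augmentations, the `drop_and_swap` helper, and the equal loss weighting are illustrative assumptions, not the released code.

```python
# Sketch: encode two augmented views of the same brain state and pull
# their latent codes together, alongside a reconstruction objective.
import torch
import torch.nn.functional as F

def drop_and_swap(x, drop_p=0.1):
    """Randomly zero some neurons and swap in a permuted copy elsewhere."""
    mask = (torch.rand_like(x) > drop_p).float()
    swapped = x[..., torch.randperm(x.shape[-1])]
    return torch.where(torch.rand_like(x) < 0.05, swapped, x * mask)

def alignment_loss(encoder, decoder, x):
    z1, z2 = encoder(drop_and_swap(x)), encoder(drop_and_swap(x))
    align = 1 - F.cosine_similarity(z1, z2, dim=-1).mean()  # pull views together
    recon = F.mse_loss(decoder(z1), x)                      # generative term
    return recon + align

# Toy usage with linear encoder/decoder stand-ins.
enc, dec = torch.nn.Linear(100, 16), torch.nn.Linear(16, 100)
loss = alignment_loss(enc, dec, torch.randn(32, 100))
```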

Assemblies of neurons learn to classify well-separated distributions

1 code implementation • 7 Oct 2021 Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala

Here we present such a mechanism and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is thereafter reliably recalled in response to new stimuli from the same class.
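
A toy numpy illustration of the claimed mechanism, built from the same k-cap and Hebbian primitives as the sequence sketch above; all parameters are chosen for demonstration rather than taken from the paper. A few projections of class stimuli form an assembly per class, and a new stimulus is classified by assembly overlap.

```python
# Sketch: form one assembly per class, classify by overlap (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n, k, beta = 1000, 50, 0.5
W = (rng.random((n, n)) < 0.05).astype(float)   # stimulus -> area weights

def project(stim, W, rounds=5):
    """Repeatedly fire the k-cap of the stimulus input with plasticity."""
    active = stim
    for _ in range(rounds):
        active = np.argsort(W[:, active].sum(axis=1))[-k:]   # k-cap
        W[np.ix_(active, stim)] *= 1 + beta                  # Hebbian update
    return set(active)

class_stimuli = [rng.choice(n, k, replace=False) for _ in range(2)]
assemblies = [project(s, W) for s in class_stimuli]

def classify(stim):
    """Label of the class assembly the response overlaps most."""
    response = set(np.argsort(W[:, stim].sum(axis=1))[-k:])
    return int(np.argmax([len(response & a) for a in assemblies]))
```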

Learning with Plasticity Rules: Generalization and Robustness

no code implementations • 1 Jan 2021 Rares C. Cristian, Max Dabagia, Christos Papadimitriou, Santosh Vempala

Here we hypothesize that (a) brains employ synaptic plasticity rules that serve as proxies for GD; (b) these rules can themselves be learned by GD on the rule parameters; and (c) this process may be a missing ingredient in the development of ANNs that generalize well and are robust to adversarial perturbations.
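
Hypothesis (b), learning the plasticity rule itself by GD, can be sketched as a two-level loop; the Hebbian-family rule parametrization and the toy task below are assumptions for illustration only.

```python
# Sketch: inner loop applies a parametrized local rule to task weights,
# outer loop runs gradient descent on the rule parameters theta.
import torch

theta = torch.randn(3, requires_grad=True)       # parameters of the rule
opt = torch.optim.Adam([theta], lr=1e-2)

def plasticity_rule(w, pre, post, theta):
    """Generic Hebbian-family update: dw = a*pre*post + b*pre + c*post."""
    return w + theta[0] * pre[:, None] * post[None, :] \
             + theta[1] * pre[:, None] + theta[2] * post[None, :]

for step in range(100):
    w = torch.zeros(10, 1)                        # fresh task weights
    x = torch.randn(64, 10)
    y = (x.sum(dim=1, keepdim=True) > 0).float()  # toy target
    for _ in range(5):                            # inner loop: apply the rule
        post = torch.sigmoid(x @ w)
        w = plasticity_rule(w, x.mean(0), post.mean(0), theta)
    loss = torch.nn.functional.binary_cross_entropy(torch.sigmoid(x @ w), y)
    opt.zero_grad(); loss.backward(); opt.step()  # outer loop: GD on theta
```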

Hierarchical Optimal Transport for Multimodal Distribution Alignment

2 code implementations • NeurIPS 2019 John Lee, Max Dabagia, Eva L. Dyer, Christopher J. Rozell

Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
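A coarse sketch of a hierarchical alignment strategy in this spirit, using the POT library (`ot.dist`, `ot.emd`); the paper's actual formulation differs in how it couples the two levels. Here the top level transports cluster centroids; a full scheme would then transport points within matched clusters.

```python
# Sketch: cluster each dataset, then solve a small OT problem between
# cluster centroids weighted by cluster mass (illustrative only).
import numpy as np
import ot                                     # pip install pot
from sklearn.cluster import KMeans

def hierarchical_ot(X, Y, n_clusters=5):
    km_x = KMeans(n_clusters, n_init=10).fit(X)
    km_y = KMeans(n_clusters, n_init=10).fit(Y)
    C = ot.dist(km_x.cluster_centers_, km_y.cluster_centers_)
    a = np.bincount(km_x.labels_, minlength=n_clusters) / len(X)
    b = np.bincount(km_y.labels_, minlength=n_clusters) / len(Y)
    return ot.emd(a, b, C)                    # cluster-to-cluster coupling

rng = np.random.default_rng(3)
plan = hierarchical_ot(rng.standard_normal((200, 10)),
                       rng.standard_normal((300, 10)))
```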
