1 code implementation • 29 May 2023 • Mustafa Burak Gurbuz, Jean Michael Moorman, Constantine Dovrolis
Inspired by how the brain consolidates memories, a powerful strategy in CL is replay, which trains the DNN on a mixture of new data and examples from all previously seen classes.
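To make the replay idea concrete, here is a minimal sketch of a replay training step for class-incremental learning, assuming a PyTorch classifier. The `ReplayBuffer` class, `replay_step` function, and all parameter names are hypothetical illustrations, not the paper's released implementation.

```python
# Minimal sketch of experience replay: mix new data with stored old examples.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-style memory holding examples from previously seen classes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []          # list of (x, y) example tensors
        self.num_seen = 0

    def add(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps an approximately uniform sample of the stream.
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, optimizer, new_x, new_y, buffer, replay_batch=32):
    """One update on a mixture of the new batch and replayed old examples."""
    xs, ys = new_x, new_y
    if len(buffer.data) > 0:
        old_x, old_y = buffer.sample(replay_batch)
        xs = torch.cat([new_x, old_x])
        ys = torch.cat([new_y, old_y])
    loss = F.cross_entropy(model(xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Store the new examples so later tasks can replay them.
    for x, y in zip(new_x, new_y):
        buffer.add(x, y)
    return loss.item()
```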
no code implementations • 16 Jul 2022 • Qihang Yao, Manoj Chandrasekaran, Constantine Dovrolis
The question we focus on is: given such activation cascades for two groups, say A and B (e.g., controls versus patients with a mental disorder), what is the smallest set of brain connectivity (graph edge weight) changes that is sufficient to explain the observed differences in the activation cascades between the two groups?
1 code implementation • 18 Jun 2022 • Mustafa Burak Gurbuz, Constantine Dovrolis
The goal of continual learning (CL) is to learn different tasks over time.
no code implementations • 1 Jan 2021 • Shreyas Malakarjun Patil, Constantine Dovrolis
Then, we show that Paths with Higher Edge-Weights (PHEW) at initialization have higher loss-gradient magnitudes, resulting in more efficient training.
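As a rough illustration of the quantity involved, one common way to score an input-to-output path is by the product of the absolute edge weights along it at initialization. The sketch below compares weight-biased random walks against uniform ones in a small fully connected network; the function names and the walk construction here are illustrative, not the paper's code.

```python
# Illustrative sketch: score input-to-output paths in an MLP at initialization
# by the product of absolute edge weights along the path.
import numpy as np

def init_layers(sizes, rng):
    """Kaiming-style random initialization of dense weight matrices."""
    return [rng.normal(0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
            for fan_in, fan_out in zip(sizes[:-1], sizes[1:])]

def path_weight(layers, path):
    """Product of |w| along a path; path[k] is the unit index in layer k."""
    w = 1.0
    for W, (i, j) in zip(layers, zip(path[:-1], path[1:])):
        w *= abs(W[j, i])
    return w

def sample_path_biased(layers, start_unit, rng):
    """Random walk from an input unit, biased toward higher-|w| edges."""
    path = [start_unit]
    for W in layers:
        probs = np.abs(W[:, path[-1]])
        probs /= probs.sum()
        path.append(rng.choice(len(probs), p=probs))
    return path

rng = np.random.default_rng(0)
layers = init_layers([784, 256, 256, 10], rng)
biased = [path_weight(layers, sample_path_biased(layers, 0, rng)) for _ in range(1000)]
uniform = [path_weight(layers, [0] + [int(rng.integers(W.shape[0])) for W in layers])
           for _ in range(1000)]
print(np.mean(biased), np.mean(uniform))  # biased walks tend to find heavier paths
```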
1 code implementation • 22 Oct 2020 • Shreyas Malakarjun Patil, Constantine Dovrolis
Our work is based on a recently proposed decomposition of the Neural Tangent Kernel (NTK) that decouples the dynamics of the training process into a data-dependent component and an architecture-dependent kernel, the latter referred to as the Path Kernel.
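For context, a hedged sketch of the kind of decomposition being referenced (the notation below is illustrative and not necessarily the paper's): the output of a ReLU network can be written as a sum over input-to-output paths, with each path contributing its input value, an on/off activation indicator, and the product of its weights. The NTK then factors into a data-dependent part (inputs and path activations) and an architecture-dependent Path Kernel that depends only on the weights.

```latex
% Illustrative notation: f(x) as a sum over input-to-output paths p,
% with v_p the product of weights along p and a_p(x) the path's activity.
f(x) = \sum_{p} x_{p_0}\, a_p(x)\, v_p, \qquad v_p = \prod_{e \in p} w_e
% NTK = data-dependent factors times the architecture-dependent Path Kernel:
\Theta(x, x') = \big\langle \nabla_\theta f(x),\, \nabla_\theta f(x') \big\rangle
  = \sum_{p, p'} x_{p_0}\, x'_{p'_0}\, a_p(x)\, a_{p'}(x')\, \Pi_{p p'},
\qquad
\Pi_{p p'} = \sum_{i} \frac{\partial v_p}{\partial \theta_i}\,
             \frac{\partial v_{p'}}{\partial \theta_i}
```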
no code implementations • 6 Jun 2019 • Payam Siyari, Bistra Dilkina, Constantine Dovrolis
Contrary to Evo-Lexis, in iGEM the amount of reuse decreases over the timeline of the dataset.
1 code implementation • 3 Apr 2019 • James Smith, Cameron Taylor, Seth Baer, Constantine Dovrolis
We first pose the Unsupervised Progressive Learning (UPL) problem: an online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, learning a growing number of features that persist over time even though the data is not stored or replayed.
no code implementations • ICLR Workshop LLD 2019 • James Smith, Seth Baer, Zsolt Kira, Constantine Dovrolis
We first pose the Unsupervised Continual Learning (UCL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time.
no code implementations • 22 Oct 2018 • Constantine Dovrolis
We propose that the Continual Learning desiderata can be achieved through a neuro-inspired architecture, grounded on Mountcastle's cortical column hypothesis.
1 code implementation • 13 May 2018 • Payam Siyari, Bistra Dilkina, Constantine Dovrolis
It is well known that many complex systems, both in technology and nature, exhibit hierarchical modularity: smaller modules, each of them providing a certain function, are used within larger modules that perform more complex functions.
1 code implementation • 23 Feb 2016 • Ilias Fountalis, Annalisa Bracco, Bistra Dilkina, Constantine Dovrolis, Shella Keilholz
The proposed edge inference method examines the statistical significance of each lagged cross-correlation between two domains, infers a range of lag values for each edge, and assigns a weight to each edge based on the covariance of the two domains.
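The following is a minimal sketch of that kind of lag-based edge inference, assuming two domain-average time series as NumPy arrays. The significance cutoff (a simple 1/sqrt(T) threshold on the correlation) and all function names are simplified illustrations, not the paper's exact procedure.

```python
# Minimal sketch of lag-based edge inference between two "domain" time series.
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    return np.corrcoef(a, b)[0, 1]

def infer_edge(x, y, max_lag=10, z=3.0):
    """Return (significant lag range, edge weight), or None if no edge."""
    T = len(x)
    threshold = z / np.sqrt(T)          # rough cutoff: r ~ N(0, 1/T) under no correlation
    sig_lags = [lag for lag in range(-max_lag, max_lag + 1)
                if abs(lagged_corr(x, y, lag)) > threshold]
    if not sig_lags:
        return None                     # no statistically significant edge
    weight = np.cov(x, y)[0, 1]         # edge weight from the covariance of the two domains
    return (min(sig_lags), max(sig_lags)), weight

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(500)   # y lags x by roughly 3 steps
print(infer_edge(x, y))
```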
no code implementations • 17 Feb 2016 • Payam Siyari, Bistra Dilkina, Constantine Dovrolis
We also consider the problem of identifying the set of intermediate nodes (substrings) that collectively form the "core" of a Lexis-DAG, which is important in the analysis of Lexis-DAGs.