no code implementations • 2 Oct 2023 • Giulia Lanzillotta, Sidak Pal Singh, Benjamin F. Grewe, Thomas Hofmann
Deep learning has proved to be a successful paradigm for solving many challenges in machine learning.
no code implementations • 11 Jul 2023 • Peng Yan, Ahmed Abdulkadir, Paul-Philipp Luley, Matthias Rosenthal, Gerrit A. Schatte, Benjamin F. Grewe, Thilo Stadelmann
However, due to the dynamic nature of industrial processes and environments, it is impractical to acquire large-scale labeled data anew for standard deep learning training for every slightly different case.
no code implementations • 9 May 2023 • Yunke Ao, Hooman Esfandiari, Fabio Carrillo, Yarden As, Mazda Farshad, Benjamin F. Grewe, Andreas Krause, Philipp Fuernstahl
Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of anatomy.
no code implementations • 8 Dec 2022 • Francesco Lässig, Pau Vilimelis Aceituno, Martino Sorbaro, Benjamin F. Grewe
We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation.
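The split-MNIST benchmark mentioned above partitions the ten digit classes into a sequence of binary tasks that are learned one after another. A minimal sketch of this task-splitting protocol (the function name and synthetic labels are illustrative, not from the paper):

```python
import numpy as np

def split_tasks(labels, n_tasks=5):
    """Partition class labels into sequential binary tasks, split-MNIST style.

    Task t contains the class pair {2t, 2t+1}; one boolean mask per task
    selects that task's examples from the full dataset.
    """
    masks = []
    for t in range(n_tasks):
        task_classes = [2 * t, 2 * t + 1]
        masks.append(np.isin(labels, task_classes))
    return masks

# Synthetic stand-in for MNIST labels.
labels = np.array([0, 1, 2, 3, 9, 5])
masks = split_tasks(labels)
# masks[0] selects digits 0/1, masks[4] selects digits 8/9, etc.
```

A continual-learning method is then trained on the tasks in order and evaluated on how well earlier pairs are retained.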
1 code implementation • 17 Oct 2022 • Elvis Nava, Seijin Kobayashi, Yifei Yin, Robert K. Katzschmann, Benjamin F. Grewe
Our methods repurpose the popular generative image synthesis techniques of natural language guidance and diffusion models to generate neural network weights adapted to specific tasks.
1 code implementation • 25 Jul 2022 • Hamza Keurti, Hsiao-Ru Pan, Michel Besserve, Benjamin F. Grewe, Bernhard Schölkopf
How agents can learn internal models that veridically represent their interactions with the real world is a largely open question.
no code implementations • 22 Apr 2022 • Christoph von der Malsburg, Thilo Stadelmann, Benjamin F. Grewe
Introduction: In contrast to current AI technology, natural intelligence -- the kind of autonomous intelligence realized in the brains of animals and humans, which allows them to attain goals in their natural environment defined by a repertoire of innate behavioral schemata -- is far superior in terms of learning speed, generalization capability, autonomy and creativity.
2 code implementations • 14 Apr 2022 • Alexander Meulemans, Matilde Tristany Farinha, Maria R. Cervera, João Sacramento, Benjamin F. Grewe
Building upon deep feedback control (DFC), a recently proposed credit assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
no code implementations • 30 Mar 2022 • Elvis Nava, John Z. Zhang, Mike Y. Michelis, Tao Du, Pingchuan Ma, Benjamin F. Grewe, Wojciech Matusik, Robert K. Katzschmann
For the deformable solid simulation of the swimmer's body, we use state-of-the-art techniques from the field of computer graphics to speed up the finite-element method (FEM).
1 code implementation • 23 Nov 2021 • Maria R. Cervera, Rafael Dätwyler, Francesco D'Angelo, Hamza Keurti, Benjamin F. Grewe, Christian Henning
Although neural networks are powerful function approximators, the underlying modelling assumptions ultimately define the likelihood and thus the hypothesis class they are parameterizing.
1 code implementation • 26 Jul 2021 • Christian Henning, Francesco D'Angelo, Benjamin F. Grewe
The need to avoid confident predictions on unfamiliar data has sparked interest in out-of-distribution (OOD) detection.
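A widely used baseline for the OOD detection problem described here (not the paper's own method) scores inputs by their maximum softmax probability, on the intuition that unfamiliar inputs tend to yield near-uniform class probabilities. A minimal sketch with hypothetical logits:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Max-softmax probability: higher means 'more in-distribution'."""
    return softmax(logits).max(axis=-1)

in_dist = np.array([[8.0, 0.1, 0.2]])  # one confident class
ood = np.array([[1.0, 1.1, 0.9]])      # near-uniform logits
# msp_score(in_dist) exceeds msp_score(ood); thresholding the score
# flags low-confidence inputs as out-of-distribution.
```

This baseline is exactly the kind of confidence-based heuristic whose reliability work like the above scrutinizes.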
3 code implementations • NeurIPS 2021 • Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordóñez, Pau Vilimelis Aceituno, João Sacramento, Benjamin F. Grewe
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output.
3 code implementations • NeurIPS 2021 • Christian Henning, Maria R. Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento
We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay.
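A task-conditioned hypernetwork, as used in this framework, is a network that maps a task embedding to the weights of a target network. A minimal linear sketch (all dimensions and names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: task embedding -> flat weight vector of the target net.
emb_dim, target_in, target_out = 8, 4, 2
n_weights = target_in * target_out

# A linear "hypernetwork": one matrix mapping embeddings to target weights.
H = rng.normal(size=(n_weights, emb_dim)) * 0.1

def generate_weights(task_embedding):
    """Produce the target network's weight matrix for a given task."""
    return (H @ task_embedding).reshape(target_in, target_out)

def target_forward(x, task_embedding):
    """Run the target network with task-conditioned weights."""
    return x @ generate_weights(task_embedding)

x = rng.normal(size=(3, target_in))
y = target_forward(x, rng.normal(size=emb_dim))
# y has shape (3, 2): one output per input row.
```

In the probabilistic setting above, the hypernetwork output would parameterize a task-specific posterior rather than a single deterministic weight vector.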
2 code implementations • ICLR 2021 • Johannes von Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento
The predominant and largely successful approach to training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD).
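For reference, the vanilla SGD update the sentence above alludes to is simply a step against the gradient. A minimal sketch on a toy quadratic loss:

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """One plain SGD update: move opposite the gradient."""
    return w - lr * grad

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sgd_step(w, grad=w)
# w shrinks geometrically toward the minimizer at the origin.
```

Practical variants (momentum, Adam, etc.) modify how the gradient is accumulated and scaled, but share this basic update structure.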
Ranked #70 on Image Classification on CIFAR-100 (using extra training data)
2 code implementations • NeurIPS 2020 • Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe
Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.
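The core idea of target propagation is to send *targets* (desired activations) backward through learned approximate inverses of each layer, instead of backpropagating gradients through transposed weights. A minimal sketch with invertible linear layers, where the inverse is exact (a simplification; TP normally learns the inverses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward pass: two invertible linear layers, h = W1 x, y = W2 h.
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 3))

x = rng.normal(size=3)
h = W1 @ x
y = W2 @ h

# Output target: a small step that reduces the loss (here, distance to zero).
y_true = np.zeros(3)
y_target = y - 0.1 * (y - y_true)

# Propagate the target backward through the layer inverse, not the transpose.
h_target = np.linalg.inv(W2) @ y_target

# Each layer then makes a local regression update, e.g. adjusting W1
# to reduce ||h_target - W1 x||^2 -- no global gradient is backpropagated.
```

With exact inverses, the propagated target is consistent by construction: pushing `h_target` forward through `W2` recovers `y_target`.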
3 code implementations • ICLR 2021 • Benjamin Ehret, Christian Henning, Maria R. Cervera, Alexander Meulemans, Johannes von Oswald, Benjamin F. Grewe
Here, we provide the first comprehensive evaluation of established CL methods on a variety of sequential data benchmarks.
7 code implementations • ICLR 2020 • Johannes von Oswald, Christian Henning, Benjamin F. Grewe, João Sacramento
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks.
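Catastrophic forgetting can be demonstrated in one line of algebra: sequential training on tasks with conflicting objectives overwrites earlier solutions. A minimal single-parameter sketch (purely illustrative, not the paper's setup):

```python
# One shared scalar weight trained on two tasks with conflicting targets.
def train(w, x, y, lr=0.5, steps=50):
    for _ in range(steps):
        w = w - lr * x * (w * x - y)  # gradient step on 0.5 * (w*x - y)^2
    return w

w = 0.0
w = train(w, x=1.0, y=1.0)                 # task A: drives w toward 1
loss_A_before = 0.5 * (w * 1.0 - 1.0) ** 2
w = train(w, x=1.0, y=-1.0)                # task B: drives w toward -1
loss_A_after = 0.5 * (w * 1.0 - 1.0) ** 2
# Task A's loss jumps after training on task B: the shared weight was
# overwritten. This is the failure mode continual-learning methods address.
```

Approaches like the hypernetwork-based method above mitigate this by protecting or conditioning the parameters that earlier tasks depend on.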
Ranked #4 on Continual Learning on F-CelebA (10 tasks)