1 code implementation • 9 Feb 2024 • Florian Peter Busch, Roshni Kamath, Rupert Mitchell, Wolfgang Stammer, Kristian Kersting, Martin Mundt
A dataset is confounded if it is most easily solved via a spurious correlation which fails to generalize to new data.
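A minimal sketch of such a confounder (hypothetical feature names, not data from the paper): in training, the label is perfectly predictable from a spurious "color" feature, while the true signal is "shape"; at test time the correlation is broken, so a model that latched onto the shortcut collapses to chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Training set: label is spuriously correlated with "color" (feature 0),
# while the true signal lives in "shape" (feature 1).
shape = rng.integers(0, 2, n)          # ground-truth signal
color_train = shape                    # spurious: color == label in training
X_train = np.stack([color_train, shape], axis=1)
y_train = shape

# Test set: the correlation is broken -- color is now random noise.
color_test = rng.integers(0, 2, n)
X_test = np.stack([color_test, shape], axis=1)
y_test = shape

# A "model" that latched onto the shortcut predicts from color alone.
shortcut_pred_train = X_train[:, 0]
shortcut_pred_test = X_test[:, 0]

print((shortcut_pred_train == y_train).mean())  # 1.0 on confounded training data
print((shortcut_pred_test == y_test).mean())    # near chance on test data
```

The shortcut is "most easily solved" in the sense that color alone is a perfect training-set predictor, yet it fails to generalize.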
1 code implementation • 7 Feb 2024 • Roshni Kamath, Rupert Mitchell, Subarnaduti Paul, Kristian Kersting, Martin Mundt
The quest to improve scalar performance numbers on predetermined benchmarks seems to be deeply ingrained in deep learning.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to allow models to learn continuously from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 18 Sep 2023 • Achref Jaziri, Martin Mundt, Andres Fernandez Rodriguez, Visvanathan Ramesh
Identification of cracks is essential to assess the structural integrity of concrete infrastructure.
1 code implementation • 10 Jul 2023 • Rupert Mitchell, Robin Menzenbach, Kristian Kersting, Martin Mundt
The results of training a neural network are heavily dependent on the chosen architecture, and even a modification of only its size, however small, typically involves restarting the training process.
1 code implementation • 6 Jun 2023 • Subarnaduti Paul, Lars-Joel Frey, Roshni Kamath, Kristian Kersting, Martin Mundt
In parts, federated learning lifts this assumption, as it sets out to solve the real-world challenge of collaboratively learning a shared model from data distributed across clients.
1 code implementation • 3 Jun 2023 • Steven Braun, Martin Mundt, Kristian Kersting
We posit that original data access may however not be required.
no code implementations • 29 Mar 2023 • Organizers Of QueerInAI, Anaelia Ovalle, Arjun Subramonian, Ashwin Singh, Claas Voelcker, Danica J. Sutherland, Davide Locatelli, Eva Breznik, Filip Klubička, Hang Yuan, Hetvi J, huan zhang, Jaidev Shriram, Kruno Lehman, Luca Soldaini, Maarten Sap, Marc Peter Deisenroth, Maria Leonor Pacheco, Maria Ryskina, Martin Mundt, Milind Agarwal, Nyx McLean, Pan Xu, A Pranav, Raj Korpan, Ruchira Ray, Sarah Mathew, Sarthak Arora, ST John, Tanvi Anand, Vishakha Agrawal, William Agnew, Yanan Long, Zijie J. Wang, Zeerak Talat, Avijit Ghosh, Nathaniel Dennler, Michael Noseworthy, Sharvani Jha, Emi Baylor, Aditya Joshi, Natalia Y. Bilenko, Andrew McNamara, Raphael Gontijo-Lopes, Alex Markham, Evyn Dǒng, Jackie Kay, Manu Saraswat, Nikhil Vytla, Luke Stark
We present Queer in AI as a case study for community-led participatory design in AI.
2 code implementations • 13 Feb 2023 • Fabrizio Ventola, Steven Braun, Zhongjie Yu, Martin Mundt, Kristian Kersting
In contrast to neural networks, they are often assumed to be well-calibrated and robust to out-of-distribution (OOD) data.
no code implementations • 24 Jun 2022 • Jonas Seng, Pooja Prasad, Martin Mundt, Devendra Singh Dhami, Kristian Kersting
Deep neural architectures have a profound impact on the achieved performance in many of today's AI tasks, yet their design still heavily relies on human prior knowledge and experience.
1 code implementation • ICLR 2022 • Martin Mundt, Steven Lang, Quentin Delfosse, Kristian Kersting
What is the state of the art in continual machine learning?
2 code implementations • 4 Jun 2021 • Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh
Several families of continual learning techniques have been proposed to alleviate catastrophic interference in deep neural network training on non-stationary data.
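One of the simplest such families is rehearsal: keeping a small memory of past examples and mixing them into later training. A minimal reservoir-sampling buffer (an illustrative sketch, not the specific method of any listed paper) might look like:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory of past examples filled via reservoir sampling,
    so every example seen so far has equal probability of being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a rehearsal mini-batch to interleave with new data."""
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=100)
for x in range(10_000):   # a non-stationary stream of 10k "examples"
    buf.add(x)
print(len(buf.data))      # 100: the buffer never exceeds its capacity
```

Replaying `buf.sample(k)` alongside each new mini-batch is what counteracts catastrophic interference in rehearsal-based approaches.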
1 code implementation • 19 May 2021 • Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh
Although a plethora of architectural variants for deep classification has been introduced over time, recent works have found empirical evidence of similarities in their training processes.
1 code implementation • 14 Apr 2021 • Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh
In this paper we analyze the classification performance of neural network structures without parametric inference.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
4 code implementations • 18 Feb 2021 • Quentin Delfosse, Patrick Schramowski, Martin Mundt, Alejandro Molina, Kristian Kersting
The latest insights from biology show that intelligence not only emerges from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously anticipated.
Ranked #3 on Atari Games on Atari 2600 Skiing (using extra training data)
no code implementations • 3 Sep 2020 • Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh
In this work we critically survey the literature and argue that notable lessons from open set recognition (identifying unknown examples outside of the observed set) and the adjacent field of active learning (querying data to maximize the expected performance gain) are frequently overlooked in the deep learning era.
no code implementations • 26 Aug 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh
We present an analysis of predictive-uncertainty-based out-of-distribution detection for different approaches to estimating various models' epistemic uncertainty, and contrast it with extreme-value-theory-based open set recognition.
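As a toy illustration of scoring by predictive uncertainty (assuming softmax outputs; this is not the specific estimators compared in the paper): inputs whose predictive distribution has high entropy are flagged as out-of-distribution.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a softmax output in nats; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=-1)

# A confident in-distribution prediction vs. a near-uniform OOD one.
in_dist = np.array([[0.97, 0.01, 0.01, 0.01]])
ood = np.array([[0.25, 0.25, 0.25, 0.25]])

threshold = 1.0  # nats; chosen here purely for illustration
for name, p in [("in-dist", in_dist), ("ood", ood)]:
    h = predictive_entropy(p)[0]
    print(name, round(h, 3), "flagged" if h > threshold else "accepted")
```

The uniform distribution over 4 classes attains the maximum entropy of ln 4 ≈ 1.386 nats and is flagged, while the confident prediction falls well below the threshold.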
3 code implementations • 28 May 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh
Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge.
2 code implementations • CVPR 2019 • Martin Mundt, Sagnik Majumder, Sreenivas Murali, Panagiotis Panetsos, Visvanathan Ramesh
Recognition of defects in concrete infrastructure, especially in bridges, is a crucial first step in the assessment of structural integrity, yet it is costly and time-consuming.
1 code implementation • 14 Dec 2018 • Martin Mundt, Sagnik Majumder, Tobias Weis, Visvanathan Ramesh
We characterize convolutional neural networks with respect to the relative amount of features per layer.
no code implementations • ICLR 2018 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features.
no code implementations • 18 May 2017 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features.