no code implementations • 7 Mar 2024 • Jan Schuchardt, Mihail Stoian, Arthur Kosmala, Stephan Günnemann
Differential privacy (DP) has various desirable properties, such as robustness to post-processing, group privacy, and amplification by subsampling, which can be derived independently of each other.
no code implementations • NeurIPS 2023 • Jan Schuchardt, Yan Scholten, Stephan Günnemann
For the first time, we propose a sound notion of adversarial robustness that accounts for task equivariance.
no code implementations • NeurIPS 2023 • Yan Scholten, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs: it guarantees robustness of the majority vote taken over predictions on randomly noised copies of the input.
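The majority-vote step described above can be sketched as follows. This is a minimal Monte Carlo version with Gaussian noise; the classifier, noise scale, and sample count are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, n_classes=10, rng=None):
    """Majority vote over predictions on noisy copies of the input.

    `classifier` is any function mapping an input array to a class index;
    all names and defaults here are illustrative, not the paper's setup.
    """
    rng = np.random.default_rng(rng)
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)  # Gaussian smoothing noise
        votes[classifier(noisy)] += 1
    return int(np.argmax(votes))  # class with the most votes
```

In the full framework, the margin between the top two vote counts is what yields a certified robustness radius; the sketch above only computes the smoothed prediction itself.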
no code implementations • 6 Feb 2023 • Jan Schuchardt, Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann
In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e., a single graph, image, or document, respectively.
1 code implementation • 5 Jan 2023 • Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann
To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.
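As a toy abstraction of the interception idea above, one can estimate by Monte Carlo simulation how likely information from a given node is to reach a target within a fixed number of message-passing rounds when each message is independently dropped. The graph, drop model, and function below are illustrative assumptions, not the paper's certificate derivation.

```python
import numpy as np

def reach_probability(adj, source, target, n_layers, p_drop, n_trials=2000, rng=None):
    """Monte Carlo estimate of the chance that information from `source`
    reaches `target` within `n_layers` rounds of message passing, when each
    transmitted message is independently intercepted with probability `p_drop`.
    adj[i, j] > 0 means there is an edge from node i to node j.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    hits = 0
    for _ in range(n_trials):
        reached = np.zeros(n, dtype=bool)
        reached[source] = True
        for _ in range(n_layers):
            mask = (rng.random(adj.shape) >= p_drop) & (adj > 0)  # surviving messages
            # a node becomes reached if any already-reached neighbor's message survives
            reached = reached | (mask.T @ reached.astype(float) > 0)
        hits += bool(reached[target])
    return hits / n_trials
```

The actual certificates analyze such reachability probabilities in closed form rather than by sampling; the simulation only conveys the underlying intuition.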
no code implementations • 2 Jan 2023 • Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, Stephan Günnemann
We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.
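The disjoint-subgraph idea above can be sketched as a partitioning step: split the node set into parts and keep only the edges inside each part. The random partitioning scheme below is a simplifying assumption for illustration, not necessarily the scheme used in the paper.

```python
import numpy as np

def disjoint_subgraphs(adj, n_parts, rng=None):
    """Split a graph into disjoint node partitions and return the induced
    subgraph of each part, dropping all edges between parts.
    Plain random node assignment; purely illustrative.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    labels = rng.integers(0, n_parts, size=n)  # random part assignment per node
    parts = []
    for k in range(n_parts):
        idx = np.flatnonzero(labels == k)
        parts.append((idx, adj[np.ix_(idx, idx)]))  # induced subgraph of part k
    return parts
```

Each returned `(node_indices, sub_adjacency)` pair could then be used as an independent training graph, so that no information flows between parts during training.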
no code implementations • 25 Nov 2022 • Jan Schuchardt, Stephan Günnemann
Building models that comply with the invariances inherent to different domains, such as invariance under translation or rotation, is a key aspect of applying machine learning to real-world problems like molecular property prediction, medical imaging, protein folding, or LiDAR classification.
no code implementations • 28 Oct 2022 • Jan Schuchardt, Tom Wollschläger, Aleksandar Bojchevski, Stephan Günnemann
We further show that this approach is beneficial for the larger class of softly local models, where each output depends on the entire input but assigns different levels of importance to different input regions (e.g., based on their proximity in the image).
no code implementations • ICLR 2022 • Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
Specifically, most datasets only capture a simpler subproblem and likely suffer from spurious features.
no code implementations • ICLR 2021 • Jan Schuchardt, Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann
In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e., a single graph, image, or document, respectively.
1 code implementation • 8 May 2019 • Jan Schuchardt, Vladimir Golkov, Daniel Cremers
Here we show that learning to evolve, i.e., learning to mutate and recombine better than at random, improves the outcome of evolution both in fitness gained per generation and in the fitness ultimately attainable.
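The "learning to evolve" setting above can be illustrated with a generic evolutionary loop in which the mutation operator is pluggable, so a learned operator can replace uniform-random mutation. Everything below (the loop structure, function names, selection scheme) is a toy sketch, not the paper's method.

```python
import numpy as np

def evolve(fitness, pop, mutate, n_gens=50, rng=None):
    """Simple evolutionary loop: mutate each parent, then keep the fittest
    half of parents + offspring. `mutate` is pluggable, so a learned
    operator can be swapped in for a random one. Toy illustration only.
    """
    rng = np.random.default_rng(rng)
    for _ in range(n_gens):
        offspring = np.array([mutate(ind, rng) for ind in pop])
        combined = np.vstack([pop, offspring])
        scores = np.array([fitness(ind) for ind in combined])
        pop = combined[np.argsort(scores)[-len(pop):]]  # survivor selection
    return pop
```

In this framing, "learning to evolve" means training `mutate` (and a corresponding recombination operator) so that offspring improve faster than under random variation.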