1 code implementation • 24 Jan 2024 • Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer
Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.
no code implementations • 13 Nov 2023 • Ali Mohaddes, Johannes Lederer
The notion of group invariance helps neural networks recognize patterns and features under geometric transformations.
no code implementations • 22 Jun 2023 • Mike Laszkiewicz, Denis Lukovnikov, Johannes Lederer, Asja Fischer
In this work, we propose a set-membership inference attack for generative models using deep image watermarking techniques.
no code implementations • 26 May 2023 • Mike Laszkiewicz, Jonas Ricker, Johannes Lederer, Asja Fischer
Recent breakthroughs in generative modeling have sparked interest in practical single-model attribution.
no code implementations • 3 Mar 2023 • Somnath Chakraborty, Johannes Lederer, Rainer von Sachs
We prove that the estimated process is stable, and we establish rates for the forecasting error that can outmatch the known rate in our setting.
1 code implementation • 22 Feb 2023 • Ayla Jungbluth, Johannes Lederer
Many methods for time-series forecasting are known in classical statistics, such as autoregression, moving averages, and exponential smoothing.
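As a concrete illustration of one of these classical baselines, here is a minimal sketch of simple exponential smoothing (the smoothing weight alpha = 0.3 is an illustrative choice, not a value from the paper):

```python
# Simple exponential smoothing, one of the classical forecasting baselines
# mentioned above. Minimal sketch; alpha is a hypothetical smoothing weight.
def exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead forecasts for a list of observations."""
    level = series[0]            # initialize with the first observation
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level   # update the smoothed level
        forecasts.append(level)
    return forecasts

print(exponential_smoothing([3.0, 4.0, 5.0, 4.5]))
```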
no code implementations • 11 Dec 2022 • Johannes Lederer
Neural networks are becoming increasingly popular in applications, but our mathematical understanding of their potential and limitations remains incomplete.
1 code implementation • 21 Jun 2022 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer
Learning the tail behavior of a distribution is a notoriously difficult problem.
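For context, a classical point of comparison is the Hill estimator for the tail index of a heavy-tailed distribution; a minimal sketch (a textbook baseline, not the method proposed in the paper):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index from the k largest observations."""
    x = np.sort(x)[::-1]                     # sort in descending order
    log_spacings = np.log(x[:k]) - np.log(x[k])
    return 1.0 / np.mean(log_spacings)

rng = np.random.default_rng(0)
samples = rng.pareto(2.0, size=10_000) + 1.0   # Pareto with tail index 2
print(hill_estimator(samples, k=500))          # estimate should be near 2
```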
no code implementations • 9 May 2022 • Mahsa Taheri, Fang Xie, Johannes Lederer
Since statistical guarantees for neural networks are usually restricted to global optima of intricate objective functions, it is not clear whether these theories really explain the performance of the actual outputs of neural-network pipelines.
no code implementations • 2 Feb 2022 • Rebecca Marion, Johannes Lederer, Bernadette Govaerts, Rainer von Sachs
Sparse linear prediction methods suffer from decreased prediction accuracy when the predictor variables have cluster structure (e.g., there are highly correlated groups of variables).
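A tiny simulation makes the issue concrete: with two nearly identical predictors, the lasso tends to put its weight on one of them rather than spreading it over the group (illustrative sketch; the exact split depends on the data and the solver):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)        # near-duplicate of x1
X = np.column_stack([x1, x2, rng.normal(size=(200, 3))])
y = x1 + x2 + rng.normal(scale=0.5, size=200)

# The two correlated predictors share a true effect, but the fitted
# coefficients are typically far from the (1, 1) ground truth.
print(Lasso(alpha=0.1).fit(X, y).coef_[:2])
```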
2 code implementations • 13 Jan 2022 • Yannick Düren, Johannes Lederer, Li-Xuan Qin
To address this problem, we developed "DANA", an approach for assessing the performance of normalization methods for microRNA sequencing data based on biology-motivated and data-driven metrics.
1 code implementation • ICML Workshop INNF 2021 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer
Normalizing flows, which learn a distribution by transforming the data to samples from a Gaussian base distribution, have proven to be powerful density approximators.
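The density of a flow follows from the change-of-variables formula: if z = f(x) maps the data to a standard Gaussian, then log p(x) = log N(f(x); 0, 1) + log |det Df(x)|. A minimal one-dimensional sketch with a fixed affine map (real flows learn f):

```python
import numpy as np

def affine_flow_logpdf(x, scale=2.0, shift=1.0):
    """Log-density under a fixed 1-D affine flow z = scale * x + shift."""
    z = scale * x + shift                          # map data to the base space
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))   # standard Gaussian log-density
    log_det = np.log(abs(scale))                   # log |dz/dx|
    return log_base + log_det

print(affine_flow_logpdf(np.array([0.0, 0.5])))
```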
no code implementations • 4 Jun 2021 • Leni Ven, Johannes Lederer
Deep learning requires several design choices, such as the nodes' activation functions and the widths, types, and arrangements of the layers.
no code implementations • 28 May 2021 • Shih-Ting Huang, Johannes Lederer
The arguably much more common case of corruption that reflects the limited quality of data has been studied much less.
no code implementations • 28 May 2021 • Shih-Ting Huang, Johannes Lederer
In this paper, we introduce a framework for targeted deep learning, and we devise and test an approach for adapting standard pipelines to the requirements of targeted deep learning.
no code implementations • 25 Jan 2021 • Johannes Lederer
Activation functions shape the outputs of artificial neurons and, therefore, are integral parts of neural networks in general and deep learning in particular.
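Three standard examples make the role of the activation concrete (an illustrative sketch, not an exhaustive account of the paper):

```python
import numpy as np

# Three standard activation functions; each shapes a neuron's output
# differently (piecewise-linear, saturating on [0, 1], saturating on [-1, 1]).
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-2.0, 2.0, 5)
print(relu(x), sigmoid(x), np.tanh(x), sep="\n")
```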
no code implementations • 1 Jan 2021 • Johannes Lederer
Neural networks are becoming increasingly popular in applications, but a comprehensive mathematical understanding of their potential and limitations is still missing.
no code implementations • 2 Oct 2020 • Johannes Lederer
We analyze the optimization landscapes of deep learning with wide networks.
no code implementations • 28 Sep 2020 • Johannes Lederer
Empirical studies suggest that wide neural networks are comparably easy to optimize, but mathematical support for this observation is scarce.
no code implementations • 14 Sep 2020 • Johannes Lederer
It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data.
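One classical example of such a loss is the Huber loss, which is quadratic for small residuals but only linear for large ones, so single corrupted observations cannot dominate the fit (an illustrative example; the paper's results are not specific to this loss):

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    r = np.abs(residual)
    return np.where(r <= delta,
                    0.5 * r**2,                  # quadratic for small errors
                    delta * (r - 0.5 * delta))   # linear for outliers

print(huber(np.array([0.1, 5.0])))   # the outlier is penalized only linearly
```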
no code implementations • 13 Sep 2020 • Sarah Friedrich, Gerd Antes, Sigrid Behr, Harald Binder, Werner Brannath, Florian Dumpert, Katja Ickstadt, Hans Kestler, Johannes Lederer, Heinz Leitgöb, Markus Pauly, Ansgar Steland, Adalbert Wilhelm, Tim Friede
The research on and application of artificial intelligence (AI) has triggered a comprehensive scientific, economic, social and political discussion.
no code implementations • 28 Jun 2020 • Mohamed Hebiri, Johannes Lederer
Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting.
no code implementations • 30 May 2020 • Mahsa Taheri, Fang Xie, Johannes Lederer
Neural networks have become standard tools in the analysis of data, but they lack comprehensive mathematical theories.
1 code implementation • 1 May 2020 • Mike Laszkiewicz, Asja Fischer, Johannes Lederer
Many machine learning algorithms are formulated as regularized optimization problems, but their performance hinges on a regularization parameter that needs to be calibrated to each application at hand.
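A standard instance of the problem: the lasso's output depends strongly on its regularization parameter, and cross-validation is the default (but computationally expensive) calibration. A minimal sketch with scikit-learn (illustrative only; not the calibration scheme developed in the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

fixed = Lasso(alpha=0.5).fit(X, y)   # one fixed, possibly poor, choice
tuned = LassoCV(cv=5).fit(X, y)      # alpha selected by 5-fold cross-validation
print(fixed.coef_[:2], tuned.coef_[:2], tuned.alpha_)
```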
1 code implementation • 27 Feb 2020 • Shih-Ting Huang, Fang Xie, Johannes Lederer
Ridge estimators regularize the squared Euclidean lengths of parameters.
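Concretely, the ridge estimator minimizes ||y - Xb||^2 + lam * ||b||^2 over b and has a closed form; a minimal sketch:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Closed-form ridge estimator (X'X + lam * I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -1.0, 0.0, 0.0, 2.0]) + rng.normal(size=50)
print(ridge(X, y))   # shrunk toward zero relative to least squares
```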
1 code implementation • 23 Sep 2019 • Shih-Ting Huang, Yannick Düren, Kristoffer H. Hellton, Johannes Lederer
Personalized medicine has become an important part of modern healthcare, for instance in predicting individual drug responses based on genomic information.
1 code implementation • 8 Jul 2019 • Lu Yu, Tobias Kaufmann, Johannes Lederer
The increasing availability of data has generated unprecedented prospects for network analyses in many biological fields, such as neuroscience (e.g., brain networks), genomics (e.g., gene-gene interaction networks), and ecology (e.g., species interaction networks).
1 code implementation • 8 Jul 2019 • Fang Xie, Johannes Lederer
We support our method both in theory and in simulations, and we show that it can lead to new discoveries on microbiome data from the American Gut Project.
no code implementations • 9 Oct 2017 • Rui Zhuang, Johannes Lederer
Maximum regularized likelihood estimators (MRLEs) are arguably the most established class of estimators in high-dimensional statistics.
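Schematically, an MRLE combines a negative log-likelihood with a penalty (notation ours, for illustration):

```latex
\hat{\beta} \in \operatorname*{arg\,min}_{\beta}
  \Bigl\{ -\log \mathcal{L}(\beta)
        + \lambda \, \operatorname{pen}(\beta) \Bigr\},
\qquad \lambda \geq 0.
```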
1 code implementation • 10 Apr 2017 • Yunqi Bu, Johannes Lederer
In applications of graphical models, we typically have more information than just the samples themselves.
no code implementations • 1 Oct 2016 • Wei Li, Johannes Lederer
Feature selection is a standard approach to understanding and modeling high-dimensional classification data, but the corresponding statistical methods hinge on tuning parameters that are difficult to calibrate.
no code implementations • 23 Sep 2016 • Mahsa Taheri, Néhémy Lim, Johannes Lederer
Modern technologies are generating ever-increasing amounts of data.
no code implementations • 1 Aug 2016 • Johannes Lederer, Lu Yu, Irina Gaynanova
The abundance of high-dimensional data in the modern sciences has generated tremendous interest in penalized estimators such as the lasso, scaled lasso, square-root lasso, elastic net, and many others.
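For reference, the prototypical members of this family minimize a least-squares criterion plus a penalty; for example, the lasso and the elastic net solve (standard formulations, not notation taken from the paper):

```latex
\hat{\beta}_{\text{lasso}} \in \operatorname*{arg\,min}_{\beta}
  \bigl\{ \|y - X\beta\|_2^2 + \lambda \|\beta\|_1 \bigr\},
\qquad
\hat{\beta}_{\text{enet}} \in \operatorname*{arg\,min}_{\beta}
  \bigl\{ \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2 \bigr\}.
```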
1 code implementation • 22 Apr 2016 • Jacob Bien, Irina Gaynanova, Johannes Lederer, Christian Müller
The TREX is a recently introduced method for performing sparse high-dimensional regression.
no code implementations • 27 Oct 2014 • Johannes Lederer, Christian Müller
We introduce Graphical TREX (GTREX), a novel method for graph estimation in high-dimensional Gaussian graphical models.
no code implementations • 16 Sep 2014 • Johannes Lederer, Sergio Guadarrama
Sparse Filtering is a popular feature learning algorithm for image classification pipelines.
no code implementations • 2 Apr 2014 • Johannes Lederer, Christian Müller
The Square-Root Lasso, however, still requires the calibration of a tuning parameter to all other aspects of the model.
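For reference, the Square-Root Lasso replaces the squared loss with its square root, which makes the optimal tuning parameter independent of the noise level, yet one tuning parameter remains (standard formulation):

```latex
\hat{\beta}_{\sqrt{\text{lasso}}} \in \operatorname*{arg\,min}_{\beta}
  \bigl\{ \|y - X\beta\|_2 + \lambda \|\beta\|_1 \bigr\}.
```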
no code implementations • 7 Feb 2014 • Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer
Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.