no code implementations • 29 Sep 2023 • Maximilian Schambach, Dominique Paul, Johannes S. Otterbach
To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning by utilizing table-specific tokenizers and a shared Transformer backbone.
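The described split — a tokenizer per table, one shared backbone — can be sketched in a few lines. This is a minimal, dependency-free illustration, not the authors' implementation: the linear per-feature tokenizers, the embedding size `D`, and the single attention-style mixing step standing in for the shared Transformer are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared embedding dimension of the backbone (assumed value)

def make_tokenizer(n_features):
    """Table-specific tokenizer: one linear embedding per feature column.
    The weights are hypothetical stand-ins for learned parameters."""
    w = rng.normal(size=(n_features, D))
    b = rng.normal(size=(n_features, D))
    # Each scalar feature value becomes one D-dimensional token.
    return lambda row: row[:, None] * w + b  # -> (n_features, D)

def shared_backbone(tokens):
    """Stand-in for the shared Transformer: a single self-attention-like
    mixing step so the sketch stays self-contained."""
    scores = tokens @ tokens.T / np.sqrt(D)
    attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return attn @ tokens

# Two tables with different schemas feed the same backbone.
tok_a = make_tokenizer(n_features=5)
tok_b = make_tokenizer(n_features=8)
row_a, row_b = rng.normal(size=5), rng.normal(size=8)
out_a = shared_backbone(tok_a(row_a))  # (5, D)
out_b = shared_backbone(tok_b(row_b))  # (8, D)
```

The key point the sketch shows: the backbone never sees table-specific feature counts, only sequences of D-dimensional tokens, which is what makes cross-table sharing possible.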
1 code implementation • NeurIPS 2023 • Julien Siems, Konstantin Ditschuneit, Winfried Ripken, Alma Lindborg, Maximilian Schambach, Johannes S. Otterbach, Martin Genzel
Generalized Additive Models (GAMs) have recently experienced a resurgence in popularity due to their interpretability, which arises from expressing the target value as a sum of non-linear transformations of the features.
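The additive structure described above — the target as a bias plus a sum of per-feature non-linear transformations — can be made concrete with a small sketch. The shape functions chosen here (`sin`, `square`) and the helper name `gam_predict` are illustrative assumptions, not part of the paper:

```python
import numpy as np

def gam_predict(x, shape_functions, bias=0.0):
    """GAM-style prediction: bias + sum_i f_i(x[:, i]).

    x: (n_samples, n_features); shape_functions: one callable per feature.
    """
    contributions = [f(x[:, i]) for i, f in enumerate(shape_functions)]
    return bias + np.sum(contributions, axis=0)

# Two features with hand-picked non-linear shape functions (hypothetical).
shapes = [np.sin, np.square]
x = np.array([[0.0, 1.0], [np.pi / 2, 2.0]])
pred = gam_predict(x, shapes, bias=1.0)
# -> [2.0, 6.0]: e.g. 1.0 + sin(0) + 1**2 = 2.0
```

The interpretability claim follows directly from this structure: each feature's effect on the prediction is a univariate function f_i that can be inspected or plotted in isolation.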
1 code implementation • 14 Apr 2023 • Alexander Koenig, Maximilian Schambach, Johannes Otterbach
The STEGO method for unsupervised semantic segmentation contrastively distills feature correspondences of a DINO-pre-trained Vision Transformer and recently set a new state of the art.
no code implementations • 18 Mar 2023 • Julien Siems, Maximilian Schambach, Sebastian Schulze, Johannes S. Otterbach
In this work, we focus on developing dynamic inventory ordering policies for a multi-echelon, i.e., multi-stage, supply chain.
1 code implementation • 18 Mar 2021 • Maximilian Schambach, Jiayang Shi, Michael Heizmann
In this application, the spectrally coded light field camera can be interpreted as a single-shot spectral depth camera.
no code implementations • 31 Dec 2019 • Maximilian Schambach, Fernando Puente León
To quantify the performance of the algorithms, we propose an evaluation pipeline utilizing application-specific ray-traced white images with known microlens positions.