Search Results for author: Benedikt Alkin

Found 4 papers, 3 papers with code

Vision-LSTM: xLSTM as Generic Vision Backbone

no code implementations • 6 Jun 2024 • Benedikt Alkin, Maximilian Beck, Korbinian Pöppel, Sepp Hochreiter, Johannes Brandstetter

Transformers are widely used as generic backbones in computer vision, despite having been initially introduced for natural language processing.

Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators

1 code implementation • 19 Feb 2024 • Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter

This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar.

Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

1 code implementation • 20 Apr 2023 • Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter

In this work, we study how to combine the efficiency and scalability of masked image modeling (MIM) with the ability of instance discrimination (ID) to perform downstream classification in the absence of large amounts of labeled data.

Ranked #1 on Image Clustering on Imagenet-dog-15 (using extra training data)

Clustering • Contrastive Learning • +2
