Search Results for author: Lukas Miklautz

Found 4 papers, 3 papers with code

MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations

1 code implementation • 15 Feb 2024 • Benedikt Alkin, Lukas Miklautz, Sepp Hochreiter, Johannes Brandstetter

The motivation behind MIM-Refiner is rooted in the insight that optimal representations within masked image modeling (MIM) models generally reside in intermediate layers.

Tasks: Contrastive Learning, Image Clustering, +1
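
The core observation, that mid-depth features can beat last-layer ones, is straightforward to check empirically. Below is a minimal sketch (not the MIM-Refiner method itself) that hooks every transformer block of a pre-trained ViT and collects per-block CLS features for probing; the torchvision model choice and the dummy batch are assumptions for illustration.

```python
# Minimal sketch: inspect per-block features of a pre-trained ViT.
# This is NOT the MIM-Refiner method, just a way to probe representation
# quality across depths; model and batch are illustrative assumptions.
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()

features = {}  # block index -> CLS-token features

def make_hook(idx):
    def hook(module, inputs, output):
        # output: (batch, 1 + num_patches, hidden); keep the CLS token
        features[idx] = output[:, 0].detach()
    return hook

for idx, block in enumerate(model.encoder.layers):
    block.register_forward_hook(make_hook(idx))

with torch.no_grad():
    _ = model(torch.randn(8, 3, 224, 224))  # dummy batch of images

for idx, feats in features.items():
    print(f"block {idx:2d}: feature shape {tuple(feats.shape)}")
# In practice one would fit a linear probe (or run k-means) on each
# block's features and pick the depth with the best validation score.
```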

Text-Guided Image Clustering

1 code implementation • 5 Feb 2024 • Andreas Stephan, Lukas Miklautz, Kevin Sidak, Jan Philip Wahle, Bela Gipp, Claudia Plant, Benjamin Roth

We therefore propose Text-Guided Image Clustering, i.e., generating text using image captioning and visual question-answering (VQA) models and subsequently clustering the generated text.

Tasks: Clustering, Image Captioning, +3
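
A minimal sketch of the text-guided idea, not the paper's exact pipeline: caption each image, embed the captions, and cluster the text. The BLIP captioning model, the placeholder image paths, and the TF-IDF/k-means choices are all assumptions.

```python
# Sketch only: cluster images via their generated captions instead of
# pixel features. Captioner, paths, and clustering setup are assumptions.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image_paths = ["img_0.jpg", "img_1.jpg", "img_2.jpg"]  # placeholder paths
captions = [captioner(p)[0]["generated_text"] for p in image_paths]

X = TfidfVectorizer().fit_transform(captions)       # embed the generated text
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)

for path, caption, label in zip(image_paths, captions, labels):
    print(f"cluster {label}: {path} -> {caption!r}")
```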

Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

1 code implementation • 20 Apr 2023 • Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter

In this work, we study how to combine the efficiency and scalability of masked image modeling (MIM) with the ability of instance discrimination (ID) to perform downstream classification in the absence of large amounts of labeled data.

Ranked #1 on Image Clustering on Imagenet-dog-15 (using extra training data)

Tasks: Clustering, Contrastive Learning, +2
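
As a rough illustration of contrastive tuning on top of a pre-trained encoder, here is a generic InfoNCE training step, not the paper's exact MAE-CT recipe; the stand-in encoder, projector sizes, and temperature are assumptions.

```python
# Generic InfoNCE step as a sketch of contrastive tuning; the real method
# starts from a pre-trained MAE encoder, which the toy encoder stands in for.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE between two augmented views; z1, z2: (batch, dim)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # (batch, batch) similarities
    targets = torch.arange(z1.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

encoder = torch.nn.Sequential(                # stand-in for a pre-trained encoder
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 256)
)
projector = torch.nn.Linear(256, 128)
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-4
)

# Two augmented views of the same batch (random tensors as placeholders).
view1, view2 = torch.randn(16, 3, 32, 32), torch.randn(16, 3, 32, 32)
loss = info_nce(projector(encoder(view1)), projector(encoder(view2)))
opt.zero_grad()
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.4f}")
```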

Deep Clustering With Consensus Representations

no code implementations • 13 Oct 2022 • Lukas Miklautz, Martin Teuffenbach, Pascal Weber, Rona Perjuci, Walid Durani, Christian Böhm, Claudia Plant

Further, we propose DECCS (Deep Embedded Clustering with Consensus representationS), a deep consensus clustering method that learns a consensus representation by enhancing the embedded space to such a degree that all ensemble members agree on a common clustering result.

Tasks: Clustering, Clustering Ensemble, +1
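
The consensus idea can be sketched with a classic clustering-ensemble recipe: run several clusterers on the same embedding, build a co-association matrix from their partitions, and extract a consensus result from it. This is not DECCS itself (which learns the consensus representation end-to-end); the stand-in embedding, the ensemble members, and the final clusterer are assumptions.

```python
# Classic consensus clustering sketch (not DECCS): agreement between
# ensemble members is encoded in a co-association matrix, which is then
# clustered to get a consensus partition. Requires scikit-learn >= 1.2.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # stand-in embedding

members = [
    KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    AgglomerativeClustering(n_clusters=3).fit_predict(X),
    SpectralClustering(n_clusters=3, random_state=0).fit_predict(X),
]

# Co-association: fraction of members placing each pair in the same cluster.
co = np.mean([np.equal.outer(m, m) for m in members], axis=0)

consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - co)  # distance = 1 - agreement

for i, m in enumerate(members):
    print(f"member {i} vs consensus ARI: {adjusted_rand_score(m, consensus):.3f}")
```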
