no code implementations • RepL4NLP (ACL) 2022 • Andreas Stephan, Benjamin Roth
In this work, we explore a novel direction of generative modeling for weak supervision: Instead of modeling the output of the annotation process (the labeling function matches), we generatively model the input-side data distributions (the feature space) covered by labeling functions.
1 code implementation • 11 Mar 2024 • Lena Zellinger, Andreas Stephan, Benjamin Roth
We further observe that KGEs adapted with COULDD solidly detect plausible counterfactual changes to the graph that follow these patterns.
1 code implementation • 5 Feb 2024 • Andreas Stephan, Lukas Miklautz, Kevin Sidak, Jan Philip Wahle, Bela Gipp, Claudia Plant, Benjamin Roth
We therefore propose Text-Guided Image Clustering, i.e., generating text using image captioning and visual question-answering (VQA) models and subsequently clustering the generated text.
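The second stage of that pipeline can be illustrated with a minimal sketch. The helper names below are hypothetical, and the paper clusters model-generated captions with far richer text representations; here we stand in a simple word-set Jaccard similarity and a greedy grouping rule, assuming the captions have already been produced by a captioning/VQA model.

```python
def jaccard(a, b):
    """Word-set Jaccard similarity between two caption strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster_captions(captions, threshold=0.2):
    """Greedy clustering: join the first cluster whose representative
    caption is similar enough, otherwise start a new cluster."""
    clusters = []  # each cluster is a list of caption indices
    for i, cap in enumerate(captions):
        for cluster in clusters:
            if jaccard(cap, captions[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

captions = [
    "a dog running on the beach",
    "a dog playing on the sandy beach",
    "a plate of pasta with tomato sauce",
]
print(cluster_captions(captions))  # -> [[0, 1], [2]]
```

The point of the sketch is only that clustering moves from pixel space to text space: any text-similarity measure and clustering algorithm can be substituted for the toy ones used here.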
1 code implementation • 27 May 2023 • Dawei Zhu, Xiaoyu Shen, Marius Mosbach, Andreas Stephan, Dietrich Klakow
In this paper, we revisit the setup of these approaches and find that the benefits brought by these approaches are significantly overestimated.
1 code implementation • 25 Oct 2022 • Andreas Stephan, Vasiliki Kougia, Benjamin Roth
In this work, we provide a method for learning from weak labels by separating two types of complementary information associated with the labeling functions: information related to the target label and information specific to one labeling function only.
2 code implementations • 28 Apr 2022 • Andreas Stephan, Benjamin Roth
In this work, we explore a novel direction of generative modeling for weak supervision: Instead of modeling the output of the annotation process (the labeling function matches), we generatively model the input-side data distributions (the feature space) covered by labeling functions.
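The idea of modeling the input-side distribution covered by labeling functions can be sketched as follows. This is an illustrative stand-in, not the paper's exact model: we assume each label's matched feature vectors are fitted with an independent (diagonal) Gaussian, and a new input is assigned the label under whose model it has the highest log-likelihood.

```python
import math

def fit_gaussian(points):
    """Per-dimension mean and variance of the matched feature vectors."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    var = [sum((p[j] - mean[j]) ** 2 for p in points) / n + 1e-6
           for j in range(d)]  # small floor keeps variance positive
    return mean, var

def log_likelihood(x, model):
    """Log-density of x under a diagonal Gaussian (mean, var)."""
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# matched[label]: feature vectors of inputs matched by that label's
# labeling functions (toy 2-D data for illustration)
matched = {"pos": [[1.0, 1.1], [0.9, 1.0]],
           "neg": [[-1.0, -0.9], [-1.1, -1.0]]}
models = {y: fit_gaussian(pts) for y, pts in matched.items()}

x = [0.95, 1.05]
pred = max(models, key=lambda y: log_likelihood(x, models[y]))
print(pred)  # -> pos
```

Modeling the inputs rather than the labeling-function outputs means an unmatched example can still be scored against each label's feature distribution.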
1 code implementation • ACL (RepL4NLP) 2021 • Anastasiia Sedova, Andreas Stephan, Marina Speranskaya, Benjamin Roth
Strategies for improving the training and prediction quality of weakly supervised machine learning models vary in how much they are tailored to a specific task or integrated with a specific model architecture.
no code implementations • 21 Dec 2019 • Maziar Sahamkhadam, Andreas Stephan
Out-of-sample portfolio back-testing shows that vine copulas reduce portfolio risk better than simple copulas.