Search Results for author: Ivan Srba

Found 17 papers, 9 papers with code

Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance

no code implementations · 20 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova

When performance variance is taken into consideration, the number of required labels increases on average by 100–200%, and even up to 1500% in specific cases.

In-Context Learning · Language Modelling +1
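To make the quoted figures concrete, a back-of-the-envelope reading of the claim (the base budget of 100 labels is purely illustrative, echoing the paper's title):

```python
# Illustrative arithmetic only: assume a break-even budget of 100 labels.
base = 100
print(base * (1 + 1.0), base * (1 + 2.0))  # +100-200% -> 200.0, 300.0 labels
print(base * (1 + 15.0))                   # +1500%    -> 1600.0 labels
```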

On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices

no code implementations · 20 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova

To measure the true effects of an individual randomness factor, our method mitigates the effects of other factors and observes how the performance varies across multiple runs.

In-Context Learning · Meta-Learning +2
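A minimal sketch of the protocol the snippet describes, assuming a hypothetical `train_and_evaluate` callable that accepts a dictionary of seeds (data split, sample order, model initialisation, ...):

```python
import statistics

def isolate_factor_effect(train_and_evaluate, target_seeds, fixed_seeds):
    """Estimate one randomness factor's effect in isolation.

    All seeds except the target factor's are held fixed; the performance
    spread across runs then approximates that factor's isolated effect.
    """
    scores = [
        train_and_evaluate({**fixed_seeds, "target": seed})
        for seed in target_seeds
    ]
    return statistics.mean(scores), statistics.stdev(scores)
```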

Automatic Combination of Sample Selection Strategies for Few-Shot Learning

no code implementations · 5 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren

In few-shot learning, such as meta-learning, few-shot fine-tuning or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success.

Few-Shot Learning · In-Context Learning
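A minimal sketch of combining selection strategies, assuming hypothetical per-sample scoring functions and fixed (not learned) mixing weights:

```python
import numpy as np

def select_few_shot(candidates, strategies, weights, k=5):
    """Pick k training samples via a weighted mix of strategy scores.

    `strategies` is a list of hypothetical scoring functions, each mapping
    the candidate pool to one usefulness score per sample; `weights` are
    their mixing coefficients, fixed here for illustration.
    """
    scores = sum(w * np.asarray(s(candidates), dtype=float)
                 for s, w in zip(strategies, weights))
    top = np.argsort(scores)[-k:][::-1]  # indices of the k best-scoring samples
    return [candidates[i] for i in top]
```

For instance, one might mix a random-baseline scorer with a similarity-to-test-instance scorer and tune the weights on held-out tasks.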

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

1 code implementation · 12 Jan 2024 · Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky

The latest generative large language models (LLMs) have found use in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.

Text Augmentation
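A minimal sketch of this augmentation loop with a taboo-word diversity incentive, assuming a hypothetical prompt-in/text-out `llm` callable:

```python
def paraphrase_augment(llm, seed_texts, n_variants=3):
    """LLM-paraphrase a small labelled pool to get extra fine-tuning data.

    The taboo-word list is one example of a diversity incentive: banning
    words that appeared in earlier paraphrases nudges later ones towards
    new phrasings.
    """
    augmented = []
    for text in seed_texts:
        taboo = set()
        for _ in range(n_variants):
            ban = ", ".join(sorted(taboo)) if taboo else "none"
            variant = llm(
                f"Paraphrase the text below. Avoid these words: {ban}.\n\n{text}"
            )
            augmented.append(variant)
            taboo.update(w.lower().strip(".,") for w in variant.split() if len(w) > 4)
    return augmented
```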

Disinformation Capabilities of Large Language Models

1 code implementation · 15 Nov 2023 · Ivan Vykopal, Matúš Pikuliak, Ivan Srba, Robert Moro, Dominik Macko, Maria Bielikova

Automated disinformation generation is often listed as an important risk associated with large language models (LLMs).

A Ship of Theseus: Curious Cases of Paraphrasing in LLM-Generated Texts

no code implementations · 14 Nov 2023 · Nafis Irtiza Tripto, Saranya Venkatraman, Dominik Macko, Robert Moro, Ivan Srba, Adaku Uchendu, Thai Le, Dongwon Lee

In the realm of text manipulation and linguistic transformation, the question of authorship has always been a subject of fascination and philosophical inquiry.

Is it indeed bigger better? The comprehensive study of claim detection LMs applied for disinformation tackling

no code implementations · 10 Nov 2023 · Martin Hyben, Sebastian Kula, Ivan Srba, Robert Moro, Jakub Simko

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection.

MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark

1 code implementation · 20 Oct 2023 · Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova

There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of detectors of machine-generated text in multilingual settings.

Benchmarking · Text Detection

KInITVeraAI at SemEval-2023 Task 3: Simple yet Powerful Multilingual Fine-Tuning for Persuasion Techniques Detection

1 code implementation · 24 Apr 2023 · Timo Hromadka, Timotej Smolen, Tomas Remis, Branislav Pecher, Ivan Srba

This paper presents the best-performing solution to Subtask 3 of SemEval-2023 Task 3, dedicated to the detection of persuasion techniques.
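A minimal sketch of the kind of multilingual, multi-label fine-tuning the snippet describes, using Hugging Face transformers; the checkpoint, label count and task framing here are assumptions rather than the authors' exact configuration:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "xlm-roberta-large"  # assumed multilingual backbone
NUM_TECHNIQUES = 23          # placeholder count of persuasion-technique labels

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=NUM_TECHNIQUES,
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

batch = tokenizer(["An example paragraph ..."], return_tensors="pt",
                  truncation=True, padding=True)
labels = torch.zeros((1, NUM_TECHNIQUES))   # multi-hot targets per paragraph
loss = model(**batch, labels=labels).loss   # optimise in a standard training loop
```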

Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems

no code implementations · 22 Nov 2022 · Andrea Hrckova, Robert Moro, Ivan Srba, Jakub Simko, Maria Bielikova

Second, we have identified fact-checkers' needs and pains, focusing on so-far-unexplored dimensions and emphasizing the needs of fact-checkers from Central and Eastern Europe as well as from low-resource language groups. These have implications for the development of new resources (datasets) as well as for the focus of AI research in this domain.

Fact Checking

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

1 code implementation · 18 Oct 2022 · Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

We also observe a sudden decrease of the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations.

Misinformation

An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes

1 code implementation · 25 Mar 2022 · Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova

We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content (for various topics).

Misinformation
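A minimal sketch of one such agent run, assuming hypothetical `player.watch` (browser automation returning the recommended videos shown) and `annotate` (labelling) helpers:

```python
def run_audit_agent(player, promoting_videos, debunking_videos, annotate):
    """One sock-puppet run: watch misinformation-promoting videos, then
    debunking ones, tracking the share of misinformative recommendations
    after each watch.
    """
    trace = []
    for phase, playlist in (("promoting", promoting_videos),
                            ("debunking", debunking_videos)):
        for video in playlist:
            recommendations = player.watch(video)
            labels = [annotate(rec) for rec in recommendations]
            share = labels.count("misinformation") / max(len(labels), 1)
            trace.append((phase, video, share))
    return trace
```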
