Search Results for author: Branislav Pecher

Found 9 papers, 5 papers with code

Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance

no code implementations • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova

When performance variance is taken into consideration, the number of required labels increases on average by 100-200% and even up to 1500% in specific cases.

In-Context Learning • Language Modelling • +3
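To make the percentages above concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical break-even point of 100 labels (a figure echoing the paper's title, not a measured value):

```python
# Hypothetical illustration: how variance-aware estimates inflate label requirements.
BREAK_EVEN = 100  # assumed break-even label count, taken from the paper's title

for pct in (100, 200, 1500):  # average and worst-case increases reported above
    required = BREAK_EVEN * (1 + pct / 100)
    print(f"+{pct}% increase -> {required:.0f} labels")
# +100% -> 200 labels, +200% -> 300 labels, +1500% -> 1600 labels
```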

On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices

no code implementations • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova

To measure the true effects of an individual randomness factor, our method mitigates the effects of other factors and observes how the performance varies across multiple runs.

In-Context Learning • Meta-Learning • +2
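As a rough illustration of that protocol (a sketch under assumed names, not the authors' code): fix the seeds controlling every other randomness factor, vary only the factor under investigation across repeated runs, and report the spread. The `train_and_eval` stub below stands in for a real training run:

```python
import random
import statistics

def train_and_eval(data_order_seed: int, init_seed: int) -> float:
    """Placeholder for a real training run; returns a validation score."""
    rng = random.Random(data_order_seed * 1009 + init_seed)
    return 0.80 + rng.uniform(-0.05, 0.05)  # stand-in for measured accuracy

# Isolate the effect of data ordering: fix the initialisation seed,
# vary only the ordering seed, and summarise performance across runs.
FIXED_INIT_SEED = 0
scores = [train_and_eval(data_order_seed=s, init_seed=FIXED_INIT_SEED)
          for s in range(10)]
print(f"mean={statistics.mean(scores):.3f}  std={statistics.stdev(scores):.3f}")
```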

Automatic Combination of Sample Selection Strategies for Few-Shot Learning

no code implementations • 5 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren

In few-shot learning, such as meta-learning, few-shot fine-tuning or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success.

Few-Shot Learning • In-Context Learning
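The general idea of combining sample selection strategies can be sketched as a weighted blend of per-sample scores; the strategy names and weights here are invented for illustration and are not the paper's method:

```python
import numpy as np

def combined_selection(scores: dict, weights: dict, k: int) -> np.ndarray:
    """Blend per-sample scores from several strategies and pick the top-k samples."""
    total = sum(weights[name] * scores[name] for name in scores)
    return np.argsort(-total)[:k]

rng = np.random.default_rng(0)
# Hypothetical per-sample scores produced by two invented strategies.
scores = {"diversity": rng.random(50), "informativeness": rng.random(50)}
chosen = combined_selection(scores, {"diversity": 0.5, "informativeness": 0.5}, k=8)
print(sorted(chosen.tolist()))  # indices of the 8 samples selected for training
```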

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

1 code implementation • 12 Jan 2024 • Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky

The latest generative large language models (LLMs) have found application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.

Text Augmentation
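The paraphrase-then-fine-tune pattern itself is simple; below is a minimal sketch using the OpenAI client purely as an example backend. The model name and prompt are assumptions, and the paper's diversity incentives (e.g. taboo words or paraphrase hints) would be injected into the prompt:

```python
from openai import OpenAI  # example backend; any instruction-following LLM works

client = OpenAI()

def paraphrase(text: str, n: int = 3) -> list:
    """Ask the LLM for n paraphrases of a seed sample (prompt is illustrative)."""
    out = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model, not the one used in the paper
            messages=[{"role": "user",
                       "content": f"Paraphrase the following sentence:\n{text}"}],
        )
        out.append(resp.choices[0].message.content)
    return out

seed_samples = [("The battery dies within an hour.", "negative")]
augmented = [(p, label) for text, label in seed_samples for p in paraphrase(text)]
# `augmented` would then be used to fine-tune the downstream classifier.
```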

KInITVeraAI at SemEval-2023 Task 3: Simple yet Powerful Multilingual Fine-Tuning for Persuasion Techniques Detection

1 code implementation • 24 Apr 2023 • Timo Hromadka, Timotej Smolen, Tomas Remis, Branislav Pecher, Ivan Srba

This paper presents the best-performing solution to Subtask 3 of SemEval-2023 Task 3, dedicated to persuasion techniques detection.
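In the same spirit, a multilingual fine-tuning setup for multi-label persuasion technique detection could look like the sketch below with Hugging Face Transformers. This is not the authors' released code (which is linked above); the checkpoint and label count are assumptions:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "xlm-roberta-base"   # assumed multilingual checkpoint
NUM_TECHNIQUES = 23          # assumed size of the persuasion-technique label set

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=NUM_TECHNIQUES,
    problem_type="multi_label_classification",  # one text can use several techniques
)

batch = tokenizer(["Vote for us, everyone else already has!"],
                  return_tensors="pt", truncation=True)
logits = model(**batch).logits  # shape (1, NUM_TECHNIQUES); train with BCE loss
```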

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

1 code implementation • 18 Oct 2022 • Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

We also observe a sudden decrease of the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations.

Misinformation

An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes

1 code implementation • 25 Mar 2022 • Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova

We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content on various topics.

Misinformation
