Search Results for author: Maria Bielikova

Found 23 papers, 10 papers with code

On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices

no code implementations • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova

To measure the true effects of an individual randomness factor, our method mitigates the effects of other factors and observes how the performance varies across multiple runs.

In-Context Learning • Meta-Learning • +2
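
A minimal sketch of the factor-isolation idea the snippet above describes: vary only the seed of the investigated randomness factor while holding the others fixed, and measure how scores spread across runs. This is not the authors' exact protocol; the callable `train_and_eval` and its seed parameters are hypothetical stand-ins.

```python
import statistics

def measure_factor_effect(train_and_eval, factor_seeds, fixed_seed=0):
    """Estimate the effect of a single randomness factor by varying only its
    seed while keeping all other factors fixed, then measuring the spread of
    performance across runs. `train_and_eval` is a hypothetical callable that
    takes per-factor seeds and returns a performance score."""
    scores = []
    for seed in factor_seeds:
        # Only the investigated factor (e.g., the data-split seed) varies;
        # the remaining factors (e.g., model init, sample order) stay fixed.
        scores.append(train_and_eval(investigated_seed=seed, other_seed=fixed_seed))
    return statistics.mean(scores), statistics.stdev(scores)
```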

Fine-Tuning, Prompting, In-Context Learning and Instruction-Tuning: How Many Labelled Samples Do We Need?

no code implementations • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova

When solving a task with limited labelled data, researchers can either use a general large language model without further updates, or use the few labelled examples to tune a specialised smaller model.

In-Context Learning • Language Modelling • +1
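
To make the "general model without updates" option concrete, here is a generic few-shot prompt builder for in-context learning: the labelled samples are placed in the prompt rather than used for gradient updates. The instruction text and label format are illustrative assumptions, not taken from the paper.

```python
def build_icl_prompt(examples, query, instruction="Classify the sentiment."):
    """Assemble a few-shot prompt for in-context learning: the general model
    is never updated; the labelled samples are shown directly in the prompt."""
    parts = [instruction]
    for text, label in examples:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")  # the model completes the label
    return "\n\n".join(parts)

# Usage with two labelled samples (hypothetical data):
labelled = [("great movie", "positive"), ("dull plot", "negative")]
prompt = build_icl_prompt(labelled, "surprisingly fun")
```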

Automatic Combination of Sample Selection Strategies for Few-Shot Learning

no code implementations • 5 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren

In few-shot learning, such as meta-learning, few-shot fine-tuning or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success.

Few-Shot Learning • In-Context Learning
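
One simple way to combine sample selection strategies, sketched under the assumption of a weighted sum over normalised per-sample scores; the paper's automatic combination is likely more sophisticated, and the strategy names and weights below are hypothetical.

```python
import numpy as np

def combine_selection_scores(strategy_scores, weights):
    """Combine per-sample scores from several selection strategies into one
    ranking via a weighted sum of min-max-normalised scores."""
    total = np.zeros(len(next(iter(strategy_scores.values()))), dtype=float)
    for name, scores in strategy_scores.items():
        s = np.asarray(scores, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalise to [0, 1]
        total += weights.get(name, 1.0) * s
    return np.argsort(-total)  # sample indices, best first

# Usage (hypothetical strategies and scores):
ranked = combine_selection_scores(
    {"diversity": [0.2, 0.9, 0.4], "informativeness": [0.7, 0.1, 0.8]},
    weights={"diversity": 0.5, "informativeness": 0.5},
)
```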

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

1 code implementation • 12 Jan 2024 • Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky

The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.

Text Augmentation
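
The augmentation pipeline the snippet describes, reduced to a loop: each labelled text is paraphrased by an LLM several times and the label is carried over. A minimal sketch only; `paraphrase` is a hypothetical callable wrapping an LLM, and the taboo-word comment is one example of a diversity incentive, not the paper's full set.

```python
def augment_with_paraphrases(samples, paraphrase, n_variants=3):
    """Grow a small labelled set by LLM-paraphrasing each text; the original
    label is reused for every paraphrase."""
    augmented = list(samples)
    for text, label in samples:
        for i in range(n_variants):
            # A diversity incentive could be applied here, e.g., asking the
            # LLM to avoid wording it used in earlier variants.
            augmented.append((paraphrase(text, variant=i), label))
    return augmented
```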

Disinformation Capabilities of Large Language Models

1 code implementation • 15 Nov 2023 • Ivan Vykopal, Matúš Pikuliak, Ivan Srba, Robert Moro, Dominik Macko, Maria Bielikova

Automated disinformation generation is often listed as an important risk associated with large language models (LLMs).

MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark

1 code implementation • 20 Oct 2023 • Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova

There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of detectors of machine-generated text in multilingual settings.

Benchmarking • Text Detection
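
A sketch of the per-language evaluation such a multilingual benchmark implies: score a detector separately for each language and compare. This assumes a hypothetical `detector` callable and a `dataset` laid out as language code to (texts, labels); only the `roc_auc_score` call is a real library API.

```python
from sklearn.metrics import roc_auc_score

def evaluate_detector_per_language(detector, dataset):
    """Evaluate a machine-generated-text detector separately per language.
    `dataset` maps a language code to (texts, labels), label 1 meaning
    machine-generated (a hypothetical layout)."""
    results = {}
    for lang, (texts, labels) in dataset.items():
        scores = [detector(t) for t in texts]  # higher = more likely machine
        results[lang] = roc_auc_score(labels, scores)
    return results
```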

FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

no code implementations • 11 Aug 2023 • Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mohammed Ammar, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans

This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.

Fairness

Eye Tracking as a Source of Implicit Feedback in Recommender Systems: A Preliminary Analysis

no code implementations • 12 May 2023 • Santiago de Leon-Martinez, Robert Moro, Maria Bielikova

Eye tracking in recommender systems can provide an additional source of implicit feedback, while helping to evaluate other sources of feedback.

Collaborative Filtering • Movie Recommendation • +1
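
One common way to turn eye tracking into the implicit feedback the snippet mentions is to aggregate fixation durations per item and threshold them. A sketch under that assumption; the `fixations` layout and the dwell threshold are hypothetical, not the paper's procedure.

```python
from collections import defaultdict

def gaze_to_implicit_feedback(fixations, dwell_threshold_ms=300):
    """Turn eye-tracking fixations into implicit feedback: total dwell time
    per item, thresholded into a weak preference signal. `fixations` is a
    list of (item_id, fixation_duration_ms) pairs."""
    dwell = defaultdict(float)
    for item_id, duration_ms in fixations:
        dwell[item_id] += duration_ms
    # Items looked at longer than the threshold count as positive feedback.
    return {item: t for item, t in dwell.items() if t >= dwell_threshold_ms}
```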

Searching for Discriminative Words in Multidimensional Continuous Feature Space

no code implementations • 26 Nov 2022 • Marius Sajgalik, Michal Barla, Maria Bielikova

We demonstrate the effectiveness of our approach by achieving state-of-the-art results on a text categorisation task using just a small number of extracted keywords.

Part-Of-Speech Tagging
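
One plausible reading of "discriminative words in continuous feature space", sketched as a margin score over class centroids: a word whose embedding is close to one class centroid and far from the rest is discriminative. This is an illustrative assumption, not the paper's actual scoring function; all inputs are hypothetical unit-normalised vectors.

```python
import numpy as np

def discriminative_score(word_vec, class_centroids):
    """Score a word by how unevenly its embedding aligns with per-class
    centroids: the margin of its best-matching class over the average."""
    sims = np.array([word_vec @ c for c in class_centroids])
    return sims.max() - sims.mean()  # large margin = discriminative word
```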

Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems

no code implementations • 22 Nov 2022 • Andrea Hrckova, Robert Moro, Ivan Srba, Jakub Simko, Maria Bielikova

Second, we identified fact-checkers' needs and pains, focusing on thus far unexplored dimensions and emphasizing the needs of fact-checkers from Central and Eastern Europe as well as from low-resource language groups; these have implications for the development of new resources (datasets) and for the focus of AI research in this domain.

Fact Checking

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

1 code implementation • 18 Oct 2022 • Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

We also observe a sudden decrease of the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations.

Misinformation
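
The audit loop such agent-based studies imply, in outline: an agent watches a sequence of promoting and then debunking videos, and after each watch the share of misinformative recommendations is recorded. A sketch only; `agent.watch` and `annotate` are hypothetical APIs, not the released code's interface.

```python
def run_bubble_audit(agent, promoting_videos, debunking_videos, annotate):
    """Watch misinformation-promoting videos, then debunking ones, recording
    how the share of misinformative recommendations evolves over time.
    `agent.watch` returns recommended videos; `annotate` labels each one as
    'promoting', 'debunking', or 'neutral'."""
    history = []
    for video in promoting_videos + debunking_videos:
        recommendations = agent.watch(video)
        labels = [annotate(r) for r in recommendations]
        share = labels.count("promoting") / max(len(labels), 1)
        history.append((video, share))  # filter-bubble strength over time
    return history
```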

An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes

1 code implementation • 25 Mar 2022 • Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova

We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content (for various topics).

Misinformation

Exploring Customer Price Preference and Product Profit Role in Recommender Systems

no code implementations • 13 Mar 2022 • Michal Kompan, Peter Gaspar, Jakub Macina, Matus Cimerman, Maria Bielikova

We propose an adjustment of a predicted ranking for score-based recommender systems and explore the effect of the profit and customers' price preferences on two industry datasets from the fashion domain.

Recommendation Systems
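
A minimal sketch of the ranking adjustment the snippet describes: blend the recommender's predicted relevance with profit and the customer's price preference before sorting. The linear blend, the dict keys, and the weight `alpha` are illustrative assumptions, not the paper's exact formula.

```python
def profit_adjusted_ranking(items, alpha=0.3):
    """Re-rank score-based recommendations by blending predicted relevance
    with item profit and the customer's price-preference fit. `items` is a
    list of dicts with keys 'relevance', 'profit', and 'price_fit', each
    normalised to [0, 1]; alpha weights the business objective."""
    def adjusted(item):
        business = 0.5 * item["profit"] + 0.5 * item["price_fit"]
        return (1 - alpha) * item["relevance"] + alpha * business
    return sorted(items, key=adjusted, reverse=True)
```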

A Study of Fake News Reading and Annotating in Social Media Context

no code implementations • 26 Sep 2021 • Jakub Simko, Patrik Racsko, Matus Tomlein, Martin Hanakova, Robert Moro, Maria Bielikova

In this paper, we present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.

Misinformation

The Cold-start Problem: Minimal Users' Activity Estimation

no code implementations • 31 May 2021 • Juraj Visnovsky, Ondrej Kassak, Michal Kompan, Maria Bielikova

The cold-start problem, which arises upon new users' arrival, is one of the fundamental problems in today's recommender approaches.

Clustering
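
Given the paper's Clustering tag, one plausible shape for minimal-activity estimation: wait until a new user has logged some minimal number of interactions, then assign them to the nearest behavioural cluster. A sketch under that assumption; the threshold, the count-vector representation, and the popularity fallback are hypothetical.

```python
import numpy as np

def assign_new_user(interaction_vector, cluster_centroids, min_interactions=5):
    """Cold-start sketch: once a new user has a minimal amount of activity,
    map them to the nearest behavioural cluster and serve that cluster's
    recommendations. `interaction_vector` is a per-item count vector."""
    if np.count_nonzero(interaction_vector) < min_interactions:
        return None  # too little activity: fall back to popularity-based lists
    dists = np.linalg.norm(cluster_centroids - interaction_vector, axis=1)
    return int(np.argmin(dists))  # index of the closest cluster
```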
