Search Results for author: Lama Alkhaled

Found 10 papers, 5 papers with code

Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead

1 code implementation • 7 Apr 2024 • Irene Pagliai, Goya van Boven, Tosin Adewumi, Lama Alkhaled, Namrata Gurung, Isabella Södergren, Elisa Barney

We introduce new large labeled datasets on bias in 3 languages and show in experiments that bias exists in all 10 datasets of 5 languages evaluated, including benchmark datasets on the English GLUE/SuperGLUE leaderboards.

On the Limitations of Large Language Models (LLMs): False Attribution

no code implementations • 6 Apr 2024 • Tosin Adewumi, Nudrat Habib, Lama Alkhaled, Elisa Barney

We then randomly sampled 162 chunks from each of the annotated books for human evaluation, based on an error margin of 7% and a confidence level of 95% for the book with the most chunks (Great Expectations by Charles Dickens, with 922 chunks); a sketch of the underlying sample-size calculation is given below.

Author Attribution • Hallucination
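
The 162-chunk figure above follows from the standard sample-size formula with a finite-population correction. A minimal sketch of that calculation is shown below; the z-score of 1.96 (for 95% confidence) and the conservative proportion p = 0.5 are conventional defaults assumed here, not values quoted from the paper.

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Cochran's sample-size formula with a finite-population correction.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most conservative
    assumed proportion. Both are assumed defaults, not values from the paper.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite-population correction

# Great Expectations: 922 chunks, 7% error margin, 95% confidence level.
print(sample_size(922, 0.07))  # -> 162
```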

Vehicle Detection Performance in Nordic Region

no code implementations • 22 Mar 2024 • Hamam Mokayed, Rajkumar Saini, Oluwatosin Adewumi, Lama Alkhaled, Bjorn Backe, Palaiahnakote Shivakumara, Olle Hagner, Yan Chai Hum

This paper addresses the critical challenge of vehicle detection in the harsh winter conditions in the Nordic regions, characterized by heavy snowfall, reduced visibility, and low lighting.

Data Augmentation • Transfer Learning

Instruction Makes a Difference

1 code implementation • 1 Feb 2024 • Tosin Adewumi, Nudrat Habib, Lama Alkhaled, Elisa Barney

We introduce the Instruction Document Visual Question Answering (iDocVQA) dataset and the Large Language Document (LLaDoc) model, for training Language-Vision (LV) models for document analysis and predictions on document images, respectively.

Hallucination • Instruction Following +2

ProCoT: Stimulating Critical Thinking and Writing of Students through Engagement with Large Language Models (LLMs)

no code implementations • 15 Dec 2023 • Tosin Adewumi, Lama Alkhaled, Claudia Buck, Sergio Hernandez, Saga Brilioth, Mkpe Kekung, Yelvin Ragimov, Elisa Barney

The results show two things: (1) ProCoT stimulates creative/critical thinking and writing in students through engagement with LLMs, when we compare LLM-only output to ProCoT output, and (2) ProCoT can prevent cheating, because of clear limitations in existing LLMs, when we compare students' ProCoT output to LLM ProCoT output.

Active Learning • Language Modelling +1

Robust and Fast Vehicle Detection using Augmented Confidence Map

no code implementations • 27 Apr 2023 • Hamam Mokayed, Palaiahnakote Shivakumara, Lama Alkhaled, Rajkumar Saini, Muhammad Zeshan Afzal, Yan Chai Hum, Marcus Liwicki

Vehicle detection in real-time scenarios is challenging because of the time constraints and the presence of multiple types of vehicles with different speeds, shapes, structures, etc.

Fast Vehicle Detection

Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets

2 code implementations • 28 Jan 2023 • Tosin Adewumi, Isabella Södergren, Lama Alkhaled, Sana Sabah Sabry, Foteini Liwicki, Marcus Liwicki

Hence, we also contribute a new, large Swedish bias-labelled dataset (of 2 million samples), translated from the English version, and train the SotA mT5 model on it; a minimal fine-tuning sketch is given below.

Bias Detection • Natural Language Inference +1
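
The Bipol abstract above reports training the SotA mT5 model on the translated Swedish bias-labelled data. The snippet below is a minimal, hypothetical sketch of what such text-to-text fine-tuning can look like with the Hugging Face transformers library; the checkpoint name, prompt format, label strings, and toy samples are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical text-to-text fine-tuning sketch; not the authors' code.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")  # assumed checkpoint
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

# Toy placeholders standing in for the Swedish bias-labelled samples.
train_pairs = [
    ("classify bias: example sentence one", "biased"),
    ("classify bias: example sentence two", "unbiased"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for text, label in train_pairs:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    labels = tokenizer(label, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice, the full 2-million-sample dataset would be batched and trained with something like transformers' Seq2SeqTrainer rather than a per-example loop.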

ML_LTU at SemEval-2022 Task 4: T5 Towards Identifying Patronizing and Condescending Language

no code implementations • SemEval (NAACL) 2022 • Tosin Adewumi, Lama Alkhaled, Hamam Mokayed, Foteini Liwicki, Marcus Liwicki

This paper describes the system used by the Machine Learning Group of LTU in subtask 1 of the SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection.
