1 code implementation • LTEDI (ACL) 2022 • Marion Bartl, Susan Leavy
This paper presents a new method for automatic detection of gendered terms in large-scale language datasets.
1 code implementation • 15 Sep 2023 • Abhishek Mandal, Susan Leavy, Suzanne Little
We examine bias amplification when models belonging to these two architectures are used as part of large multimodal models, evaluating the different image encoders of Contrastive Language-Image Pretraining (CLIP), a key component of generative models such as DALL-E and Stable Diffusion.
no code implementations • 2 Aug 2023 • Susan Leavy, Emilie Pine, Mark T Keane
We present a text mining system to support the exploration of large volumes of text detailing the findings of government inquiries.
no code implementations • 13 Jun 2023 • Susan Leavy, Gerardine Meaney, Karen Wade, Derek Greene
The increasing availability of digital collections of historical and contemporary literature presents a wealth of possibilities for new research in the humanities.
no code implementations • 26 Apr 2023 • Abhishek Mandal, Susan Leavy, Suzanne Little
In this paper, we propose Multimodal Composite Association Score (MCAS) as a new method of measuring gender bias in multimodal generative models.
no code implementations • 13 Sep 2022 • Susan Leavy
Recommender systems are becoming increasingly central as mediators of information with the potential to profoundly influence societal opinion.
1 code implementation • 28 Jun 2022 • Marion Bartl, Susan Leavy
…and nouns with lexical gender ('mother', 'boyfriend', 'policewoman', etc.).
no code implementations • 15 May 2020 • Susan Leavy
This paper presents research uncovering systematic gender bias in the representation of political leaders in the media, using artificial intelligence.
no code implementations • 14 May 2020 • Susan Leavy, Gerardine Meaney, Karen Wade, Derek Greene
Artificial Intelligence has the capacity to amplify and perpetuate societal biases and presents profound ethical implications for society.