no code implementations • EMNLP (spnlp) 2020 • Anh Duong Trinh, Robert J. Ross, John D. Kelleher
Scaling up dialogue state tracking to multiple domains is challenging due to the growth in the number of variables being tracked.
no code implementations • NAACL (SIGMORPHON) 2022 • Patrick Cormac English, John D. Kelleher, Julie Carson-Berndsen
In recent years large transformer model architectures have become available which provide a novel means of generating high-quality vector representations of speech audio.
no code implementations • 4 Mar 2024 • Vasudevan Nedumpozhimana, John D. Kelleher
Transformer-based Neural Language Models achieve state-of-the-art performance on various natural language processing tasks.
no code implementations • 20 Feb 2024 • Ammar N. Abbas, Chidera W. Amazu, Joseph Mietkiewicz, Houda Briwa, Andres Alonzo Perez, Gabriele Baldissone, Micaela Demichela, Georgios G. Chasparis, John D. Kelleher, Maria Chiara Leva
These findings are particularly relevant for predicting an individual participant's overall performance and their capacity to successfully handle a plant upset, and the alarms connected to it, using process and human-machine interaction logs in real time.
no code implementations • 8 Feb 2024 • Jeffrey Sardina, John D. Kelleher, Declan O'Sullivan
Our experiments on the UMLS dataset show that a single TWIG neural network can predict the results of the state-of-the-art ComplEx-N3 KGE model nearly exactly across all hyperparameter configurations.
1 code implementation • 28 Oct 2023 • Ammar N. Abbas, Georgios C. Chasparis, John D. Kelleher
Deep reinforcement learning has pioneered solving this problem without relying on a physical model of the complex system, learning instead simply by interacting with it.
Ranked #1 on Decision Making on NASA C-MAPSS
1 code implementation • 15 Oct 2023 • Ammar N. Abbas, Georgios C. Chasparis, John D. Kelleher
Deep reinforcement learning has the potential to address these problems by learning optimal control policies through exploration in an environment.
no code implementations • 2 May 2023 • Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso-Moral, Mohammad Arvan, Anouck Braggaar, Mark Cieliebak, Elizabeth Clark, Kees Van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Mingqi Gao, Albert Gatt, Dimitra Gkatzia, Javier González-Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubicka, Emiel Krahmer, Huiyuan Lai, Chris van der Lee, Yiru Li, Saad Mahamood, Margot Mieskes, Emiel van Miltenburg, Pablo Mosteiro, Malvina Nissim, Natalie Parde, Ondřej Plátek, Verena Rieser, Jie Ruan, Joel Tetreault, Antonio Toral, Xiaojun Wan, Leo Wanner, Lewis Watson, Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible.
no code implementations • 27 Apr 2023 • Filip Klubička, Vasudevan Nedumpozhimana, John D. Kelleher
The goal of this paper is to learn more about how idiomatic information is structurally encoded in embeddings, using a structural probing method.
1 code implementation • 30 Jan 2023 • Yasmin Moslem, Rejwanul Haque, John D. Kelleher, Andy Way
By feeding an LLM a prompt at inference time that consists of a list of translation pairs, the model can simulate the desired domain and style characteristics.
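The in-context approach described above amounts to assembling a few-shot prompt from in-domain translation pairs. A minimal sketch of such prompt construction; the language pair, example sentences, and format are illustrative assumptions, not the paper's actual data or template:

```python
# Hypothetical in-domain translation pairs (English -> French).
pairs = [
    ("The patient shows no symptoms.", "Le patient ne présente aucun symptôme."),
    ("Take one tablet daily.", "Prendre un comprimé par jour."),
]

def build_prompt(pairs, source_sentence):
    """Format translation pairs as few-shot examples, then append the
    new source sentence with an empty target slot for the LLM to fill."""
    blocks = [f"English: {src}\nFrench: {tgt}" for src, tgt in pairs]
    blocks.append(f"English: {source_sentence}\nFrench:")
    return "\n\n".join(blocks)

prompt = build_prompt(pairs, "Store in a cool, dry place.")
print(prompt)
```

The resulting string would then be passed to the LLM, which is expected to continue the pattern and thereby adopt the terminology and style of the demonstration pairs.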
no code implementations • 25 Jan 2023 • Filip Klubička, John D. Kelleher
Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding.
1 code implementation • 21 Oct 2022 • Filip Klubička, John D. Kelleher
Improving our understanding of how information is encoded in vector space can yield valuable interpretability insights.
1 code implementation • AMTA 2022 • Yasmin Moslem, Rejwanul Haque, John D. Kelleher, Andy Way
Preservation of domain knowledge from the source to target is crucial in any translation workflow.
no code implementations • 27 Jun 2022 • Ammar N. Abbas, Georgios Chasparis, John D. Kelleher
An open research question in deep reinforcement learning is how to focus the policy learning of key decisions within a sparse domain.
no code implementations • 6 Jun 2022 • Na Li, John D. Kelleher, Robert Ross
To this end, in this paper, we present our initial research centred on a user-avatar dialogue scenario that we have developed to study the manifestation of confusion and, in the long term, its mitigation.
no code implementations • 8 Dec 2020 • Abhijit Mahalunkar, John D. Kelleher
We present an approach to design the grid searches for hyper-parameter optimization for recurrent neural architectures.
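A hyper-parameter grid search of the kind referred to above enumerates every combination of candidate values and evaluates each configuration. A minimal sketch; the parameter names and value ranges are illustrative assumptions, not the grid designed in the paper:

```python
import itertools

# Hypothetical search space for a recurrent architecture.
grid = {
    "hidden_size": [128, 256],
    "num_layers": [1, 2],
    "learning_rate": [1e-3, 1e-4],
}

def grid_configurations(grid):
    """Yield every combination in the grid as a config dict."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configurations(grid))
print(len(configs))  # 2 * 2 * 2 = 8 configurations
```

Because the number of configurations grows multiplicatively with each axis, principled grid design (rather than exhaustive enumeration) quickly becomes important.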
1 code implementation • COLING 2020 • Annika Lindh, Robert J. Ross, John D. Kelleher
A vital component of the Controllable Image Captioning architecture is the mechanism that decides the timing of attending to each region through the advancement of a region pointer.
no code implementations • 14 Feb 2020 • Magdalena Kacmajor, John D. Kelleher, Filip Klubicka, Alfredo Maldonado
This paper connects a series of papers dealing with taxonomic word embeddings.
no code implementations • RANLP 2019 • Fei Wang, Robert J. Ross, John D. Kelleher
Using these metrics we rank different background corpora relative to a target corpus.
no code implementations • WS 2019 • Anh Duong Trinh, Robert J. Ross, John D. Kelleher
In this paper we argue that treating the prediction of each slot value as an independent prediction task may ignore important associations between the slot values, and that, consequently, treating dialogue state tracking as a structured prediction problem can help to improve its performance.
no code implementations • WS 2019 • Abhijit Mahalunkar, John D. Kelleher
In order to successfully model Long Distance Dependencies (LDDs) it is necessary to understand the full range of characteristics of the LDDs exhibited in a target dataset.
no code implementations • 22 Dec 2018 • Murhaf Hossari, Soumyabrata Dev, John D. Kelleher
This tool is used to automatically detect the existence of new technologies and tools in text, and extract terms used to describe these new technologies.
1 code implementation • 19 Dec 2018 • Annika Lindh, Robert J. Ross, Abhijit Mahalunkar, Giancarlo Salton, John D. Kelleher
Image Captioning is a task that requires models to acquire a multi-modal understanding of the world and to express this understanding in natural language text.
no code implementations • 10 Oct 2018 • Giancarlo D. Salton, John D. Kelleher
Language Models (LMs) are important components in several Natural Language Processing systems.
no code implementations • 10 Oct 2018 • Giancarlo D. Salton, Robert J. Ross, John D. Kelleher
Idioms pose problems to almost all Machine Translation systems.
no code implementations • 6 Oct 2018 • Abhijit Mahalunkar, John D. Kelleher
In this paper, we present a detailed analysis of the dependency decay curve exhibited by various datasets.
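Dependency decay curves of this kind are commonly measured as the mutual information between symbols as a function of their separation. A minimal sketch of that measurement, assuming a character-level sequence; the function and the periodic toy example are illustrative, not the paper's code:

```python
from collections import Counter
from math import log2

def mutual_information_at_distance(seq, d):
    """Empirical mutual information (in bits) between symbols
    that are d positions apart in the sequence."""
    joint = Counter(zip(seq, seq[d:]))
    left = Counter(seq[:-d])
    right = Counter(seq[d:])
    n = len(seq) - d
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

# On a perfectly periodic sequence, MI stays high at matching distances;
# on random data it decays toward zero as distance grows.
periodic = "abab" * 50
print(round(mutual_information_at_distance(periodic, 2), 2))  # 1.0
```

Plotting this quantity over a range of distances yields the decay curve whose shape characterises a dataset's LDDs.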
no code implementations • 15 Aug 2018 • Abhijit Mahalunkar, John D. Kelleher
However, one of the drawbacks of existing datasets is the lack of experimental control with regards to the presence and/or degree of LDDs.
no code implementations • 21 Jul 2018 • John D. Kelleher, Simon Dobnik
This paper examines to what degree current deep learning architectures for image caption generation capture spatial language.
no code implementations • 21 Jul 2018 • Simon Dobnik, John D. Kelleher
Natural language processing (NLP) can be done using either top-down (theory-driven) or bottom-up (data-driven) approaches, which we call mechanistic and phenomenological respectively.
no code implementations • LREC 2018 • Filip Klubička, Giancarlo D. Salton, John D. Kelleher
Creating a linguistic resource often involves a machine learning model that filters the content passed on to a human annotator before it enters the final resource.