no code implementations • ALTA 2020 • Aaron Keesing, Ian Watson, Michael Witbrock
We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academically licensed datasets using speaker-independent cross-validation.
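To make the evaluation protocol concrete, here is a minimal sketch of speaker-independent cross-validation with scikit-learn, grouping folds by speaker ID; the features, labels, and classifier are placeholders rather than the four models tested in the paper.

```python
# Minimal sketch of speaker-independent cross-validation (illustrative only;
# the features, labels and classifier are placeholders, not the paper's models).
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))            # placeholder acoustic feature vectors
y = rng.integers(0, 4, size=200)          # placeholder emotion labels (4 classes)
speakers = rng.integers(0, 10, size=200)  # speaker IDs used as fold groups

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=speakers):
    clf = SVC().fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    # unweighted average recall (UAR) is a common SER metric
    scores.append(recall_score(y[test_idx], pred, average="macro"))
print(f"speaker-independent UAR: {np.mean(scores):.3f}")
```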
1 code implementation • NAACL (DLG4NLP) 2022 • Zhenyun Deng, Yonghua Zhu, Qianqian Qi, Michael Witbrock, Patricia Riddle
Current graph-neural-network-based (GNN-based) approaches to multi-hop questions integrate clues from scattered paragraphs in an entity graph, achieving implicit reasoning by synchronously updating graph node representations with information from neighbours; this makes them poorly suited to explaining how clues are passed through the graph in hops.
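For intuition, a minimal sketch of the kind of synchronous neighbour update such GNN-based approaches perform (generic mean-aggregation message passing, not the specific architectures critiqued here):

```python
# Illustrative synchronous GNN update over an entity graph: every node
# aggregates its neighbours at once, so the "hops" a clue takes through the
# graph are not explicitly exposed.
import numpy as np

def gnn_step(H, A, W):
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    messages = (A @ H) / deg          # mean over neighbour representations
    return np.tanh(messages @ W)      # shared transformation

H = np.random.randn(5, 8)                        # 5 entity nodes, 8-dim states
A = (np.random.rand(5, 5) > 0.5).astype(float)   # adjacency matrix
W = np.random.randn(8, 8)
H = gnn_step(H, A, W)                            # all nodes updated simultaneously
```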
no code implementations • CMCL (ACL) 2022 • Joshua Bensemann, Alex Peng, Diana Prado, Yang Chen, Neset Tan, Paul Michael Corballis, Patricia Riddle, Michael Witbrock
Attention describes cognitive processes that are important to many human phenomena including reading.
no code implementations • 4 Feb 2024 • Gaël Gendron, Bao Trung Nguyen, Alex Yuxuan Peng, Michael Witbrock, Gillian Dobbie
We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks.
1 code implementation • 21 Dec 2023 • Gaël Gendron, Yang Chen, Mitchell Rogers, Yiping Liu, Mihailo Azhar, Shahrokh Heidari, David Arturo Soriano Valdez, Kobe Knowles, Padriac O'Leary, Simon Eyre, Michael Witbrock, Gillian Dobbie, Jiamou Liu, Patrice Delmas
Better understanding the natural world is a crucial task with a wide range of applications.
no code implementations • 21 Nov 2023 • Tim Hartill, Joshua Bensemann, Michael Witbrock, Patricia J. Riddle
We train two Language Models in a multitask fashion; the second model differs from the first only in that its training regime includes two additional datasets designed to impart simple numerical reasoning strategies of a sort known to improve performance on some of our evaluation datasets but not on others.
1 code implementation • 13 Oct 2023 • Qiming Bao, Gael Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu
Despite their high performance on the original publicly available datasets, we find that all models perform poorly on these newly constructed datasets.
1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu
When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.
no code implementations • 9 Aug 2023 • Tim Hartill, Diana Benavides-Prado, Michael Witbrock, Patricia J. Riddle
When provided with sufficient explanatory context, smaller Language Models have been shown to exhibit strong reasoning ability on challenging short-answer question-answering tasks where the questions are unseen in training.
1 code implementation • 2 Aug 2023 • Tim Hartill, Neset Tan, Michael Witbrock, Patricia J. Riddle
We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training.
1 code implementation • 20 Jun 2023 • Mitchell Rogers, Gaël Gendron, David Arturo Soriano Valdez, Mihailo Azhar, Yang Chen, Shahrokh Heidari, Caleb Perelini, Padriac O'Leary, Kobe Knowles, Izak Tait, Simon Eyre, Michael Witbrock, Patrice Delmas
Recording animal behaviour is an important step in evaluating the well-being of animals and further understanding the natural world.
1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie
We perform extensive evaluations of state-of-the-art LLMs, showing that they currently achieve very limited performance on these tasks in contrast with other natural language tasks, even when applying techniques shown to improve performance elsewhere in NLP.
1 code implementation • 21 May 2023 • Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Gael Gendron, Timothy Pistotti, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Paul Denny, Michael Witbrock, Jiamou Liu
Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.
1 code implementation • 5 May 2023 • Kobe Knowles, Joshua Bensemann, Diana Benavides-Prado, Vithya Yogarajan, Michael Witbrock, Gillian Dobbie, Yang Chen
We introduce a novel architecture, the Neuromodulation Gated Transformer (NGT), which is a simple implementation of neuromodulation in transformers via a multiplicative effect.
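As a rough illustration of a multiplicative gate on a transformer sub-layer (the placement, gating function, and layer names here are assumptions for illustration, not the published NGT architecture):

```python
# Rough sketch of a multiplicative "neuromodulation" gate applied to a
# transformer sub-layer output; names and placement are assumed, not NGT's.
import torch
import torch.nn as nn

class GatedSubLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, d_model)  # produces the modulation signal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        # multiplicative effect: a sigmoid gate scales the sub-layer output
        return attn_out * torch.sigmoid(self.gate(x))

x = torch.randn(2, 10, 64)          # (batch, sequence, d_model)
print(GatedSubLayer(64)(x).shape)   # torch.Size([2, 10, 64])
```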
no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock
Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints.
no code implementations • 16 Feb 2023 • Libo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock, Jiamou Liu
In real-world multi-agent systems, in addition to being in an equilibrium, agents' policies are often expected to meet requirements with respect to safety and fairness.
1 code implementation • 2 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie
Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders.
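For reference, a minimal sketch of the vector-quantization step used in VQ-VAEs, i.e. the discrete bottleneck the method builds on; the causal-dynamics component of the approach is not shown:

```python
# Sketch of the VQ-VAE quantization step: map each latent to its nearest
# codebook entry and pass gradients straight through.
import torch

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """z: (batch, d) latents, codebook: (K, d) learned codes."""
    dists = torch.cdist(z, codebook)     # (batch, K) pairwise distances
    codes = dists.argmin(dim=1)          # index of nearest code per latent
    z_q = codebook[codes]                # quantized latents
    # straight-through estimator: gradients flow from z_q back to z
    return z + (z_q - z).detach()

z = torch.randn(8, 16)
codebook = torch.randn(512, 16)
print(vector_quantize(z, codebook).shape)   # torch.Size([8, 16])
```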
no code implementations • 1 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie
Deep Learning models have shown success in a large variety of tasks by extracting correlation patterns from high-dimensional data but still struggle when generalizing out of their initial distribution.
no code implementations • 15 Nov 2022 • Michael Witbrock, Patrick Haffner
We present SVCnet, a system for modelling speaker variability.
no code implementations • COLING 2022 • Zhenyun Deng, Yonghua Zhu, Yang Chen, Qianqian Qi, Michael Witbrock, Patricia Riddle
In this paper, we propose the Prompt-based Conservation Learning (PCL) framework for multi-hop QA, which acquires new knowledge from multi-hop QA tasks while conserving old knowledge learned on single-hop QA tasks, mitigating forgetting.
1 code implementation • 28 Jul 2022 • Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu
In our model, reasoning is performed by an RNN-based iterative memory network with a gated attention mechanism.
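A generic sketch of iterative reasoning with a gated attention read over a memory of encoded statements, to illustrate the idea; it is not the exact published model:

```python
# Generic gated-attention reasoning step over a memory of encoded statements
# (illustrative sketch, not the published architecture).
import torch
import torch.nn as nn

class GatedAttentionStep(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.rnn = nn.GRUCell(d, d)
        self.gate = nn.Linear(2 * d, d)

    def forward(self, state: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # attend over memory slots with the current reasoning state as query
        scores = torch.softmax(memory @ state.unsqueeze(-1), dim=1)  # (B, M, 1)
        read = (scores * memory).sum(dim=1)                          # (B, d)
        # gate decides how much retrieved content enters the next state
        g = torch.sigmoid(self.gate(torch.cat([state, read], dim=-1)))
        return self.rnn(g * read, state)

d, steps = 32, 3
state = torch.zeros(4, d)            # batch of 4 questions
memory = torch.randn(4, 10, d)       # 10 memory slots (e.g. encoded rules)
step = GatedAttentionStep(d)
for _ in range(steps):               # iterative multi-step reasoning
    state = step(state, memory)
```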
no code implementations • 16 Jun 2022 • Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle
We then decompose a multi-hop question by segmenting the corresponding AMR graph according to the required reasoning type.
1 code implementation • Findings (ACL) 2022 • Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock
Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability.
no code implementations • 10 Dec 2021 • Dave Schneider, Michael Witbrock
In this paper, we discuss Semantic Construction Grammar (SCG), a system developed over the past several years to facilitate translation between natural language and logical representations.
no code implementations • 9 Dec 2021 • Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock
If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks.
no code implementations • 19 Nov 2021 • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu
We propose DeepQR, a novel neural-network model for AQQR that is trained using multiple-choice-question (MCQ) datasets collected from PeerWise, a widely-used learnersourcing platform.
no code implementations • 7 Jun 2021 • Ibrahim Abdelaziz, Maxwell Crouse, Bassem Makni, Vernon Austel, Cristina Cornelio, Shajith Ikbal, Pavan Kapanipathi, Ndivhuwo Makondo, Kavitha Srinivas, Michael Witbrock, Achille Fokoue
In addition, to the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover on a standard theorem proving benchmark (solving up to 17% more problems).
no code implementations • 29 Apr 2021 • Yang Chen, Libo Zhang, Jiamou Liu, Michael Witbrock
However, existing IRL methods for MFGs are powerless to reason about uncertainties in demonstrated behaviours of individual agents.
1 code implementation • 5 Nov 2019 • Maxwell Crouse, Ibrahim Abdelaziz, Bassem Makni, Spencer Whitehead, Cristina Cornelio, Pavan Kapanipathi, Kavitha Srinivas, Veronika Thost, Michael Witbrock, Achille Fokoue
Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search.
no code implementations • WS 2019 • Siyu Huo, Tengfei Ma, Jie Chen, Maria Chang, Lingfei Wu, Michael Witbrock
Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.
no code implementations • 12 Mar 2019 • Tian Gao, Jie Chen, Vijil Chenthamarakshan, Michael Witbrock
Though SSG is sequential in nature, it does not penalize the ordering of the appearance of the set elements and can be applied to a variety of set output problems, such as a set of classification labels or sequences.
no code implementations • 9 Jan 2019 • Maxwell Crouse, Achille Fokoue, Maria Chang, Pavan Kapanipathi, Ryan Musa, Constantine Nakos, Lingfei Wu, Kenneth Forbus, Michael Witbrock
Machine learning systems regularly deal with structured data in real-world applications.
no code implementations • NeurIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock
Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.
1 code implementation • 1 Dec 2018 • Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock
In this paper, we formulate attacks on a set function with discrete inputs as an optimization task.
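As a toy illustration of treating a discrete-input attack on a set function as an optimization problem, here is a simple greedy baseline (not the optimization method proposed in the paper):

```python
# Greedy sketch of an attack on a set function with discrete inputs:
# repeatedly drop the element whose removal hurts the model score most.
# A generic baseline for intuition only.
def greedy_attack(elements, model_score, budget):
    perturbed = set(elements)
    for _ in range(budget):
        # element whose removal yields the lowest remaining score
        worst = min(perturbed, key=lambda e: model_score(perturbed - {e}))
        perturbed.discard(worst)
    return perturbed

# toy example: the "model" scores a set by the sum of its elements
model_score = lambda s: sum(s)
print(greedy_attack({1, 2, 3, 4, 5}, model_score, budget=2))  # {1, 2, 3}
```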
no code implementations • AKBC 2019 • Ryan Musa, Xiaoyan Wang, Achille Fokoue, Nicholas Mattei, Maria Chang, Pavan Kapanipathi, Bassem Makni, Kartik Talamadupula, Michael Witbrock
Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.
no code implementations • EMNLP 2018 • Michael Boratko, Harshit Padigela, Divyendra Mikkilineni, Pritish Yuvraj, Rajarshi Das, Andrew McCallum, Maria Chang, Achille Fokoue, Pavan Kapanipathi, Nicholas Mattei, Ryan Musa, Kartik Talamadupula, Michael Witbrock
Recent work introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset, which partitions open-domain, complex science questions into an Easy Set and a Challenge Set.
no code implementations • 15 Sep 2018 • Ryan Musa, Xiaoyan Wang, Achille Fokoue, Nicholas Mattei, Maria Chang, Pavan Kapanipathi, Bassem Makni, Kartik Talamadupula, Michael Witbrock
Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.
no code implementations • 15 Sep 2018 • Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, Michael Witbrock
To address this, we present a combination of techniques that harness knowledge graphs to improve performance on the NLI problem in the science questions domain.
1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock
The proposed kernel does not suffer from the issue of diagonal dominance while naturally admitting a Random Features (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in both the number and the length of the time series.
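To illustrate the general random-features idea (quadratic pairwise kernel evaluation replaced by a linear-cost explicit feature map), the sketch below uses standard random Fourier features for an RBF kernel; this differs from the paper's RF construction for its DTW-based time-series kernel and is shown only for intuition:

```python
# Standard random Fourier features for an RBF kernel: an explicit feature map
# whose inner products approximate kernel values, avoiding all-pairs evaluation.
import numpy as np

def rff_map(X, n_features=256, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.randn(1000, 20)
Z = rff_map(X)          # (1000, 256) explicit features, linear cost in n
K_approx = Z @ Z.T      # approximates exp(-gamma * ||x - y||^2)
```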
no code implementations • WS 2018 • Michael Boratko, Harshit Padigela, Divyendra Mikkilineni, Pritish Yuvraj, Rajarshi Das, Andrew McCallum, Maria Chang, Achille Fokoue-Nkoutche, Pavan Kapanipathi, Nicholas Mattei, Ryan Musa, Kartik Talamadupula, Michael Witbrock
We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset.
1 code implementation • CVPR 2018 • Wei Han, Shiyu Chang, Ding Liu, Mo Yu, Michael Witbrock, Thomas S. Huang
Advances in image super-resolution (SR) have recently benefited significantly from rapid developments in deep neural networks.
Ranked #42 on Image Super-Resolution on BSD100 - 4x upscaling
4 code implementations • ICLR 2019 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin
Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.
Ranked #1 on SQL-to-Text on WikiSQL
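As an illustration of direction-aware neighbour aggregation, a simplified sketch in which forward and backward neighbours are pooled separately so that edge direction is preserved in the node embeddings (not the exact published aggregator):

```python
# Direction-aware neighbour aggregation: successors and predecessors are
# pooled separately, keeping edge-direction information in node embeddings.
import numpy as np

def directional_update(H, A, W_fwd, W_bwd):
    """H: (N, d) node embeddings; A: (N, N) directed adjacency, A[i, j] = 1
    means an edge i -> j. Returns updated node embeddings."""
    out_deg = A.sum(axis=1, keepdims=True).clip(min=1)
    in_deg = A.T.sum(axis=1, keepdims=True).clip(min=1)
    fwd = (A @ H) / out_deg      # aggregate successors
    bwd = (A.T @ H) / in_deg     # aggregate predecessors
    return np.tanh(fwd @ W_fwd + bwd @ W_bwd)

N, d = 6, 8
H = np.random.randn(N, d)
A = (np.random.rand(N, N) > 0.6).astype(float)
H_new = directional_update(H, A, np.random.randn(d, d), np.random.randn(d, d))
```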
no code implementations • 14 Feb 2018 • Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock
For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.
no code implementations • 4 Jan 2018 • Michael Witbrock, Marco Zagha
We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops.
2 code implementations • NeurIPS 2017 • Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, Thomas S. Huang
To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures.
Ranked #24 on Sequential Image Classification on Sequential MNIST
no code implementations • 14 Mar 2016 • Abhishek Sharma, Michael Witbrock, Keith Goolsbey
Results show that these methods lead to an order of magnitude reduction in inference time.