no code implementations • WNUT (ACL) 2021 • Thomas Clark, Costanza Conforti, Fangyu Liu, Zaiqiao Meng, Ehsan Shareghi, Nigel Collier
Stance detection (SD) entails classifying the stance of a text towards a given target, and is a relevant sub-task for opinion mining and social media analysis.
2 code implementations • Proceedings of the AAAI Conference on Artificial Intelligence 2021 • Jinpeng Wang, Bin Chen, Qiang Zhang, Zaiqiao Meng, Shangsong Liang, Shu-Tao Xia
Deep quantization methods have shown high efficiency on large-scale image retrieval.
no code implementations • 4 Mar 2024 • Giacomo Frisoni, Alessio Cocchieri, Alex Presepi, Gianluca Moro, Zaiqiao Meng
Medical open-domain question answering demands substantial access to specialized knowledge.
no code implementations • 22 Feb 2024 • Zijun Long, George Killick, Lipeng Zhuang, Gerardo Aragon-Camarasa, Zaiqiao Meng, Richard McCreadie
State-of-the-art pre-trained image models predominantly adopt a two-stage approach: initial unsupervised pre-training on large-scale datasets followed by task-specific fine-tuning using cross-entropy (CE) loss.
1 code implementation • 25 Oct 2023 • Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, Lidong Bing
We generalise the PE scaling approaches to model the continuous dynamics by ordinary differential equations over the length scaling factor, thereby overcoming the constraints of current PE scaling methods designed for specific lengths.
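As a purely illustrative sketch of the idea of modelling scaling as continuous dynamics, the toy code below Euler-integrates an ordinary differential equation over a length scaling factor; the dynamics function here is a made-up assumption, not the paper's learned dynamics:

```python
import numpy as np

def integrate_scaling(e0, f, t_end, steps=100):
    """Euler-integrate de/dt = f(e, t) from t = 1 to t = t_end.

    e0: initial parameters (a toy 1-D vector here);
    f:  hypothetical dynamics function -- an assumption for
        illustration only, not the method from the paper.
    """
    e, t = e0.astype(float), 1.0
    dt = (t_end - 1.0) / steps
    for _ in range(steps):
        e = e + dt * f(e, t)  # one Euler step along the scaling factor
        t += dt
    return e

# Toy dynamics: exponential decay of each component as the factor grows.
f = lambda e, t: -0.5 * e
e_scaled = integrate_scaling(np.ones(4), f, t_end=3.0)
```

Because the dynamics are continuous in the scaling factor, any target length factor can be reached by integrating further, rather than retraining for each specific length.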
1 code implementation • 24 Oct 2023 • Panfeng Cao, Ye Wang, Qiang Zhang, Zaiqiao Meng
Key information extraction (KIE) from scanned documents has gained increasing attention because of its applications in various domains.
1 code implementation • 31 Aug 2023 • Yupan Huang, Zaiqiao Meng, Fangyu Liu, Yixuan Su, Nigel Collier, Yutong Lu
Our experiments validate the effectiveness of SparklesChat in understanding and reasoning across multiple images and dialogue turns.
no code implementations • 28 Aug 2023 • Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa, Zaiqiao Meng
State-of-the-art image models predominantly follow a two-stage strategy: pre-training on large datasets and fine-tuning with cross-entropy loss.
no code implementations • 27 Aug 2023 • Yu Wang, Xin Xin, Zaiqiao Meng, Joemon Jose, Fuli Feng
We apply the proposed DeCA in both the binary-label and the multi-label scenarios.
1 code implementation • 23 May 2023 • Zihao Fu, Meiru Zhang, Zaiqiao Meng, Yannan Shen, David Buckeridge, Nigel Collier
Infectious disease outbreaks continue to pose a significant threat to human health and well-being.
1 code implementation • 22 May 2023 • Zihao Fu, Yixuan Su, Zaiqiao Meng, Nigel Collier
To alleviate the need for human effort, dictionary-based approaches have been proposed to extract named entities simply based on a given dictionary.
1 code implementation • 31 Mar 2023 • Zijun Long, Zaiqiao Meng, Gerardo Aragon Camarasa, Richard McCreadie
Vision Transformers (ViTs) have emerged as popular models in computer vision, demonstrating state-of-the-art performance across various tasks.
no code implementations • 25 Mar 2023 • Meiru Zhang, Yixuan Su, Zaiqiao Meng, Zihao Fu, Nigel Collier
In this study, we consider a more realistic setting of this task, namely the Oracle-Free Event Extraction (OFEE) task, where only the input context is given without any oracle information, including event type, event ontology and trigger word.
1 code implementation • 1 Feb 2023 • Muhammad Arslan Manzoor, Sarah Albarri, Ziting Xian, Zaiqiao Meng, Preslav Nakov, Shangsong Liang
This survey presents a comprehensive review of the literature on the evolution and enhancement of deep multimodal architectures that handle textual, visual, and audio features across diverse cross-modal and modern multimodal tasks.
no code implementations • 7 Nov 2022 • Jiahang Cao, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang
Particularly, we build a fine-grained classification to categorise the models based on three mathematical perspectives of the representation spaces: (1) Algebraic perspective, (2) Geometric perspective, and (3) Analytical perspective.
1 code implementation • 12 Oct 2022 • Zhangdie Yuan, Songbo Hu, Ivan Vulić, Anna Korhonen, Zaiqiao Meng
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, showing promising performance in many knowledge-intensive tasks.
1 code implementation • 16 Feb 2022 • Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs).
no code implementations • NeurIPS 2021 • Qiang Zhang, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang, Emine Yilmaz
Conventional meta-learning considers a set of tasks from a stationary distribution.
no code implementations • NeurIPS 2021 • Jinyuan Fang, Qiang Zhang, Zaiqiao Meng, Shangsong Liang
Gaussian Processes (GPs) define distributions over functions and their generalization capabilities depend heavily on the choice of kernels.
2 code implementations • Findings (NAACL) 2022 • Yixuan Su, Fangyu Liu, Zaiqiao Meng, Tian Lan, Lei Shu, Ehsan Shareghi, Nigel Collier
Masked language models (MLMs) such as BERT and RoBERTa have revolutionized the field of Natural Language Understanding in the past few years.
1 code implementation • ACL 2022 • Zaiqiao Meng, Fangyu Liu, Ehsan Shareghi, Yixuan Su, Charlotte Collins, Nigel Collier
To catalyse the research in this direction, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, which is constructed based on the Unified Medical Language System (UMLS) Metathesaurus.
1 code implementation • EMNLP 2021 • Zaiqiao Meng, Fangyu Liu, Thomas Hikaru Clark, Ehsan Shareghi, Nigel Collier
Infusing factual knowledge into pre-trained models is fundamental for many knowledge-intensive tasks.
1 code implementation • Findings (EMNLP) 2021 • Yixuan Su, Zaiqiao Meng, Simon Baker, Nigel Collier
Neural table-to-text generation models have achieved remarkable progress on an array of tasks.
no code implementations • 8 Jul 2021 • Zaiqiao Meng, Siwei Liu, Craig Macdonald, Iadh Ounis
For the GCN-P model, two single-relational graphs are constructed from the users' and items' side information respectively, to pre-train entity representations using Graph Convolutional Networks.
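For readers unfamiliar with GCN propagation, a minimal sketch of one symmetric-normalized convolution step (not the paper's implementation; the toy graph and feature/weight matrices are assumptions for illustration):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    adj is a binary adjacency matrix of a single-relational graph,
    e.g. one built from user or item side information.
    """
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # inverse sqrt degrees
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)   # ReLU activation

# Toy graph of 3 entities with 2-dim features, identity projection.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = gcn_layer(adj, np.eye(3, 2), np.eye(2))
```

Stacking such layers lets each entity's pre-trained representation aggregate side information from its graph neighbourhood.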
no code implementations • 20 May 2021 • Yu Wang, Xin Xin, Zaiqiao Meng, Xiangnan He, Joemon Jose, Fuli Feng
A noisy negative example that the user has not interacted with simply out of unawareness could also indicate a potential positive preference.
1 code implementation • NAACL 2021 • Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, Nigel Collier
Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge.
no code implementations • 26 Jul 2020 • Zaiqiao Meng, Richard McCreadie, Craig Macdonald, Iadh Ounis
In this paper, we show both that there is no standard splitting strategy and that the choice of splitting strategy can have a strong impact on the ranking of recommender systems.
1 code implementation • NeurIPS 2019 • Zaiqiao Meng, Shangsong Liang, Jinyuan Fang, Teng Xiao
Deep generative models (DGMs) have achieved remarkable advances.
1 code implementation • 17 Sep 2019 • Zaiqiao Meng, Richard McCreadie, Craig Macdonald, Iadh Ounis
We train our VBCAR model on the Bayesian Skip-gram framework coupled with amortized variational inference, so that it can learn more expressive latent representations that integrate both non-linearity and Bayesian behaviour.
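The two building blocks of amortized variational inference used here, the reparameterization trick and the Gaussian KL regularizer, can be sketched as follows (a generic illustration with toy posterior parameters, not the VBCAR codebase):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps: the reparameterization trick that
    makes the stochastic latent sample differentiable w.r.t. (mu, sigma)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """KL(q(z) || N(0, I)) for a diagonal Gaussian posterior; this term
    regularizes the ELBO alongside the skip-gram likelihood."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.zeros(8), np.zeros(8)   # toy posterior parameters
z = reparameterize(mu, log_var)
kl = gaussian_kl(mu, log_var)            # exactly 0 when q equals the prior
```

In an amortized setup, `mu` and `log_var` are produced by a shared encoder network, so inference cost does not grow with the number of entities.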
no code implementations • 12 Oct 2018 • Teng Xiao, Shangsong Liang, Hong Shen, Zaiqiao Meng
Specifically, we make both the generative processes of users and items and the priors over their latent factors side-information-specific, which enables our model to alleviate matrix sparsity and learn better latent representations of users and items.