no code implementations • SMM4H (COLING) 2022 • Sumam Francis, Marie-Francine Moens
This paper describes models developed for the Social Media Mining for Health (SMM4H) 2022 shared tasks.
1 code implementation • WMT (EMNLP) 2021 • Mateusz Krubiński, Erfan Ghadery, Marie-Francine Moens, Pavel Pecina
In this paper, we show that automatically-generated questions and answers can be used to evaluate the quality of Machine Translation (MT) systems.
no code implementations • WMT (EMNLP) 2021 • Mateusz Krubiński, Erfan Ghadery, Marie-Francine Moens, Pavel Pecina
In this paper, we describe our submission to the WMT 2021 Metrics Shared Task.
no code implementations • ICML 2020 • Aristotelis Chrysakis, Marie-Francine Moens
Motivated by this remark, we aim to evaluate memory population methods that are used in online continual learning, when dealing with highly imbalanced and temporally correlated streams of data.
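The abstract does not spell out which memory population methods are evaluated; as a point of reference, reservoir sampling is the standard baseline for populating a replay memory from a stream. A minimal sketch (an illustration, not the paper's code):

```python
import random

def reservoir_update(memory, capacity, item, n_seen):
    """One reservoir-sampling step: after processing n_seen stream items,
    `memory` holds a uniform random sample of at most `capacity` of them."""
    if len(memory) < capacity:
        memory.append(item)
    else:
        j = random.randrange(n_seen)  # uniform index over all items seen so far
        if j < capacity:
            memory[j] = item  # replace with probability capacity / n_seen

random.seed(0)
memory = []
for i, x in enumerate(range(1000), start=1):
    reservoir_update(memory, capacity=50, item=x, n_seen=i)
```

Note that plain reservoir sampling mirrors the stream distribution, which is exactly why highly imbalanced, temporally correlated streams are a stress test for it.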
1 code implementation • 2 May 2024 • Wei Sun, Mingxiao Li, Jingyuan Sun, Jesse Davis, Marie-Francine Moens
Argument structure learning (ASL) entails predicting relations between arguments.
1 code implementation • 27 Apr 2024 • Maria Mihaela Trusca, Tinne Tuytelaars, Marie-Francine Moens
Text-based semantic image editing involves the manipulation of an image using a natural language instruction.
no code implementations • 21 Apr 2024 • Maria Mihaela Trusca, Wolf Nuyts, Jonathan Thomm, Robert Honig, Thomas Hofmann, Tinne Tuytelaars, Marie-Francine Moens
Current diffusion models create photorealistic images given a text prompt as input but struggle to correctly bind attributes mentioned in the text to the right objects in the image.
no code implementations • 31 Mar 2024 • Nathan Cornille, Marie-Francine Moens, Florian Mai
By training to predict the next token in an unlabeled corpus, large language models learn to perform many tasks without any labeled data.
1 code implementation • 25 Mar 2024 • Philipp Borchert, Jochen De Weerdt, Marie-Francine Moens
In this paper, we introduce a novel approach to enhance information extraction combining multiple sentence representations and contrastive learning.
no code implementations • 20 Mar 2024 • Shaonan Wang, Jingyuan Sun, Yunhao Zhang, Nan Lin, Marie-Francine Moens, Chengqing Zong
Despite differing from the human language processing mechanism in implementation and algorithms, current language models demonstrate remarkable language capabilities that rival or even surpass those of humans.
no code implementations • 15 Mar 2024 • Mingxiao Li, Bo Wan, Marie-Francine Moens, Tinne Tuytelaars
For the first time, we integrate both semantic and motion cues within a diffusion model for video generation, as demonstrated in Fig. 1.
no code implementations • 14 Mar 2024 • Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
Also, when fine-tuning a pre-trained multimodal model such as CLIP-BART, we observe smaller but consistent improvements across a range of VL PEFT tasks.
no code implementations • 2 Feb 2024 • Jingyuan Sun, Mingxiao Li, Zijiao Chen, Marie-Francine Moens
In the pursuit of understanding the intricacies of the human brain's visual processing, reconstructing dynamic visual experiences from brain activity emerges as a challenging yet fascinating endeavor.
no code implementations • 25 Jan 2024 • Wolf Nuyts, Ruben Cartuyvels, Marie-Francine Moens
To test compositional understanding, we collect a test set of grammatically correct sentences and layouts describing compositions of entities and relations that are unlikely to have been seen during training.
no code implementations • 27 Nov 2023 • Liesbeth Allein, Maria Mihaela Truşcǎ, Marie-Francine Moens
The social and implicit nature of human communication shapes how readers understand written sentences.
no code implementations • 6 Nov 2023 • Sumam Francis, Marie-Francine Moens
Here we also inject POS tags into the model to enrich its syntactic context.
no code implementations • 6 Nov 2023 • Sumam Francis, Marie-Francine Moens
This paper presents models created for the Social Media Mining for Health 2023 shared task.
1 code implementation • 18 Oct 2023 • Philipp Borchert, Jochen De Weerdt, Kristof Coussement, Arno De Caigny, Marie-Francine Moens
To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting.
no code implementations • 5 Oct 2023 • Jingyuan Sun, Xiaohan Zhang, Marie-Francine Moens
To understand the algorithm that supports the human brain's language representation, previous research has attempted to predict neural responses to linguistic stimuli using embeddings generated by artificial neural networks (ANNs), a process known as neural encoding.
no code implementations • 3 Oct 2023 • Jingyuan Sun, Marie-Francine Moens
If so, what kind of NLU task leads a pre-trained model to better decode the information represented in the human brain?
no code implementations • 2 Oct 2023 • Wei Sun, Mingxiao Li, Damien Sileo, Jesse Davis, Marie-Francine Moens
Medical Question Answering (medical QA) systems play an essential role in assisting healthcare workers in finding answers to their questions.
no code implementations • 30 Sep 2023 • Jingyuan Sun, Mingxiao Li, Marie-Francine Moens
Reconstructing visual stimuli from human brain activities provides a promising opportunity to advance our understanding of the brain's visual system and its connection with computer vision models.
1 code implementation • 20 Sep 2023 • Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufiño, Marie-Francine Moens
In recent years, significant advancements in pre-trained language models have driven the creation of numerous non-English language variants, with a particular emphasis on encoder-only and decoder-only architectures.
no code implementations • 1 Sep 2023 • RuiQi Li, Liesbeth Allein, Damien Sileo, Marie-Francine Moens
The capabilities and use cases of automatic natural language processing (NLP) have grown significantly over the last few years.
no code implementations • 28 Aug 2023 • Andrei C. Coman, Christos Theodoropoulos, Marie-Francine Moens, James Henderson
Document-level relation extraction aims to identify relationships between entities within a document.
1 code implementation • 24 Aug 2023 • Jordy Van Landeghem, Sanket Biswas, Matthew B. Blaschko, Marie-Francine Moens
This paper highlights the need to bring document classification benchmarking closer to real-world applications, both in the nature of data tested ($X$: multi-channel, multi-paged, multi-industry; $Y$: class distributions and label set variety) and in classification tasks considered ($f$: multi-page document, page stream, and document bundle classification, ...).
1 code implementation • 16 Aug 2023 • Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
News Image Captioning aims to create captions from news articles and images, emphasizing the connection between textual context and visual elements.
1 code implementation • ICCV 2023 • Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars
The goal of this work is to retain the performance of such a multimodal approach, while using only the RGB frames as input at inference time.
1 code implementation • NeurIPS 2023 • Jingyuan Sun, Mingxiao Li, Zijiao Chen, Yunhao Zhang, Shaonan Wang, Marie-Francine Moens
The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder.
Ranked #1 on Brain Visual Reconstruction from fMRI on GOD
1 code implementation • 24 May 2023 • Mingxiao Li, Tingyu Qu, Ruicong Yao, Wei Sun, Marie-Francine Moens
In this work, we conduct a systematic study of exposure bias in DPM and, intriguingly, we find that the exposure bias could be alleviated with a novel sampling method that we propose, without retraining the model.
no code implementations • 12 May 2023 • Vladimir Araujo, Alvaro Soto, Marie-Francine Moens
Existing question answering methods often assume that the input content (e.g., documents or videos) is always accessible to solve the task.
1 code implementation • 27 Mar 2023 • Christos Theodoropoulos, Marie-Francine Moens
Current research on the advantages and trade-offs of using characters, instead of tokenized text, as input for deep learning models has evolved substantially.
no code implementations • 13 Mar 2023 • Damien Sileo, Kanimozhi Uma, Marie-Francine Moens
Medical multiple-choice question answering (MCQA) is particularly difficult.
1 code implementation • 24 Feb 2023 • Liesbeth Allein, Marlon Saelens, Ruben Cartuyvels, Marie-Francine Moens
Our findings show that the presence of temporal information and the manner in which timelines are constructed greatly influence how fact-checking models determine the relevance and supporting or refuting character of evidence documents.
no code implementations • 12 Dec 2022 • Arthur Van Meerbeeck, Jordy Van Landeghem, Ruben Cartuyvels, Marie-Francine Moens
The integration of the optimizations with the object detection model leads to a trade-off between speed and performance.
1 code implementation • 30 Nov 2022 • Mingxiao Li, Zehao Wang, Tinne Tuytelaars, Marie-Francine Moens
In this work, we study the problem of Embodied Referring Expression Grounding, where an agent needs to navigate in a previously unseen environment and localize a remote object described by a concise high-level natural language instruction.
no code implementations • 24 Nov 2022 • Quentin Meeus, Marie-Francine Moens, Hugo Van hamme
We explore the benefits that multitask learning offers to speech processing by training models on dual objectives: automatic speech recognition combined with intent classification or sentiment classification.
Automatic Speech Recognition (ASR) +7
no code implementations • 24 Nov 2022 • Quentin Meeus, Marie-Francine Moens, Hugo Van hamme
Class attention can be used to visually explain the predictions of our model, which goes a long way in understanding how the model makes predictions.
Automatic Speech Recognition (ASR) +4
no code implementations • 7 Nov 2022 • Damien Sileo, Marie-Francine Moens
and assess whether language models can predict whether the WEP consensual probability level is close to p. Secondly, we construct a dataset of WEP-based probabilistic reasoning, to test whether language models can reason with WEP compositions.
Ranked #1 on Natural Language Inference on Probability words NLI
1 code implementation • 17 Oct 2022 • Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
We revisit the weakly supervised cross-modal face-name alignment task; that is, given an image and a caption, we label the faces in the image with the names occurring in the caption.
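The paper's alignment model is not detailed in this snippet; purely to illustrate the task setup, a greedy one-to-one assignment over a hypothetical face-name similarity matrix could look like this (an assumed formulation, not the authors' method):

```python
def align_faces_to_names(similarity):
    """Greedy one-to-one alignment: repeatedly match the (face, name)
    pair with the highest remaining similarity score."""
    pairs = []
    used_faces, used_names = set(), set()
    # Flatten the matrix into (score, face, name) triples, best first.
    candidates = sorted(
        ((s, f, n) for f, row in enumerate(similarity) for n, s in enumerate(row)),
        reverse=True,
    )
    for s, f, n in candidates:
        if f not in used_faces and n not in used_names:
            pairs.append((f, n))
            used_faces.add(f)
            used_names.add(n)
    return sorted(pairs)

# Hypothetical similarities: face 0 resembles name 0, face 1 resembles name 1.
sim = [[0.9, 0.1], [0.2, 0.8]]
assignment = align_faces_to_names(sim)
```

In the weakly supervised setting, the interesting part is learning the similarity scores themselves without face-level labels; the assignment step above is the easy half.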
no code implementations • 9 Oct 2022 • Gorjan Radevski, Dusan Grujicic, Matthew Blaschko, Marie-Francine Moens, Tinne Tuytelaars
Our approach is based on multimodal knowledge distillation, featuring a multimodal teacher (in the current experiments trained only using object detections, optical flow and RGB frames) and a unimodal student (using only RGB frames as input).
no code implementations • 3 Oct 2022 • Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
Lifelong language learning seeks to have models continuously learn multiple tasks in a sequential order without suffering from catastrophic forgetting.
no code implementations • Findings (NAACL) 2022 • Chang Tian, Wenpeng Yin, Marie-Francine Moens
This problem is detrimental to RL-based dialogue policy learning.
1 code implementation • 18 Apr 2022 • Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
The ability to continuously learn remains elusive for deep learning models.
1 code implementation • LREC 2022 • Vladimir Araujo, Andrés Carvallo, Souvik Kundu, José Cañete, Marcelo Mendoza, Robert E. Mercer, Felipe Bravo-Marquez, Marie-Francine Moens, Alvaro Soto
Due to the success of pre-trained language models, versions of languages other than English have been released in recent years.
1 code implementation • ACL 2022 • Victor Milewski, Miryam de Lhoneux, Marie-Francine Moens
In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models.
no code implementations • 7 Mar 2022 • Zehao Wang, Mingxiao Li, Minye Wu, Marie-Francine Moens, Tinne Tuytelaars
In this paper, we introduce the map-language navigation task where an agent executes natural language instructions and moves to the target position based only on a given 3D semantic map.
1 code implementation • 6 Mar 2022 • Mingxiao Li, Marie-Francine Moens
Knowledge-based visual question answering (VQA) is a vision-language task that requires an agent to correctly answer image-related questions using knowledge that is not present in the given image.
no code implementations • EACL 2021 • Mingxiao Li, Marie-Francine Moens
Visual dialog is a vision-language task where an agent needs to answer a series of questions grounded in an image based on the understanding of the dialog history and the image.
no code implementations • 4 Jan 2022 • Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
Motivated by these insights, in this paper we argue that combining discrete and continuous representations and their processing will be essential to build systems that exhibit a general form of intelligence.
1 code implementation • 29 Dec 2021 • Farjad Malik, Simon Wouters, Ruben Cartuyvels, Erfan Ghadery, Marie-Francine Moens
As a result, they obtain good performance for a few majority classes but poor performance for many minority classes.
1 code implementation • LREC 2022 • Damien Sileo, Marie-Francine Moens
Task embeddings are low-dimensional representations that are trained to capture task properties.
1 code implementation • 10 Dec 2021 • Dusan Grujicic, Thierry Deruyttere, Marie-Francine Moens, Matthew Blaschko
However, surveys have shown that giving more control to an AI in self-driving cars is accompanied by a degree of uneasiness among passengers.
1 code implementation • 2 Nov 2021 • Gorjan Radevski, Marie-Francine Moens, Tinne Tuytelaars
Recognizing human actions is fundamentally a spatio-temporal reasoning problem, and should be, at least to some extent, invariant to the appearance of the human and the objects involved.
Ranked #36 on Action Classification on Charades
no code implementations • EMNLP 2021 • Vladimir Araujo, Andrés Villa, Marcelo Mendoza, Marie-Francine Moens, Alvaro Soto
Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level.
1 code implementation • CoNLL (EMNLP) 2021 • Christos Theodoropoulos, James Henderson, Andrei C. Coman, Marie-Francine Moens
Though language model text embeddings have revolutionized NLP research, their ability to capture high-level semantic information, such as relations between entities in text, is limited.
Ranked #10 on Relation Extraction on Adverse Drug Events (ADE) Corpus
no code implementations • 31 Aug 2021 • Liesbeth Allein, Marie-Francine Moens, Domenico Perrotta
The latent representations of news articles and user-generated content allow the model, during training, to be guided by the profiles of users who prefer content similar to the news article being evaluated; this effect is reinforced when that content is shared among different users.
no code implementations • 13 Jun 2021 • Jaron Maene, Mingxiao Li, Marie-Francine Moens
The lottery ticket hypothesis states that sparse subnetworks exist in randomly initialized dense networks that can be trained to the same accuracy as the dense network they reside in.
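The hypothesis is commonly tested with iterative magnitude pruning: train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and repeat. The pruning step alone, as a hedged pure-Python sketch (illustrative, not the paper's code):

```python
def magnitude_prune_mask(weights, sparsity):
    """Global magnitude pruning: zero out the `sparsity` fraction of
    weights with the smallest absolute value; keep the rest.
    `weights` is a list of rows (a toy stand-in for a weight matrix)."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(sparsity * len(flat))
    # Weights at or below the k-th smallest magnitude are pruned.
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[1 if abs(w) > threshold else 0 for w in row] for row in weights]

w = [[0.05, -0.9], [0.4, -0.01]]
mask = magnitude_prune_mask(w, sparsity=0.5)
```

The binary mask, applied elementwise to the rewound initial weights, defines the candidate "winning ticket" subnetwork.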
1 code implementation • 8 Jun 2021 • Thierry Deruyttere, Victor Milewski, Marie-Francine Moens
This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing it.
no code implementations • SEMEVAL 2021 • Erfan Ghadery, Damien Sileo, Marie-Francine Moens
We describe our approach for SemEval-2021 task 6 on detection of persuasion techniques in multimodal content (memes).
1 code implementation • COLING 2020 • Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
This paper proposes an iterative inference algorithm for multi-hop explanation regeneration, that retrieves relevant factual evidence in the form of text snippets, given a natural language question and its answer.
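The core idea of iterative inference here is that each retrieved snippet can expand the query so later hops reach facts sharing no words with the original question. A toy sketch with naive token-overlap scoring (the scoring function and corpus are assumptions for illustration, not the paper's model):

```python
def overlap_score(query_tokens, doc):
    """Naive relevance: count of shared tokens between query and document."""
    return len(query_tokens & set(doc.lower().split()))

def iterative_retrieve(question, corpus, hops=2, per_hop=1):
    """After each hop, fold the retrieved snippet's tokens back into the
    query so the next hop can bridge to indirectly related evidence."""
    query = set(question.lower().split())
    selected = []
    for _ in range(hops):
        remaining = [d for d in corpus if d not in selected]
        remaining.sort(key=lambda d: overlap_score(query, d), reverse=True)
        for doc in remaining[:per_hop]:
            selected.append(doc)
            query |= set(doc.lower().split())
    return selected

corpus = [
    "a magnet attracts iron",
    "iron is a metal",
    "plants need sunlight",
]
evidence = iterative_retrieve("what does a magnet attract", corpus, hops=2)
```

Note how the second fact, "iron is a metal", shares no content word with the question and is only reachable because the first hop added "iron" to the query.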
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Gorjan Radevski, Guillem Collell, Marie-Francine Moens, Tinne Tuytelaars
We address the problem of multimodal spatial understanding by decoding a set of language-expressed spatial relations to a set of 2D spatial arrangements in a multi-object and multi-relationship setting.
no code implementations • EMNLP 2020 • Parisa Kordjamshidi, James Pustejovsky, Marie-Francine Moens
Understating spatial semantics expressed in natural language can become highly complex in real-world applications.
1 code implementation • 28 Sep 2020 • Artuur Leeuwenberg, Marie-Francine Moens
Temporal information extraction is a challenging but important area of automatic natural language understanding.
Natural Language Understanding • Temporal Information Extraction
1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Victor Milewski, Marie-Francine Moens, Iacer Calixto
Overall, we find no significant difference between models that use scene graph features and models that only use object detection features across different captioning metrics, which suggests that existing scene graph generation models are still too noisy to be useful in image captioning.
no code implementations • 18 Sep 2020 • Thierry Deruyttere, Simon Vandenhende, Dusan Grujicic, Yu Liu, Luc van Gool, Matthew Blaschko, Tinne Tuytelaars, Marie-Francine Moens
In this work, we deviate from recent, popular task settings and consider the problem under an autonomous vehicle scenario.
Ranked #3 on Referring Expression Comprehension on Talk2Car
no code implementations • 10 Sep 2020 • Liesbeth Allein, Isabelle Augenstein, Marie-Francine Moens
Truth can vary over time.
no code implementations • 20 Aug 2020 • Liesbeth Allein, Marie-Francine Moens
Public, professional and academic interest in automated fact-checking has drastically increased over the past decade, with many aiming to automate one of the first steps in a fact-check procedure: the selection of so-called checkworthy claims.
no code implementations • 7 Jul 2020 • Graham Spinks, Marie-Francine Moens
The paper proposes a novel technique for representing templates and instances of concept classes.
1 code implementation • NeurIPS 2020 • Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi
A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.
no code implementations • 13 May 2020 • Artuur Leeuwenberg, Marie-Francine Moens
Time is deeply woven into how people perceive, and communicate about, the world.
Natural Language Understanding • Temporal Information Extraction
no code implementations • SEMEVAL 2020 • Erfan Ghadery, Marie-Francine Moens
We adapt and fine-tune the BERT and Multilingual Bert models made available by Google AI for English and non-English languages respectively.
no code implementations • 19 Mar 2020 • Thierry Deruyttere, Guillem Collell, Marie-Francine Moens
We propose a new spatial memory module and a spatial reasoner for the Visual Grounding (VG) task.
Ranked #10 on Referring Expression Comprehension on Talk2Car
no code implementations • 9 Jan 2020 • Liesbeth Allein, Artuur Leeuwenberg, Marie-Francine Moens
Drawing on previous research conducted on neural context-dependent dt-mistake correction models (Heyman et al. 2018), this study constructs the first neural network model for Dutch demonstrative and relative pronoun resolution that specifically focuses on the correction and part-of-speech prediction of these two pronouns.
no code implementations • 5 Jan 2020 • Golnoosh Farnadi, Lise Getoor, Marie-Francine Moens, Martine De Cock
In this paper, we propose a mechanism to infer a variety of user characteristics, such as age, gender and personality traits, which can then be compiled into a user profile.
no code implementations • 25 Sep 2019 • Graham Spinks, Marie-Francine Moens
We introduce a new deep learning technique that builds individual and class representations based on distance estimates to randomly generated contextual dimensions for different modalities.
1 code implementation • IJCNLP 2019 • Thierry Deruyttere, Simon Vandenhende, Dusan Grujicic, Luc van Gool, Marie-Francine Moens
More specifically, we consider the problem in an autonomous driving setting, where a passenger requests an action that can be associated with an object found in a street scene.
no code implementations • 28 Aug 2019 • Katrien Laenen, Marie-Francine Moens
This paper describes an attention-based fusion method for outfit recommendation which fuses the information in the product image and description to capture the most important, fine-grained product features into the item representation.
no code implementations • 12 Jul 2019 • Graham Spinks, Marie-Francine Moens
This textual representation is decoded into a diagnosis and the associated textual justification that will help a clinician evaluate the outcome.
no code implementations • NAACL 2019 • Geert Heyman, Bregt Verreet, Ivan Vulić, Marie-Francine Moens
We learn a shared multilingual embedding space for a variable number of languages by incrementally adding new languages one by one to the current multilingual space.
Bilingual Lexicon Induction • Cross-Lingual Word Embeddings +4
no code implementations • WS 2018 • Graham Spinks, Marie-Francine Moens
The method is illustrated on a medical dataset where the correct representation of spatial information and shorthands are of particular importance.
1 code implementation • EMNLP 2018 • Artuur Leeuwenberg, Marie-Francine Moens
The current leading paradigm for temporal information extraction from text consists of three phases: (1) recognition of events and temporal expressions, (2) recognition of temporal relations among them, and (3) time-line construction from the temporal relations.
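Of the three phases named above, the final one, assembling a time-line from pairwise temporal relations, can be sketched as a topological sort over BEFORE edges. This is an illustrative toy of the paradigm being discussed, not the authors' model (which departs from this pipeline):

```python
from collections import defaultdict, deque

def build_timeline(relations):
    """Phase (3): given BEFORE relations between recognized events and
    temporal expressions, order them on a time-line via topological sort.
    Phases (1) and (2) are assumed to have produced `relations` as
    (earlier, later) pairs."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for a, b in relations:
        succ[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    timeline = []
    while queue:
        n = queue.popleft()
        timeline.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return timeline

rels = [("signing", "announcement"), ("negotiation", "signing")]
order = build_timeline(rels)
```

A weakness of the staged pipeline is visible even in this toy: if phase (2) outputs contradictory relations (a cycle), no consistent time-line exists, which motivates predicting the time-line more directly.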
1 code implementation • COLING 2018 • Artuur Leeuwenberg, Marie-Francine Moens
In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model.
no code implementations • COLING 2018 • Quynh Ngoc Thi Do, Artuur Leeuwenberg, Geert Heyman, Marie-Francine Moens
This paper presents a flexible and open source framework for deep semantic role labeling.
no code implementations • ACL 2018 • Guillem Collell, Marie-Francine Moens
Feed-forward networks are widely used in cross-modal applications to bridge modalities by mapping distributed vectors of one modality to the other, or to a shared space.
no code implementations • NAACL 2018 • Graham Spinks, Marie-Francine Moens
During training the input to the system is a dataset of captions for medical X-Rays.
1 code implementation • TACL 2018 • Guillem Collell, Marie-Francine Moens
Here, we move one step forward in this direction and learn such representations by leveraging a task consisting in predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images.
1 code implementation • 18 Nov 2017 • Guillem Collell, Luc van Gool, Marie-Francine Moens
In contrast with prior work that restricts spatial templates to explicit spatial prepositions (e.g., "glass on table"), here we extend this concept to implicit spatial language, i.e., those relationships (generally actions) for which the spatial arrangement of the objects is only implicitly implied (e.g., "man riding horse").
no code implementations • 13 Oct 2017 • Fang Zhang, Xiaochen Wang, Jingfei Han, Jie Tang, Shiyin Wang, Marie-Francine Moens
We leverage a large-scale knowledge base (Wikipedia) to generate topic embeddings using neural networks and use this kind of representations to help capture the representativeness of topics for given areas.
1 code implementation • SEMEVAL 2017 • Artuur Leeuwenberg, Marie-Francine Moens
In this paper, we describe the system of the KULeuven-LIIR submission for Clinical TempEval 2017.
1 code implementation • 1 May 2017 • Ted Zhang, Dengxin Dai, Tinne Tuytelaars, Marie-Francine Moens, Luc van Gool
This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question.
Automatic Speech Recognition (ASR) +3
no code implementations • IJCNLP 2017 • Quynh Ngoc Thi Do, Steven Bethard, Marie-Francine Moens
Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse.
1 code implementation • EACL 2017 • Artuur Leeuwenberg, Marie-Francine Moens
We propose a scalable structured learning model that jointly predicts temporal relations between events and temporal expressions (TLINKS), and the relation between these events and the document creation time (DCTR).
no code implementations • EACL 2017 • Geert Heyman, Ivan Vulić, Marie-Francine Moens
We study the problem of bilingual lexicon induction (BLI) in a setting where some translation resources are available, but unknown translations are sought for certain, possibly domain-specific terminology.
no code implementations • WS 2017 • Aparna Nurani Venkitasubramanian, Tinne Tuytelaars, Marie-Francine Moens
We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles.
no code implementations • 25 Mar 2017 • Guillem Collell, Teddy Zhang, Marie-Francine Moens
Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision.
no code implementations • COLING 2016 • Quynh Ngoc Thi Do, Steven Bethard, Marie-Francine Moens
We present a successful collaboration of word embeddings and co-training to tackle the most difficult test case of semantic role labeling: predicting out-of-domain and unseen semantic frames.
no code implementations • WS 2016 • Shurong Sheng, Luc van Gool, Marie-Francine Moens
In this paper, we introduce the construction of a golden standard dataset that will aid research of multimodal question answering in the cultural heritage domain.
no code implementations • COLING 2016 • Guillem Collell, Marie-Francine Moens
Human concept representations are often grounded with visual information, yet some aspects of meaning cannot be visually represented or are better described with language.
no code implementations • LREC 2016 • Niraj Shrestha, Marie-Francine Moens
Unlike for written data, we still lack suitable corpora of transcribed speech annotated with semantic roles that can be used for semantic role labeling (SRL).
no code implementations • 28 Mar 2016 • Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens
This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description.
no code implementations • 24 Sep 2015 • Ivan Vulić, Marie-Francine Moens
We propose a new model for learning bilingual word representations from non-parallel document-aligned data.
no code implementations • LREC 2014 • Goran Glavaš, Jan Šnajder, Marie-Francine Moens, Parisa Kordjamshidi
In this work, we present HiEve, a corpus for recognizing relations of spatiotemporal containment between events.
no code implementations • LREC 2012 • Steven Bethard, Oleksandr Kolomiyets, Marie-Francine Moens
We present an approach to annotating timelines in stories where events are linked together by temporal relations into a temporal dependency tree.