no code implementations • ACL 2022 • Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, Anders Søgaard
Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, calling into question the importance of word order information.
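The permutation setup this abstract refers to can be sketched as follows; the function name and the choice to shuffle within a single sentence are illustrative assumptions, not the paper's exact procedure:

```python
import random

def permute_words(sentence: str, seed: int = 0) -> str:
    """Randomly permute the word order of a sentence, as in the
    shuffled pretraining/fine-tuning setups described above.
    (Hypothetical helper for illustration only.)"""
    words = sentence.split()
    rng = random.Random(seed)  # fixed seed so the permutation is reproducible
    rng.shuffle(words)
    return " ".join(words)

shuffled = permute_words("the cat sat on the mat")
```

Note that shuffling preserves the multiset of tokens while destroying their order, which is what lets such experiments isolate the contribution of word order information.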
no code implementations • 17 Oct 2021 • Artur Kulmizev, Joakim Nivre
In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune).
no code implementations • CoNLL (EMNLP) 2021 • Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard
Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g., (Paris, Capital, France).
no code implementations • EACL 2021 • Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, Joakim Nivre
Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism.
no code implementations • ACL 2021 • Ziyang Luo, Artur Kulmizev, Xiaoxi Mao
In this work, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers.
1 code implementation • WS 2020 • Daniel Hershcovich, Miryam de Lhoneux, Artur Kulmizev, Elham Pejhan, Joakim Nivre
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020.
no code implementations • ACL 2020 • Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre
Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces.
no code implementations • IJCNLP 2019 • Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre
Transition-based and graph-based dependency parsers have previously been shown to have complementary strengths and weaknesses: transition-based parsers exploit rich structural features but suffer from error propagation, while graph-based parsers benefit from global optimization but have restricted feature scope.
no code implementations • IJCNLP 2019 • Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, Anders Søgaard
Representational Similarity Analysis (RSA) is a technique developed by neuroscientists for comparing activity patterns of different measurement modalities (e.g., fMRI, electrophysiology, behavior).
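The core of RSA can be sketched in a few lines: build a representational dissimilarity matrix (RDM) per modality, then compare the two RDMs with a rank correlation. The random data, dissimilarity metric (1 − Pearson correlation), and helper names below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def rdm(reps: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the representations of each pair of stimuli, returned as
    the condensed upper triangle."""
    c = np.corrcoef(reps)  # (stimuli x stimuli) correlation matrix
    return 1.0 - c[np.triu_indices_from(c, k=1)]

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman correlation = Pearson correlation of the ranks."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))  # e.g. model activations for 10 stimuli
Y = rng.normal(size=(10, 32))  # e.g. another modality's measurements

# Second-order similarity between the two representational spaces, in [-1, 1]
score = spearman(rdm(X), rdm(Y))
```

Because the comparison happens between RDMs rather than raw representations, the two modalities may have different dimensionalities, which is what makes RSA usable across fMRI, behavior, and model activations.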
no code implementations • EMNLP 2018 • Mostafa Abdou, Artur Kulmizev, Vinit Ravishankar, Lasha Abzianidze, Johan Bos
We investigate the effects of multi-task learning using the recently introduced task of semantic tagging.
no code implementations • SEMEVAL 2018 • Mostafa Abdou, Artur Kulmizev, Joan Ginés i Ametllé
In this paper we describe our submission to SemEval-2018 Task 1: Affect in Tweets.
no code implementations • SEMEVAL 2018 • Artur Kulmizev, Mostafa Abdou, Vinit Ravishankar, Malvina Nissim
We participated in the SemEval-2018 shared task on capturing discriminative attributes (Task 10) with a simple system that ranked 8th among the 26 teams that took part in the evaluation.
no code implementations • WS 2017 • Artur Kulmizev, Bo Blankers, Johannes Bjerva, Malvina Nissim, Gertjan van Noord, Barbara Plank, Martijn Wieling
In this paper, we explore the performance of a linear SVM trained on language-independent character features for the NLI Shared Task 2017.