Search Results for author: Damian Pascual

Found 11 papers, 6 papers with code

On Isotropy Calibration of Transformer Models

no code implementations • insights (ACL) 2022 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer

Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone.
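The anisotropy the snippet describes can be checked directly: the average pairwise cosine similarity of contextual embeddings is far above zero, whereas isotropic vectors would average near zero. A minimal sketch, assuming the Hugging Face transformers library; the model choice and sentences are illustrative:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Measure anisotropy as the mean pairwise cosine similarity of contextual
# embeddings; a narrow cone yields a clearly positive value.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentences = [
    "The cat sat on the mat.",
    "Stock prices fell sharply today.",
    "Quantum computers rely on entangled qubits.",
]
embeddings = []
for s in sentences:
    with torch.no_grad():
        out = model(**tok(s, return_tensors="pt")).last_hidden_state[0]
    embeddings.append(out)
embeddings = torch.cat(embeddings)               # (total_tokens, hidden_dim)

normed = torch.nn.functional.normalize(embeddings, dim=-1)
cos = normed @ normed.T                          # pairwise cosine similarities
off_diagonal = cos[~torch.eye(len(cos), dtype=torch.bool)]
print(f"mean pairwise cosine similarity: {off_diagonal.mean():.3f}")
```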

pNLP-Mixer: an Efficient all-MLP Architecture for Language

1 code implementation • 9 Feb 2022 • Francesco Fusco, Damian Pascual, Peter Staar, Diego Antognini

Large pre-trained language models based on the transformer architecture have drastically changed the natural language processing (NLP) landscape; pNLP-Mixer instead explores an efficient, attention-free, all-MLP alternative (a generic Mixer block is sketched below).

Intent Classification +3
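For context, here is a minimal PyTorch sketch of a generic MLP-Mixer-style block: one MLP mixes information across token positions, another across feature channels, with no attention anywhere. This is only the general Mixer idea, not pNLP-Mixer's exact architecture (which adds a hash-based projection layer in front of such blocks); all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Generic Mixer block: token-mixing MLP followed by channel-mixing MLP."""
    def __init__(self, n_tokens, dim, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                   # x: (batch, tokens, dim)
        y = self.norm1(x).transpose(1, 2)   # mix across token positions
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 64, 128)
print(MixerBlock(n_tokens=64, dim=128)(x).shape)   # torch.Size([2, 64, 128])
```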

On Isotropy Calibration of Transformers

no code implementations • 27 Sep 2021 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer

Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone.

Towards BERT-based Automatic ICD Coding: Limitations and Opportunities

no code implementations • NAACL (BioNLP) 2021 • Damian Pascual, Sandro Luck, Roger Wattenhofer

Unlike the general trend in language processing, no transformer model has been reported to reach high performance on automatic ICD coding.

Of Non-Linearity and Commutativity in BERT

1 code implementation • 12 Jan 2021 • Sumu Zhao, Damian Pascual, Gino Brunner, Roger Wattenhofer

In this work, we provide new insights into the transformer architecture and, in particular, its best-known variant, BERT (a simple layer-swap probe of commutativity is sketched below).

Inductive Bias
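One of the questions the title raises, commutativity, can be probed by swapping BERT encoder layers and measuring how much the output changes. A minimal sketch of one such probe, assuming the Hugging Face transformers library; this is a simple layer-order sensitivity test, not the paper's exact protocol, and the choice of layers 4 and 5 is arbitrary:

```python
import torch
from transformers import BertModel, BertTokenizerFast

# Compare outputs of the unmodified model against a copy with two adjacent
# encoder layers swapped; high similarity suggests the layers commute.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()
inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

with torch.no_grad():
    baseline = model(**inputs).last_hidden_state

layers = model.encoder.layer                  # ModuleList of 12 encoder layers
layers[4], layers[5] = layers[5], layers[4]   # swap two adjacent layers

with torch.no_grad():
    swapped = model(**inputs).last_hidden_state

cos = torch.nn.functional.cosine_similarity(baseline, swapped, dim=-1)
print(f"mean cosine similarity after swapping layers 4 and 5: {cos.mean():.4f}")
```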

Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation

1 code implementation • 31 Dec 2020 • Damian Pascual, Beni Egressy, Florian Bolli, Roger Wattenhofer

Given that state-of-the-art language models are too large to be trained from scratch in a manageable time, it is desirable to control these models without re-training them (a simplified sketch of this plug-and-play idea follows below).

Language Modelling • Machine Translation +2
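Below is a heavily simplified sketch of the plug-and-play idea, assuming the Hugging Face transformers library: candidate next tokens from a frozen GPT-2 are re-scored with a bonus for guide words, so generation is steered without touching the model weights. This greedy variant with made-up constants (GUIDE_WORDS, LAMBDA, TOP_K) only illustrates the principle; it is not the paper's Directed Beam Search algorithm.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Re-score candidate next tokens from a frozen LM with a bonus for guide
# words, steering generation without any re-training.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

GUIDE_WORDS = {"mountain", "river"}   # hypothetical lexical constraints
LAMBDA, TOP_K, STEPS = 5.0, 50, 20    # illustrative values

ids = tok("The hikers followed the trail", return_tensors="pt").input_ids
for _ in range(STEPS):
    with torch.no_grad():
        logprobs = torch.log_softmax(lm(ids).logits[0, -1], dim=-1)
    top = torch.topk(logprobs, TOP_K)
    best, best_score = None, float("-inf")
    for lp, tid in zip(top.values, top.indices):
        word = tok.decode([int(tid)]).strip().lower()
        # Base log-probability plus a bonus if the token is a guide word.
        score = lp.item() + (LAMBDA if word in GUIDE_WORDS else 0.0)
        if score > best_score:
            best, best_score = tid, score
    ids = torch.cat([ids, best.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```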

Brain2Word: Decoding Brain Activity for Language Generation

1 code implementation • 10 Sep 2020 • Nicolas Affolter, Beni Egressy, Damian Pascual, Roger Wattenhofer

In the case of language stimuli, recent studies have shown that it is possible to decode fMRI scans into an embedding of the word a subject is reading (a baseline decoding sketch follows below).

Brain Decoding • Text Generation +1
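A common baseline for this kind of decoding is a linear map from voxel activations to the word-embedding space, followed by nearest-neighbour retrieval. A minimal sketch with random stand-in arrays (real fMRI features and pretrained word embeddings would replace them); this is a generic baseline, not the paper's model:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins: X would hold fMRI voxel activations, Y the embedding
# of the word shown during each scan, and `vocab` the embedding matrix of
# candidate words. All arrays here are random placeholders.
rng = np.random.default_rng(0)
n_scans, n_voxels, emb_dim = 200, 1000, 300
X = rng.normal(size=(n_scans, n_voxels))
Y = rng.normal(size=(n_scans, emb_dim))
vocab = rng.normal(size=(5000, emb_dim))

# Linear (ridge) map from brain activity to the word-embedding space.
decoder = Ridge(alpha=10.0).fit(X[:150], Y[:150])
pred = decoder.predict(X[150:])

# Retrieval: match each decoded vector to the nearest vocabulary embedding.
sims = (pred @ vocab.T) / (
    np.linalg.norm(pred, axis=1, keepdims=True) * np.linalg.norm(vocab, axis=1)
)
print(sims.argmax(axis=1)[:10])   # indices of the predicted words
```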

Medley2K: A Dataset of Medley Transitions

no code implementations • 25 Aug 2020 • Lukas Faber, Sandro Luck, Damian Pascual, Andreas Roth, Gino Brunner, Roger Wattenhofer

The automatic generation of medleys, i.e., musical pieces formed by different songs concatenated via smooth transitions, is not well studied in the current literature.

Telling BERT's full story: from Local Attention to Global Aggregation

no code implementations • EACL 2021 • Damian Pascual, Gino Brunner, Roger Wattenhofer

We propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles.
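One related technique for the "global" view is attention rollout (Abnar & Zuidema, 2020), which propagates attention through the layers so that the resulting scores refer back to the input tokens rather than to the previous layer. The sketch below applies that related method to random matrices; it is not the attribution method proposed in this paper.

```python
import numpy as np

# Raw per-layer attention is local: each layer attends over the previous
# layer's representations. Attention rollout multiplies residual-corrected
# attention matrices through the layers to obtain input-level scores.
n_layers, n_tokens = 12, 8
rng = np.random.default_rng(0)
attn = rng.random((n_layers, n_tokens, n_tokens))
attn /= attn.sum(axis=-1, keepdims=True)          # row-normalize per layer

rollout = np.eye(n_tokens)
for a in attn:                                    # bottom layer first
    a_res = 0.5 * a + 0.5 * np.eye(n_tokens)      # account for residual connections
    rollout = a_res @ rollout

# rollout[i, j]: contribution of input token j to position i at the top layer.
print(rollout.round(3))
```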

Synthetic Epileptic Brain Activities Using Generative Adversarial Networks

1 code implementation • 22 Jul 2019 • Damian Pascual, Amir Aminifar, David Atienza, Philippe Ryvlin, Roger Wattenhofer

In this work, we generate synthetic seizure-like brain electrical activities, i.e., EEG signals, that can be used to train seizure detection algorithms, alleviating the need for recorded data (a minimal GAN sketch follows below).

EEG • Generative Adversarial Network +1
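A minimal PyTorch sketch of the general recipe: a generator maps noise to fixed-length signal windows while a discriminator learns to tell real from synthetic. All shapes and layer sizes, and the random stand-in for recorded EEG, are illustrative; this is not the architecture from the paper.

```python
import torch
import torch.nn as nn

SIGNAL_LEN, NOISE_DIM, BATCH = 256, 64, 32

generator = nn.Sequential(              # noise -> synthetic EEG window
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, SIGNAL_LEN), nn.Tanh(),
)
discriminator = nn.Sequential(          # EEG window -> real/fake logit
    nn.Linear(SIGNAL_LEN, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(BATCH, SIGNAL_LEN)   # stand-in for recorded EEG windows

# Discriminator step: separate real from generated windows.
fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
loss_d = bce(discriminator(real), torch.ones(BATCH, 1)) + \
         bce(discriminator(fake), torch.zeros(BATCH, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: produce windows the discriminator labels as real.
fake = generator(torch.randn(BATCH, NOISE_DIM))
loss_g = bce(discriminator(fake), torch.ones(BATCH, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```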
