1 code implementation • 4 Apr 2024 • Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne
Vision Transformers (ViTs), with their ability to model long-range dependencies through self-attention mechanisms, have become a standard architecture in computer vision.
no code implementations • 6 Jan 2021 • Sofian Chaybouti, Achraf Saghe, Aymen Shabou
This new model, which we call EfficientQA, takes advantage of the sequence-pair input format of BERT-based models to build meaningful dense representations of candidate answers.
Ranked #124 on Question Answering on SQuAD1.1
Extractive Question-Answering • Natural Language Understanding • +1
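The idea of scoring candidate answers via dense pair representations can be illustrated with a toy sketch. This is not the EfficientQA implementation: `encode_pair` below is a hypothetical hash-based stand-in for a BERT-style sequence-pair encoder, and the linear scoring head is untrained, purely for illustration.

```python
import hashlib
import numpy as np

DIM = 16

def encode_pair(question: str, candidate: str) -> np.ndarray:
    # Toy stand-in for a BERT-style sequence-pair encoder: deterministically
    # hash "question [SEP] candidate" into a dense vector. A real system
    # would run the pair through a Transformer and take the [CLS] embedding.
    digest = hashlib.sha256(f"{question} [SEP] {candidate}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.standard_normal(DIM)

def rank_candidates(question, candidates, w):
    # Score each candidate answer with a linear head over its dense
    # pair representation and return the top-scoring candidate.
    scores = {c: float(encode_pair(question, c) @ w) for c in candidates}
    return max(scores, key=scores.get), scores

w = np.ones(DIM)  # untrained scoring head, for illustration only
best, scores = rank_candidates(
    "Who wrote Hamlet?",
    ["Shakespeare", "Dickens", "Tolstoy"],
    w,
)
print(best in scores)  # True
```

In a trained system, `w` (or a richer scoring head) would be learned so that correct answers receive higher scores than distractors.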
no code implementations • 17 Dec 2020 • Sofian Chaybouti, Achraf Saghe, Aymen Shabou
In this paper, we introduce MIX: a multi-task deep learning approach to Open-Domain Question Answering.