2 code implementations • 5 Apr 2024 • Fred Philippy, Shohreh Haddadan, Siwen Guo
A common approach to zero-shot classification (ZSC) is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer entailment between the input document and the target labels.
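As a minimal sketch of this standard NLI-based setup (not necessarily the exact configuration used in this paper), the Hugging Face `transformers` zero-shot-classification pipeline wraps an NLI-fine-tuned model and scores each candidate label as an entailment hypothesis; the model name and example inputs below are illustrative choices:

```python
# Minimal sketch of NLI-based zero-shot classification (illustrative;
# the specific model and labels are assumptions, not taken from the paper).
from transformers import pipeline

# An NLI-fine-tuned model; facebook/bart-large-mnli is a common public choice.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

document = "The central bank raised interest rates by 50 basis points."
labels = ["economy", "sports", "technology"]

# Each label is slotted into a hypothesis template ("This example is {}.")
# and ranked by the NLI model's entailment probability.
result = classifier(document, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring label, e.g. "economy"
```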
1 code implementation • 6 Feb 2024 • Fred Philippy, Siwen Guo, Shohreh Haddadan, Cedric Lothritz, Jacques Klein, Tegawendé F. Bissyandé
Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying the PLM's own parameters.
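A minimal sketch of the SPT mechanism in PyTorch follows, assuming a frozen encoder with learnable prompt embeddings prepended to the input embeddings; the backbone, prompt length, and wiring are assumptions for illustration, not the paper's implementation:

```python
# Sketch of soft prompt tuning: freeze the PLM, train only prompt embeddings.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # hypothetical backbone choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze all PLM parameters; only the soft prompt receives gradients.
for p in model.parameters():
    p.requires_grad = False

n_prompt = 20  # number of soft prompt tokens (a tunable hyperparameter)
hidden = model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)

def forward_with_prompt(texts):
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    input_embeds = model.get_input_embeddings()(enc["input_ids"])
    batch = input_embeds.size(0)
    # Prepend the learnable prompt embeddings to every sequence.
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prompt, input_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(batch, n_prompt, dtype=enc["attention_mask"].dtype),
         enc["attention_mask"]], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask).logits

logits = forward_with_prompt(["soft prompts adapt a frozen PLM"])
```

Only `soft_prompt` (n_prompt × hidden values) would be updated during training, which is what makes the method parameter-efficient relative to full fine-tuning.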
no code implementations • 26 May 2023 • Fred Philippy, Siwen Guo, Shohreh Haddadan
To structure this review and facilitate consolidation with future studies, we identify five categories of factors that influence cross-lingual transfer.
1 code implementation • 3 May 2023 • Fred Philippy, Siwen Guo, Shohreh Haddadan
Prior research has investigated the impact of various linguistic features on cross-lingual transfer performance.
no code implementations • CoNLL 2019 • Siwen Guo, Sviatlana Höhn, Christoph Schommer
In this paper, we look beyond traditional population-level sentiment modeling and consider the individuality in a person's expressions by exploiting both textual and contextual information.