no code implementations • Findings (EMNLP) 2021 • Chul Sung, Vaibhava Goel, Etienne Marcheret, Steven Rennie, David Nahamoo
More importantly, our fine-tuned CoNLL2003 model displays significant gains in generalization to out-of-domain datasets: on the OntoNotes subset we achieve an F1 of 72.67, which is 0.49 points absolute better than the baseline, and on the WNUT16 set an F1 of 68.22, a gain of 0.48 points.
no code implementations • IJCNLP 2019 • Chul Sung, Tejas Dhamecha, Swarnadeep Saha, Tengfei Ma, Vinay Reddy, Rishi Arora
Pre-trained BERT contextualized representations have achieved state-of-the-art results on multiple downstream NLP tasks when fine-tuned on task-specific data.
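As a side illustration (not code from either paper), the sketch below shows what task-specific fine-tuning of BERT for token classification (NER) typically looks like with the Hugging Face transformers API; the model name, label set, toy example, and learning rate are illustrative assumptions.

```python
# Minimal sketch of fine-tuning BERT for NER-style token classification.
# Model name, labels, example sentence, and hyperparameters are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

# One toy training example: pre-split words and their word-level label ids.
words = ["John", "lives", "in", "Berlin"]
word_labels = [labels.index("B-PER"), labels.index("O"),
               labels.index("O"), labels.index("B-LOC")]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level labels to subword tokens; special tokens get -100,
# which the cross-entropy loss ignores.
label_ids = [
    -100 if word_id is None else word_labels[word_id]
    for word_id in enc.word_ids(batch_index=0)
]
enc["labels"] = torch.tensor([label_ids])

# One optimization step on the toy example.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
outputs = model(**enc)   # returns per-token logits and the classification loss
outputs.loss.backward()
optimizer.step()
```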