no code implementations • 26 Mar 2024 • Linyang He, Peili Chen, Ercong Nie, Yuanning Li, Jonathan R. Brennan
Among their findings: for Transformer-based models, both embeddings and attentions capture grammatical features, but they show distinct patterns.
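As a rough illustration of this kind of probing (an assumed setup, not the paper's exact protocol), one can extract a Transformer's hidden states and attention maps and fit a simple linear probe on the embeddings for a grammatical feature; the model choice and toy number-agreement labels below are hypothetical:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def encode(sentence):
    """Return a mean-pooled sentence embedding and the per-layer attention maps."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state.mean(dim=1).squeeze(0), out.attentions

# Toy number-agreement probe on the embeddings (label 1 = plural subject).
sentences = ["The dogs run.", "The cats sleep.", "The dog runs.", "The cat sleeps."]
features = torch.stack([encode(s)[0] for s in sentences]).numpy()
probe = LogisticRegression(max_iter=1000).fit(features, [1, 1, 0, 0])
```

Attention maps can be probed analogously, e.g., by flattening selected heads into feature vectors, which is one way the two representation types can end up showing distinct patterns.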
no code implementations • 28 Feb 2024 • Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze
Despite the predominance of English in their training data, English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks, raising questions about the depth and nature of their cross-lingual capabilities.
no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze
Large Language Models (LLMs) exhibit strong In-Context Learning (ICL) capabilities when prompted with demonstrations.
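A minimal sketch of how such demonstration prompts are typically assembled (the template and examples below are hypothetical, not taken from the paper):

```python
# Build a few-shot in-context learning prompt from labeled demonstrations.
def build_icl_prompt(demonstrations, query):
    """Concatenate (text, label) demonstrations, then append the unlabeled query."""
    parts = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out after twenty minutes.", "negative"),
]
print(build_icl_prompt(demos, "A thoroughly enjoyable film."))
```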
no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Färber
Large Language Models (LLMs) possess outstanding capabilities in addressing various natural language processing (NLP) tasks.
1 code implementation • 29 Jan 2024 • Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze
However, most previous studies focused primarily on sentence-level classification tasks; only a few considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging.
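One way to cast a token-level task in the same prompting mold is to issue one cloze query per token and score a small set of label words with a masked language model. The template and verbalizers below are illustrative assumptions, not the paper's setup:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

sentence = "Maria teaches in Munich"
labels = ["noun", "verb", "adjective", "preposition"]  # assumed POS verbalizers
label_ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(l)[0]) for l in labels]

for word in sentence.split():
    # One cloze query per token: ask the MLM what kind of word this is.
    prompt = f'{sentence} In this sentence, "{word}" is a {tokenizer.mask_token}.'
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    print(word, "->", labels[int(logits[label_ids].argmax())])
```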
no code implementations • 11 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang
The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their weak in-context learning (ICL) performance in low-resource languages.
no code implementations • 1 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang
The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla.
1 code implementation • 8 Oct 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze
Pretrained multilingual encoder models can directly perform zero-shot multilingual tasks or linguistic probing by reformulating the input examples into cloze-style prompts.
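For instance, a sentiment example can be recast as a cloze prompt and scored with a multilingual masked language model. A minimal sketch using the Hugging Face fill-mask pipeline; the model name and prompt template are illustrative assumptions, not the paper's exact configuration:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# Reformulate a (German) sentiment example as a cloze-style prompt.
review = "Das Essen war ausgezeichnet."
prompt = f"{review} Das war <mask>."

for candidate in fill_mask(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```

The predicted fill words can then be mapped onto task labels, which is what lets an encoder perform the task zero-shot, without any task-specific head.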
no code implementations • 9 Aug 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze
However, training an automatic syntactic analysis system for ancient languages relying solely on annotated parse data is a formidable task, due to the inherent challenges in building treebanks for such languages.
1 code implementation • 3 Aug 2023 • Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, Ercong Nie
Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability.
1 code implementation • 15 Jul 2023 • Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze
We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze the variation trends of prompt-based finetuning performance in cross-lingual transfer across different few-shot and full-data settings.
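As a sketch of what prompt-based finetuning looks like in this setting (a PET-style pattern plus verbalizer; the pattern, verbalizer, and model below are assumptions for illustration): each input is wrapped in a pattern with a mask slot, labels are mapped to vocabulary tokens, and training minimizes cross-entropy over the verbalizer logits at the mask position.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

verbalizer = {"positive": "good", "negative": "bad"}  # assumed label -> token map
label_ids = {label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])
             for label, word in verbalizer.items()}

def label_scores(text):
    """Score each label via the MLM logit of its verbalizer token at the mask."""
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]
    return {label: logits[i].item() for label, i in label_ids.items()}

print(label_scores("The service was friendly and fast."))
```

For cross-lingual transfer, only the pattern and verbalizer need adapting; the same objective applies as long as the verbalizer tokens exist in the model's vocabulary.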
1 code implementation • 19 Dec 2022 • Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze
Multilingual Pretrained Language Models (MPLMs) have demonstrated strong multilinguality in recent empirical cross-lingual transfer studies.
1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti
While direct identification of proteasomal cleavage in vitro is cumbersome and low-throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics.
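A minimal sketch of this implicit-labeling idea: given a source protein and an MHC-presented epitope found within it, the residues flanking the epitope's N- and C-termini serve as positive cleavage examples. The function name, window size, and sequences below are illustrative assumptions, not the paper's pipeline:

```python
def cleavage_windows(protein, epitope, flank=3):
    """Return residue windows around the N- and C-termini of an epitope in `protein`."""
    start = protein.find(epitope)
    if start == -1:
        return None  # epitope not found in this protein
    end = start + len(epitope)  # position just after the C-terminal residue
    n_term = protein[max(0, start - flank):start + flank]
    c_term = protein[max(0, end - flank):end + flank]
    return n_term, c_term

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
print(cleavage_windows(protein, "QRQISFVK"))  # windows implied by the epitope termini
```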