Search Results for author: Ercong Nie

Found 13 papers, 6 papers with code

Decomposed Prompting: Unveiling Multilingual Linguistic Structure Knowledge in English-Centric Large Language Models

no code implementations • 28 Feb 2024 • Ercong Nie, Shuzhou Yuan, Bolei Ma, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

Despite the predominance of English in their training data, English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks, raising questions about the depth and nature of their cross-lingual capabilities.

Tasks: Llama, Part-Of-Speech Tagging, +1

GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network

no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze

Large Language Models (LLMs) exhibit strong In-Context Learning (ICL) capabilities when prompted with demonstrations.

Tasks: In-Context Learning, text-classification, +1
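
The demonstration-based ICL setting referenced above can be made concrete with a minimal sketch (hypothetical task and examples, not the paper's code):

```python
# Minimal ICL sketch: labeled demonstrations are concatenated in front
# of the query so that a frozen LLM can infer the task from the prompt.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want my money back.", "negative"),
]
query = "A tedious, overlong film."

prompt = ""
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

# Feeding this prompt to a causal LLM and reading off the next token
# yields the prediction; GNNavi studies how information flows through
# such prompts.
print(prompt)
```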

Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers

no code implementations • 18 Feb 2024 • Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Färber

Large Language Models (LLMs) possess outstanding capabilities in addressing various natural language processing (NLP) tasks.

Tasks: Text Classification
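
The layer-cutting idea named in the title can be illustrated with a hedged sketch (assuming a HuggingFace-style encoder; the paper's exact slimming strategy may differ):

```python
# Hedged sketch of layer pruning: keep only the bottom k transformer
# layers of a pretrained encoder, then finetune the smaller model.
import torch.nn as nn
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
k = 6  # hypothetical choice: keep 6 of the 12 layers

model.encoder.layer = nn.ModuleList(model.encoder.layer[:k])
model.config.num_hidden_layers = k

# The truncated model has far fewer encoder parameters and can be
# finetuned on a downstream task such as text classification.
print(sum(p.numel() for p in model.parameters()))
```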

ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks

1 code implementation • 29 Jan 2024 • Bolei Ma, Ercong Nie, Shuzhou Yuan, Helmut Schmid, Michael Färber, Frauke Kreuter, Hinrich Schütze

However, most previous studies primarily focused on sentence-level classification tasks, and only a few considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging.

Tasks: Benchmarking, In-Context Learning, +8
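
A simplified sketch of token-level prompt decomposition (the template and label set are illustrative, not ToPro's exact prompts):

```python
# ToPro-style idea, simplified: instead of one sentence-level prompt,
# each token receives its own prompt asking for its label.
sentence = ["Angela", "Merkel", "visited", "Paris", "."]

def token_prompt(tokens: list[str], i: int) -> str:
    # Hypothetical template for NER-style labeling.
    return (f"Sentence: {' '.join(tokens)}\n"
            f'What is the entity type of the word "{tokens[i]}"?')

prompts = [token_prompt(sentence, i) for i in range(len(sentence))]
# Each prompt is scored against label verbalizers (e.g. "person",
# "location", "other"), and the best-scoring label is assigned to
# the corresponding token.
```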

From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL

no code implementations • 11 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang

The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages.

Tasks: In-Context Learning, Retrieval
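
The cross-lingual retrieval-augmented ICL pipeline can be sketched as follows (encoder name and data are illustrative assumptions, not the paper's setup):

```python
# Hedged sketch: retrieve the most similar high-resource (English)
# example for a low-resource query and use it as an ICL demonstration.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
english_pool = [
    ("The service was excellent.", "positive"),
    ("The product broke after one day.", "negative"),
]
query = "পণ্যটি একদিন পরেই নষ্ট হয়ে গেছে।"  # low-resource query (Bangla)

pool_emb = encoder.encode([t for t, _ in english_pool], convert_to_tensor=True)
query_emb = encoder.encode(query, convert_to_tensor=True)
best = util.cos_sim(query_emb, pool_emb)[0].argmax().item()

demo_text, demo_label = english_pool[best]
prompt = (f"Review: {demo_text}\nSentiment: {demo_label}\n\n"
          f"Review: {query}\nSentiment:")
```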

Crosslingual Retrieval Augmented In-context Learning for Bangla

no code implementations • 1 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang

The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla.

Tasks: In-Context Learning, Retrieval

Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot Performance via Probability Calibration

1 code implementation • 8 Oct 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze

Pretrained multilingual encoder models can directly perform zero-shot multilingual tasks or linguistic probing by reformulating the input examples into cloze-style prompts.

Tasks: Position
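
A hedged sketch of the cloze-style prompting plus calibration idea (verbalizers, template, and the content-free calibration input are assumptions; the paper's calibration method may differ in detail):

```python
# Sketch: label-word probabilities from the real input are divided by
# those from a content-free input to reduce the encoder's prior bias.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="xlm-roberta-base")
labels = ["good", "bad"]  # hypothetical verbalizers

def label_scores(text: str) -> dict:
    preds = unmasker(f"{text} It was <mask>.", targets=labels)
    return {p["token_str"].strip(): p["score"] for p in preds}

real = label_scores("The plot is gripping and the acting is superb.")
prior = label_scores("N/A")  # content-free input approximates the prior

calibrated = {label: real[label] / prior[label] for label in labels}
print(max(calibrated, key=calibrated.get))
```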

Cross-Lingual Constituency Parsing for Middle High German: A Delexicalized Approach

no code implementations • 9 Aug 2023 • Ercong Nie, Helmut Schmid, Hinrich Schütze

However, training an automatic syntactic analysis system for ancient languages solely relying on annotated parse data is a formidable task due to the inherent challenges in building treebanks for such languages.

Tasks: Constituency Parsing, Cross-Lingual Transfer
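
The delexicalized approach named in the title can be shown in a few lines (words and tags are illustrative):

```python
# Sketch of delexicalization: word forms are replaced by POS tags, so a
# constituency parser trained on a high-resource related language can
# be applied to Middle High German without any shared vocabulary.
tagged = [("der", "DET"), ("künec", "NOUN"), ("rîtet", "VERB")]
delexicalized = [pos for _, pos in tagged]
print(delexicalized)  # ['DET', 'NOUN', 'VERB'] -> fed to the parser
```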

Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models

1 code implementation • 3 Aug 2023 • Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, Ercong Nie

Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability.

Tasks: GPT-3.5, In-Context Learning, +2

Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding

1 code implementation • 15 Jul 2023 • Bolei Ma, Ercong Nie, Helmut Schmid, Hinrich Schütze

We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze the variation trends of prompt-based finetuning performance in cross-lingual transfer across different few-shot and full-data settings.

Tasks: Natural Language Inference, Natural Language Understanding, +4
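
Prompt-based finetuning, as contrasted with vanilla finetuning in this paper, can be sketched with a pattern-verbalizer pair (the template and label words are hypothetical, not the paper's exact setup):

```python
# Sketch: an NLI example is recast as a cloze task; the MLM is then
# finetuned to predict the verbalizer word at the <mask> position.
PATTERN = "{premise}? <mask>, {hypothesis}"
VERBALIZER = {"entailment": "Yes", "neutral": "Maybe",
              "contradiction": "No"}

def to_cloze(premise: str, hypothesis: str) -> str:
    return PATTERN.format(premise=premise, hypothesis=hypothesis)

# Vanilla finetuning would instead train a fresh classification head on
# the [CLS] embedding; prompt-based finetuning reuses the pretrained
# MLM head, which can matter in few-shot settings.
print(to_cloze("A man is sleeping.", "A person is resting."))
```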

Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages

1 code implementation • 19 Dec 2022 • Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze

Multilingual Pretrained Language Models (MPLMs) have demonstrated strong multilingual capabilities in recent empirical cross-lingual transfer studies.

Tasks: Cross-Lingual Transfer, Natural Language Inference, +3

What cleaves? Is proteasomal cleavage prediction reaching a ceiling?

1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti

While direct identification of proteasomal cleavage in vitro is cumbersome and low-throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics.

Tasks: Benchmarking, Denoising
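
The core data-construction step, inferring cleavage events from the termini of MHC-presented epitopes, can be sketched as follows (sequences are made up for illustration):

```python
# Sketch: the C-terminus of a presented epitope marks a proteasomal
# cleavage site in its source protein; a residue window around that
# position becomes a positive example for a cleavage predictor.
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
epitope = "QISFVKSHF"  # hypothetical MHC-presented peptide

start = protein.find(epitope)
cleave_pos = start + len(epitope)  # cleavage occurs after the C-terminus
window = protein[max(0, cleave_pos - 4):cleave_pos + 4]
print(window)  # local context fed to the prediction model
```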
