Search Results for author: Jiuding Sun

Found 7 papers, 4 papers with code

Standardizing the Measurement of Text Diversity: A Tool and a Comparative Analysis of Scores

no code implementations • 1 Mar 2024 • Chantal Shaib, Joe Barrow, Jiuding Sun, Alexa F. Siu, Byron C. Wallace, Ani Nenkova

The applicability of scores extends beyond analysis of generative models; for example, we highlight applications on instruction-tuning datasets and human-produced texts.

Future Lens: Anticipating Subsequent Tokens from a Single Hidden State

no code implementations • 8 Nov 2023 • Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, David Bau

More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position $t$ in an input, can we reliably anticipate the tokens that will appear at positions $\geq t + 2$?

Evaluating the Zero-shot Robustness of Instruction-tuned Language Models

1 code implementation • 20 Jun 2023 • Jiuding Sun, Chantal Shaib, Byron C. Wallace

To answer the former, we collect a set of 319 instructions manually written by NLP practitioners for over 80 unique tasks included in widely used benchmarks, and we evaluate the variance and average performance of these instructions as compared to instruction phrasings observed during instruction fine-tuning.

Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction

1 code implementation • 23 May 2023 • Ji Qi, Chuchun Zhang, Xiaozhi Wang, Kaisheng Zeng, Jifan Yu, Jinxin Liu, Jiuding Sun, Yuxiang Chen, Lei Hou, Juanzi Li, Bin Xu

In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift variously.

Tasks: Language Modelling • Large Language Model • +1

Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing

no code implementations • 4 Oct 2022 • Lunyiu Nie, Jiuding Sun, Yanlin Wang, Lun Du, Lei Hou, Juanzi Li, Shi Han, Dongmei Zhang, Jidong Zhai

The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task.

Tasks: Hallucination • Semantic Parsing • +1

GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation

1 code implementation • 24 May 2022 • Lunyiu Nie, Shulin Cao, Jiaxin Shi, Jiuding Sun, Qi Tian, Lei Hou, Juanzi Li, Jidong Zhai

Owing to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by the complexity of dealing with both input semantics and output syntax.

Tasks: Few-Shot Learning • Semantic Parsing
