1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu
Prompt-based fine-tuning of pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.
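A minimal sketch of the prompt-based setup this line refers to: classification is recast as a cloze task where a masked language model fills a [MASK] slot and a verbalizer maps candidate words back to labels. The template and verbalizer words below are illustrative assumptions, not the paper's actual prompts.

```python
# Sketch of prompt-based classification as masked-word prediction.
# TEMPLATE and VERBALIZER are hypothetical examples for illustration.

TEMPLATE = "{text} It was [MASK]."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def build_prompt(text):
    """Wrap an input sentence in the cloze template."""
    return TEMPLATE.format(text=text)

def label_from_prediction(predicted_word):
    """A real model would score verbalizer words at the [MASK] position;
    here we just map a predicted word back to its class label."""
    return VERBALIZER.get(predicted_word, "unknown")
```

For example, `build_prompt("The movie was fun.")` produces `"The movie was fun. It was [MASK]."`, and a model predicting "great" at the mask yields the label "positive".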
1 code implementation • NAACL 2018 • Meizhi Ju, Makoto Miwa, Sophia Ananiadou
Each flat NER layer is based on the state-of-the-art flat NER model, which captures sequential context representations with a bidirectional Long Short-Term Memory (LSTM) layer and feeds them to a cascaded CRF layer.
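To make the CRF step concrete, here is a minimal pure-Python sketch of Viterbi decoding, the inference procedure a CRF layer uses to pick the best tag sequence from per-token emission scores plus tag-transition scores. The toy scores and BIO tag set are illustrative assumptions, not the paper's trained parameters.

```python
# Viterbi decoding for a linear-chain CRF over BIO tags (toy example).
# emissions: list of {tag: score} per token; transitions: {(prev, cur): score}.

def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence under a linear-chain model."""
    # dp[tag] = (best score of a path ending in tag, that path)
    dp = {t: (emissions[0][t], [t]) for t in tags}
    for obs in emissions[1:]:
        new_dp = {}
        for t in tags:
            prev = max(tags, key=lambda p: dp[p][0] + transitions[(p, t)])
            score = dp[prev][0] + transitions[(prev, t)] + obs[t]
            new_dp[t] = (score, dp[prev][1] + [t])
        dp = new_dp
    best = max(tags, key=lambda t: dp[t][0])
    return dp[best][1]

TAGS = ["B", "I", "O"]
# Flat transition scores, except "I after O" is heavily penalized
# and "I after B" is encouraged (standard BIO constraints, toy values).
TRANSITIONS = {(p, t): 0.0 for p in TAGS for t in TAGS}
TRANSITIONS[("O", "I")] = -10.0
TRANSITIONS[("B", "I")] = 1.0

# Toy emission scores for a 3-token sentence, e.g. "Barack Obama spoke".
EMISSIONS = [
    {"B": 2.0, "I": 0.0, "O": 1.0},
    {"B": 0.0, "I": 1.5, "O": 1.0},
    {"B": 0.0, "I": 0.0, "O": 2.0},
]
```

With these scores, `viterbi_decode(EMISSIONS, TRANSITIONS, TAGS)` returns `["B", "I", "O"]`: the transition penalty keeps "I" from following "O", which is exactly the kind of sequential constraint the CRF layer enforces on top of the BiLSTM's emissions.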
Ranked #10 on Nested Mention Recognition on ACE 2005