1 code implementation • 27 Jun 2023 • Liliang Ren, Mankeerat Sidhu, Qi Zeng, Revanth Gangi Reddy, Heng Ji, ChengXiang Zhai
Existing reference-free turn-level evaluation metrics for chatbots inadequately capture the interaction between the user and the system.
1 code implementation • NeurIPS 2023 • Liliang Ren, Yang Liu, Shuohang Wang, Yichong Xu, Chenguang Zhu, ChengXiang Zhai
To validate the effectiveness of SMA on sequence modeling, we design a novel neural architecture, SeqBoat, which employs SMA to sparsely activate a Gated Attention Unit (GAU) based on the state representations learned from an SSM.
Ranked #2 on Long-range modeling on LRA
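The entry above describes sparsely activating an attention unit based on state representations learned from an SSM. A minimal sketch of that idea, assuming a hypothetical per-token gating projection `w_gate` and a simplified single-head attention in place of the paper's Gated Attention Unit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sma_sketch(x, ssm_states, w_gate, threshold=0.5):
    """Sparsely activate attention on tokens selected by a learned gate.

    x:          (T, d) token representations
    ssm_states: (T, d) state representations from an SSM (assumed given)
    w_gate:     (d,)   hypothetical gating projection
    """
    gates = sigmoid(ssm_states @ w_gate)       # per-token activation score
    idx = np.where(gates > threshold)[0]       # tokens routed to attention
    y = x.copy()                               # unselected tokens pass through
    if idx.size > 0:
        q = k = v = x[idx]                     # simplified single-head attention
        scores = q @ k.T / np.sqrt(x.shape[1])
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)
        y[idx] = attn @ v                      # attention output for selected tokens
    return y, idx
```

Because attention runs only over the selected tokens, its cost scales with the number of activated positions rather than the full sequence length.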
1 code implementation • 23 Oct 2022 • Liliang Ren, Zixuan Zhang, Han Wang, Clare R. Voss, ChengXiang Zhai, Heng Ji
Modern large-scale Pre-trained Language Models (PLMs) have achieved tremendous success on a wide range of downstream tasks.
Ranked #6 on Few-shot NER on Few-NERD (INTRA) (using extra training data)
1 code implementation • Findings (ACL) 2021 • Liliang Ren, Chenkai Sun, Heng Ji, Julia Hockenmaier
Text-to-Graph extraction aims to automatically extract information graphs consisting of mentions and types from natural language texts.
Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)
no code implementations • 24 Mar 2020 • Liliang Ren, Zhuonan Hao
Thus, the generalization ability of CNNs is limited, since coordinate information is crucial for a model to learn affine transformations, which operate directly on the coordinates of each pixel.
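One common way to give a CNN explicit access to coordinate information is to append normalized coordinate channels to its input, in the spirit of the observation above. A minimal sketch (the channel layout and `[-1, 1]` normalization are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def add_coord_channels(feature_map):
    """Append two channels holding normalized y/x coordinates.

    feature_map: (C, H, W) array; returns a (C + 2, H, W) array where the
    extra channels range linearly from -1 to 1 along each spatial axis.
    """
    C, H, W = feature_map.shape
    ys = np.linspace(-1.0, 1.0, H).reshape(H, 1).repeat(W, axis=1)  # row coords
    xs = np.linspace(-1.0, 1.0, W).reshape(1, W).repeat(H, axis=0)  # column coords
    return np.concatenate([feature_map, ys[None], xs[None]], axis=0)
```

With these channels, a convolution filter can read each pixel's position directly, which is what learning an affine transformation on pixel coordinates requires.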
no code implementations • 5 Dec 2019 • Liliang Ren, Gen Sun, Jiaman Wu
Natural gradient has recently been introduced to the field of boosting to enable generic probabilistic prediction capability.
1 code implementation • IJCNLP 2019 • Liliang Ren, Jianmo Ni, Julian McAuley
Experiments on both multi-domain and single-domain dialogue state tracking datasets show that our model not only scales easily with an increasing number of pre-defined domains and slots but also reaches state-of-the-art performance.
Ranked #14 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0
1 code implementation • EMNLP 2018 • Liliang Ren, Kaige Xie, Lu Chen, Kai Yu
Dialogue state tracking is the core part of a spoken dialogue system.
no code implementations • WS 2018 • Kaige Xie, Cheng Chang, Liliang Ren, Lu Chen, Kai Yu
Dialogue state tracking (DST), when formulated as a supervised learning problem, relies on labelled data.
1 code implementation • 4 May 2017 • Liliang Ren
We propose the Recurrent Soft Attention Model, which integrates visual attention from the original image into an LSTM memory cell through a down-sample network.
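The mechanism described above can be sketched in simplified form: soft attention weights over down-sampled image regions produce a context vector that updates a recurrent state. This is a minimal illustration assuming a plain tanh recurrence in place of the paper's LSTM memory cell, with a hypothetical projection `w_h`:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_attention_step(region_feats, h_prev, w_h):
    """One step of recurrent soft attention (simplified).

    region_feats: (R, d) features of down-sampled image regions
    h_prev:       (d,)   previous recurrent state
    w_h:          (2d, d) hypothetical recurrent projection
    """
    alpha = softmax(region_feats @ h_prev)    # soft attention over regions
    context = alpha @ region_feats            # attention-weighted summary
    h_new = np.tanh(np.concatenate([context, h_prev]) @ w_h)
    return h_new, alpha
```

Each step re-weights the image regions conditioned on the current state, so attention shifts as the recurrence unrolls.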