no code implementations • 25 Apr 2024 • Shufan Wang, Guojun Xiong, Shichen Zhang, Huacheng Zeng, Jian Li, Shivendra Panwar
We study the data packet transmission problem (mmDPT) in dense cell-free millimeter wave (mmWave) networks, i.e., users send data packet requests to access points (APs) via uplinks, and APs transmit the requested data packets to users via downlinks.
no code implementations • 16 Dec 2023 • Shufan Wang, Guojun Xiong, Jian Li
Restless multi-armed bandits (RMABs) have been widely used to model sequential decision-making problems with constraints.
no code implementations • 24 May 2023 • Shufan Wang, Sebastien Jean, Sailik Sengupta, James Gung, Nikolaos Pappas, Yi Zhang
In executable task-oriented semantic parsing, the system aims to translate users' utterances in natural language to machine-interpretable programs (API calls) that can be executed according to pre-defined API specifications.
no code implementations • 24 May 2023 • Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, Mohit Iyyer
Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity, relative to a baseline Transformer LM, for the majority of tokens in the WikiText-103 test set; overall perplexity is lower only because perplexity decreases dramatically on a smaller set of tokens after interpolation.
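The finding above concerns token-level interpolation of a base LM with a retrieval (kNN) distribution. A minimal sketch with made-up probabilities (not from the paper) shows how most tokens can get worse while overall perplexity still improves:

```python
import math

# Hypothetical per-token probabilities of the gold token under a base LM
# and under a retrieval (kNN) distribution; values are illustrative only.
p_lm  = [0.40, 0.30, 0.05, 0.25]
p_knn = [0.20, 0.10, 0.90, 0.15]

lam = 0.25  # interpolation weight on the retrieval distribution

def ppl(probs):
    """Perplexity = exp(mean negative log-likelihood)."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# kNN-LM-style interpolation: p = lam * p_knn + (1 - lam) * p_lm
p_mix = [lam * k + (1 - lam) * l for k, l in zip(p_knn, p_lm)]

# Most tokens are *hurt* (lower probability) by interpolation...
worse = sum(m < l for m, l in zip(p_mix, p_lm))
print(f"tokens hurt by interpolation: {worse}/{len(p_lm)}")
# ...yet one large gain on a hard token lowers the overall perplexity.
print(f"base ppl={ppl(p_lm):.3f}  interpolated ppl={ppl(p_mix):.3f}")
```

Here 3 of 4 tokens lose probability mass, but the dramatic improvement on the single hard token (0.05 → 0.26) dominates the average negative log-likelihood.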
1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.
Ranked #9 on Language Modelling on WikiText-103
1 code implementation • 7 Oct 2022 • Zhichao Yang, Shufan Wang, Bhanu Pratap Singh Rawat, Avijit Mitra, Hong Yu
Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average length of 3,000+ tokens.
Ranked #1 on Medical Code Prediction on MIMIC-III
no code implementations • NAACL 2022 • Shufan Wang, Fangyuan Xu, Laure Thompson, Eunsol Choi, Mohit Iyyer
We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but also that standard evaluation metrics such as ROUGE are insufficient to judge exemplification quality.
no code implementations • 26 Feb 2022 • Guojun Xiong, Shufan Wang, Jian Li, Rahul Singh
Using this structural result, we establish the indexability of our problem, and employ the Whittle index policy to minimize average latency.
1 code implementation • EMNLP 2021 • Shufan Wang, Laure Thompson, Mohit Iyyer
Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness.
1 code implementation • EMNLP 2020 • Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, Mohit Iyyer
Systems for story generation are asked to produce plausible and enjoyable stories given an input context.
no code implementations • 11 Sep 2020 • Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, and that both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.
no code implementations • 14 Jun 2020 • Cecilia Ferrando, Shufan Wang, Daniel Sheldon
The goal of this paper is to develop a practical and general-purpose approach to construct confidence intervals for differentially private parametric estimation.
1 code implementation • NeurIPS 2020 • Shufan Wang, Jian Li, Shiqiang Wang
We obtain both deterministic and randomized online algorithms with provably improved performance when either a single ML prediction or multiple ML predictions are used to make decisions.
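As background on prediction-augmented online decisions (not this paper's algorithm), the classic deterministic ski-rental rule of Purohit et al. (2018) illustrates the single-prediction setting: trust the prediction by buying early when it forecasts a long season, and hedge by a parameter lam in (0, 1]:

```python
import math

def ski_rental_cost(x, yhat, b, lam):
    """Total cost when skiing x actual days with buy price b, renting
    costs 1/day, yhat is a prediction of x, and lam in (0, 1] controls
    trust in the prediction (smaller lam = more trust).
    Deterministic rule of Purohit et al. (2018)."""
    if yhat >= b:
        buy_day = math.ceil(lam * b)   # prediction says "long": buy early
    else:
        buy_day = math.ceil(b / lam)   # prediction says "short": buy late
    if x < buy_day:
        return x                       # rented every day
    return (buy_day - 1) + b           # rented until buy_day, then bought

# Good prediction: buy price 10, predicted 20 days, actually 20 days.
print(ski_rental_cost(x=20, yhat=20, b=10, lam=0.5))
# Bad prediction: predicted 1 day, actually 100 days.
print(ski_rental_cost(x=100, yhat=1, b=10, lam=0.5))
```

With lam = 0.5, a correct "long season" prediction gives cost 14 versus the offline optimum of 10 (within the 1 + lam consistency bound), while a wrong "short season" prediction still keeps cost within the 1 + 1/lam robustness bound.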
no code implementations • NAACL 2019 • Shufan Wang, Mohit Iyyer
Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis.