1 code implementation • 13 Oct 2020 • Pulkit Sharma, Shezan Rohinton Mirzan, Apurva Bhandari, Anish Pimpley, Abhiram Eswaran, Soundar Srinivasan, Liqun Shao
Understanding predictions made by Machine Learning models is critical in many applications.
no code implementations • WS 2020 • Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk
In this paper, we detail novel strategies for interpolating personalized language models, along with methods for handling out-of-vocabulary (OOV) tokens, both of which improve personalized language model performance.
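As a rough illustration of the interpolation idea (not the paper's actual method), a personalized unigram model can be linearly mixed with a background model, so that tokens unseen by the user's model (OOV for personalization) still receive probability mass from the background. The weight `lam` and the toy counts below are assumptions for the sketch.

```python
from collections import Counter

def interpolate_lm(personal_counts, background_counts, lam=0.7):
    """Linearly interpolate a personalized unigram LM with a background LM.

    Tokens unseen in the personal model fall back to the
    background probability, scaled by (1 - lam).
    """
    p_total = sum(personal_counts.values())
    b_total = sum(background_counts.values())

    def prob(token):
        p = personal_counts.get(token, 0) / p_total if p_total else 0.0
        b = background_counts.get(token, 0) / b_total if b_total else 0.0
        return lam * p + (1 - lam) * b

    return prob

# Toy data: the user says "hello world", the population says "hello there".
personal = Counter({"hello": 3, "world": 1})
background = Counter({"hello": 1, "there": 3})
prob = interpolate_lm(personal, background, lam=0.5)
```

With `lam=0.5`, "there" is OOV for the personal model yet still scores 0.375 via the background term, which is the behavior the interpolation is meant to provide.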
no code implementations • 23 Aug 2019 • Liqun Shao, Yiwen Zhu, Abhiram Eswaran, Kristin Lieber, Janhavi Mahajan, Minsoo Thigpen, Sudhir Darbha, SiQi Liu, Subru Krishnan, Soundar Srinivasan, Carlo Curino, Konstantinos Karanasos
In contrast, in Griffin we cast the problem as a regression task that predicts the runtime of a job, and we show how the relative contributions of the features used to train our interpretable model can be exploited to rank the potential causes of job slowdowns.
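The ranking idea can be sketched with a linear (hence interpretable) runtime model: each feature's contribution to a predicted slowdown is its weight times its deviation from a baseline run, and sorting those contributions ranks candidate causes. The feature names, weights, and values below are illustrative, not Griffin's actual model.

```python
def rank_causes(weights, baseline, observed):
    """Rank slowdown causes under a linear runtime model.

    For runtime = sum(w_i * x_i), a feature's contribution to the
    predicted slowdown is w_i * (observed_i - baseline_i).
    """
    contributions = {
        name: weights[name] * (observed[name] - baseline[name])
        for name in weights
    }
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical job features: data read spiked, task count barely moved.
weights = {"data_read_gb": 2.0, "num_tasks": 0.01, "queue_wait_s": 1.0}
baseline = {"data_read_gb": 10, "num_tasks": 100, "queue_wait_s": 5}
observed = {"data_read_gb": 40, "num_tasks": 110, "queue_wait_s": 5}
ranked = rank_causes(weights, baseline, observed)
```

Here the data-read spike dominates the ranking, matching the intuition that the largest weighted deviation is the most likely cause of the slowdown.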
no code implementations • 17 Aug 2019 • Jingwen Wang, Hao Zhang, Cheng Zhang, Wenjing Yang, Liqun Shao, Jie Wang
To overcome this obstacle, we present NDORGS (Numerous Documents' Overview Report Generation Scheme), which integrates text filtering, keyword scoring, single-document summarization (SDS), topic modeling, MDS, and title generation to produce a coherent, well-structured overview report (ORPT).
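A staged pipeline of this kind can be sketched as a chain of functions; the stage implementations below are trivial placeholders (frequency-based keywords, keyword-filtered summaries) standing in for the far richer components NDORGS actually uses.

```python
from collections import Counter

def filter_texts(docs):
    """Text filtering stage: drop near-empty documents."""
    return [d for d in docs if len(d.split()) >= 3]

def score_keywords(docs, top_k=3):
    """Keyword scoring stage: rank words by raw frequency (a placeholder)."""
    words = Counter(w.lower() for d in docs for w in d.split())
    return [w for w, _ in words.most_common(top_k)]

def summarize(docs, keywords):
    """Summarization stage: keep documents that mention a top keyword."""
    return [d for d in docs if any(k in d.lower() for k in keywords)]

def make_title(keywords):
    """Title generation stage: join the top keywords."""
    return " ".join(k.capitalize() for k in keywords)

def overview_report(docs):
    """Chain the stages into a single overview-report generator."""
    kept = filter_texts(docs)
    keywords = score_keywords(kept)
    return {"title": make_title(keywords),
            "summary": summarize(kept, keywords)}

report = overview_report([
    "cloud jobs often slow down",
    "cloud costs rise with jobs",
    "hi",
])
```

The point of the sketch is the architecture, not the stages: each component can be swapped for a real SDS, topic-modeling, or MDS module without changing the pipeline's shape.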
no code implementations • 1 Oct 2017 • Liqun Shao, Jie Wang
We study automatic title generation for a given block of text and present DTATG, a method for generating titles.
no code implementations • 1 Oct 2017 • Liqun Shao, Hao Zhang, Ming Jia, Jie Wang
We show that the orderings of the ROUGE and WESM scores of our algorithms are highly comparable, suggesting that WESM may serve as a viable alternative for measuring the quality of a summary.
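Comparing the orderings induced by two metrics is naturally expressed as a rank correlation; a minimal sketch using Spearman's rho (without tie handling) is below. The score values are made up for illustration and are not results from the paper.

```python
def ranks(xs):
    """Map each value to its rank (0 = smallest); ties not handled."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation: 1.0 means identical orderings."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical scores for four summarizers under the two metrics.
rouge = [0.42, 0.38, 0.51, 0.29]
wesm = [0.63, 0.60, 0.71, 0.55]
rho = spearman(rouge, wesm)
```

Because the two score lists above order the four systems identically, rho comes out at 1.0; a high rho is the kind of evidence that would support using one metric in place of the other.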