Search Results for author: Liqun Shao

Found 6 papers, 1 paper with code

Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

no code implementations • WS 2020 • Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk

In this paper, we detail novel strategies for interpolating personalized language models and methods to handle out-of-vocabulary (OOV) tokens to improve personalized language models.

Language Modelling
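The abstract above describes interpolating a personalized language model with a background model and handling OOV tokens. A minimal sketch of the generic idea (linear interpolation with an OOV floor) follows; the toy unigram models, the weight `lam`, and the `oov_prob` floor are all invented for illustration and are not taken from the paper.

```python
# Sketch only: linear interpolation of a personalized LM and a global LM,
# with a crude uniform-floor fallback for tokens unseen by both models.

def interpolate(p_user, p_global, lam, oov_prob=1e-6):
    """Return an interpolated unigram probability function.

    lam weights the personalized model; (1 - lam) weights the global one.
    Tokens missing from both models get a small OOV probability.
    """
    def prob(token):
        pu = p_user.get(token)
        pg = p_global.get(token)
        if pu is None and pg is None:
            return oov_prob  # OOV handling: uniform floor (an assumption)
        return lam * (pu or 0.0) + (1.0 - lam) * (pg or 0.0)
    return prob

# Toy unigram models (probabilities sum to 1 within each model).
user_lm = {"meeting": 0.5, "deck": 0.3, "ship": 0.2}
global_lm = {"meeting": 0.2, "the": 0.6, "ship": 0.2}

p = interpolate(user_lm, global_lm, lam=0.7)
print(p("meeting"))  # 0.7*0.5 + 0.3*0.2 = 0.41
```

In practice the interpolation weight would be tuned per user on held-out text; the paper's actual strategies for choosing weights and handling OOV tokens are more involved than this floor.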

Griffon: Reasoning about Job Anomalies with Unlabeled Data in Cloud-based Platforms

no code implementations • 23 Aug 2019 • Liqun Shao, Yiwen Zhu, Abhiram Eswaran, Kristin Lieber, Janhavi Mahajan, Minsoo Thigpen, Sudhir Darbha, SiQi Liu, Subru Krishnan, Soundar Srinivasan, Carlo Curino, Konstantinos Karanasos

In contrast, in Griffon we recast the problem as a regression one that predicts the runtime of a job, and show how the relative contributions of the features used to train our interpretable model can be exploited to rank the potential causes of job slowdowns.

Time Series Analysis
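The Griffon abstract describes fitting an interpretable model to predict job runtime and ranking slowdown causes by feature contribution. A minimal sketch of that generic recipe, not the Griffon implementation: the feature names, synthetic data, and the choice of ordinary least squares are all assumptions for illustration.

```python
# Sketch: interpretable runtime regression, then rank features for one job
# by their contribution (coefficient * feature value) to the prediction.
import numpy as np

rng = np.random.default_rng(0)
features = ["input_gb", "task_count", "queue_wait_s"]  # invented names

# Synthetic training data: runtime is (mostly) linear in the features.
X = rng.uniform(0, 10, size=(200, 3))
true_w = np.array([5.0, 2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=200)

# Fit ordinary least squares -- a simple stand-in for an interpretable model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# For one slow job, decompose the predicted runtime into per-feature terms.
job = np.array([9.0, 1.0, 8.0])
contrib = w * job
ranking = [features[i] for i in np.argsort(contrib)[::-1]]
print(ranking)  # largest contributor first
```

The point is the decomposition step: because the model is linear, each feature's share of the predicted runtime is directly readable, which is what makes ranking candidate causes possible.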

Generating an Overview Report over Many Documents

no code implementations • 17 Aug 2019 • Jingwen Wang, Hao Zhang, Cheng Zhang, Wenjing Yang, Liqun Shao, Jie Wang

To overcome this obstacle, we present NDORGS (Numerous Documents' Overview Report Generation Scheme) that integrates text filtering, keyword scoring, single-document summarization (SDS), topic modeling, MDS, and title generation to generate a coherent, well-structured ORPT.

Attribute Decision Making +2

DTATG: An Automatic Title Generator based on Dependency Trees

no code implementations • 1 Oct 2017 • Liqun Shao, Jie Wang

We study automatic title generation for a given block of text and present DTATG, a method for generating titles.

Sentence

Efficient and Effective Single-Document Summarizations and A Word-Embedding Measurement of Quality

no code implementations • 1 Oct 2017 • Liqun Shao, Hao Zhang, Ming Jia, Jie Wang

We show that the orderings of the ROUGE and WESM scores of our algorithms are highly comparable, suggesting that WESM may serve as a viable alternative for measuring the quality of a summary.

Clustering • Keyword Extraction
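The abstract above compares WESM, a word-embedding measurement of summary quality, against ROUGE. A minimal sketch of one plausible embedding-based measure (details assumed, not taken from the paper): score a summary by the cosine similarity between the mean embedding of its words and that of a reference. The tiny hand-made embedding table is purely illustrative.

```python
# Sketch of an embedding-based summary-quality score (WESM-style, assumed):
# cosine similarity between averaged word vectors of summary and reference.
import math

# Toy 2-d embeddings; synonyms sit near each other.
emb = {
    "cat": [1.0, 0.0], "feline": [0.9, 0.1],
    "dog": [0.0, 1.0], "canine": [0.1, 0.9],
}

def mean_vec(words):
    vecs = [emb[w] for w in words if w in emb]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

reference = ["cat", "dog"]
good_summary = ["feline", "canine"]   # paraphrase of the reference
bad_summary = ["dog", "dog"]          # misses half the content

good = cosine(mean_vec(good_summary), mean_vec(reference))
bad = cosine(mean_vec(bad_summary), mean_vec(reference))
print(round(good, 3), round(bad, 3))
```

Unlike ROUGE's exact n-gram overlap, an embedding measure of this kind rewards paraphrases, which is the property that would let the two scores produce comparable orderings while disagreeing on surface wording.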
