no code implementations • COLING 2022 • Yu Yu, Abdul Rafae Khan, Jia Xu
The quality of Natural Language Processing (NLP) models is typically measured by the accuracy or error rate of a predefined test set.
no code implementations • COLING 2022 • Yu Yu, Shahram Khadivi, Jia Xu
This paper introduces our Diversity Advanced Actor-Critic reinforcement learning (A2C) framework (DAAC) to improve the generalization and accuracy of Natural Language Processing (NLP).
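The DAAC implementation itself is not public; purely as orientation on the underlying technique, below is a minimal sketch of a generic advantage actor-critic (A2C) update in PyTorch. The network layout, the `Categorical` policy head, and the entropy coefficient are illustrative assumptions, not details from the paper, and the diversity-advanced components of DAAC are not represented.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ActorCritic(nn.Module):
    """Shared encoder with separate policy (actor) and value (critic) heads."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy = nn.Linear(hidden, n_actions)    # actor head
        self.value = nn.Linear(hidden, 1)             # critic head

    def forward(self, obs):
        h = self.encoder(obs)
        return Categorical(logits=self.policy(h)), self.value(h).squeeze(-1)

def a2c_loss(model, obs, actions, returns, entropy_coef=0.01):
    """Standard A2C objective: policy gradient with a learned value baseline."""
    dist, values = model(obs)
    advantages = returns - values.detach()            # A(s, a) = R - V(s)
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy_bonus = dist.entropy().mean()
    return policy_loss + 0.5 * value_loss - entropy_coef * entropy_bonus
```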
no code implementations • 19 Jan 2024 • Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, Jari Kolehmainen, Roger Ren, Denis Filimonov, Prashanth G. Shivakumar, Ankur Gandhe, Ariya Rastow, Jia Xu, Ivan Bulyko, Andreas Stolcke
The use of low-rank adaptation (LoRA) with frozen pretrained language models (PLMs) has become increasingly popular as a mainstream, resource-efficient modeling approach for memory-constrained hardware.
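As background on the technique, here is a minimal sketch of a LoRA layer in PyTorch: it wraps a frozen linear projection with a trainable low-rank update, W + (alpha/r)·BA. The rank, scaling, and initialization choices are illustrative defaults, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # keep pretrained weights frozen
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = base(x) + scaling * x A^T B^T; only A and B receive gradients
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Only `lora_A` and `lora_B` are updated during fine-tuning, which is what keeps the memory footprint small on constrained hardware.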
no code implementations • 26 Sep 2023 • Yu Yu, Chao-Han Huck Yang, Jari Kolehmainen, Prashanth G. Shivakumar, Yile Gu, Sungho Ryu, Roger Ren, Qi Luo, Aditya Gourav, I-Fan Chen, Yi-Chieh Liu, Tuan Dinh, Ankur Gandhe, Denis Filimonov, Shalini Ghosh, Andreas Stolcke, Ariya Rastow, Ivan Bulyko
We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring.
2 code implementations • 13 Feb 2023 • Yongqi Li, Yu Yu, Tieyun Qian
Despite the recent success of several two-stage prototypical networks on the few-shot named entity recognition (NER) task, over-detected false spans at the span detection stage and inaccurate, unstable prototypes at the type classification stage remain challenging problems.
Ranked #2 on Few-shot NER on Few-NERD (INTRA)
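The paper's two-stage model is available in the linked code; purely as an illustration of the prototype-based type classification idea, the sketch below computes class prototypes as mean support embeddings and assigns each query span to its nearest prototype. The shapes and the Euclidean metric are assumptions of this sketch, not the authors' exact design.

```python
import torch

def assign_by_prototype(support_emb, support_labels, query_emb, n_classes):
    """Nearest-prototype classification as in prototypical networks.

    support_emb:    (n_support, d) embeddings of labeled support spans
    support_labels: (n_support,)   integer class ids (each class present)
    query_emb:      (n_query, d)   embeddings of detected query spans
    """
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                                  # (n_classes, d)
    dists = torch.cdist(query_emb, prototypes)          # (n_query, n_classes)
    return dists.argmin(dim=1)                          # nearest prototype wins
```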
no code implementations • 30 Jun 2021 • Nikhil Muralidhar, Sathappah Muthiah, Patrick Butler, Manish Jain, Yu Yu, Katy Burne, Weipeng Li, David Jones, Prakash Arunachalam, Hays 'Skip' McCormick, Naren Ramakrishnan
We describe lessons learned from developing and deploying machine learning models at scale across the enterprise in a range of financial analytics applications.
1 code implementation • 29 Jan 2020 • Zhao Chen, Yu Yu, Xiangkun Liu, Zuhui Fan
We apply the inverse-Gaussianization method proposed in \citealt{arXiv:1607.05007} to rapidly produce weak lensing convergence maps and investigate the peak statistics, including the peak height counts and peak steepness counts, in these mocks.
Cosmology and Nongalactic Astrophysics
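The mocks come from the inverse-Gaussianization pipeline in the linked code; as a toy illustration of peak height counting on a convergence map, the NumPy sketch below flags local maxima and histograms their heights. The 3x3-neighbourhood maximum criterion, the periodic boundary, and the binning are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peak_height_counts(kappa, bins):
    """Count local maxima of a convergence map, binned by peak height.

    kappa: 2D array (convergence map)
    bins:  1D array of peak-height bin edges
    """
    # a pixel is a peak if it equals the maximum of its 3x3 neighbourhood
    is_peak = kappa == maximum_filter(kappa, size=3, mode="wrap")
    counts, _ = np.histogram(kappa[is_peak], bins=bins)
    return counts
```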
no code implementations • CVPR 2020 • Yu Yu, Jean-Marc Odobez
Although automatic gaze estimation is important to a wide variety of application areas, training accurate and robust gaze models is difficult, largely because collecting large and diverse data is hard (annotating 3D gaze is expensive and existing datasets use different setups).
no code implementations • CVPR 2019 • Yu Yu, Gang Liu, Jean-Marc Odobez
In this work, we address the problem of person-specific gaze model adaptation from only a few reference training samples.
no code implementations • 20 Apr 2019 • Gang Liu, Yu Yu, Kenneth A. Funes Mora, Jean-Marc Odobez
Non-invasive gaze estimation methods usually regress gaze directions directly from a single face or eye image.
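As a schematic of the direct-regression baseline this line refers to, the sketch below maps an eye image to a 2D gaze angle (yaw, pitch) with a small CNN. The architecture, input size, and angular parameterization are illustrative assumptions rather than the model proposed in the paper.

```python
import torch
import torch.nn as nn

class GazeRegressor(nn.Module):
    """Toy CNN that regresses (yaw, pitch) gaze angles from a 36x60 eye crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 18 x 30
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 9 x 15
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 9 * 15, 128), nn.ReLU(),
            nn.Linear(128, 2),                            # (yaw, pitch) in radians
        )

    def forward(self, eye_image):                         # (B, 1, 36, 60)
        return self.head(self.features(eye_image))
```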