Search Results for author: Yilin Shen

Found 40 papers, 5 papers with code

Compositional Generalization in Spoken Language Understanding

no code implementations25 Dec 2023 Avik Ray, Yilin Shen, Hongxia Jin

State-of-the-art spoken language understanding (SLU) models have shown tremendous success on benchmark SLU datasets, yet they still fail in many practical scenarios due to the lack of model compositionality when trained on limited training data.

Spoken Language Understanding

Token Fusion: Bridging the Gap between Token Pruning and Token Merging

no code implementations2 Dec 2023 Minchul Kim, Shangqian Gao, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

In this paper, we introduce "Token Fusion" (ToFu), a method that amalgamates the benefits of both token pruning and token merging.

Computational Efficiency Image Generation
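The snippet above only states that ToFu combines the benefits of token pruning and token merging; the sketch below illustrates the two primitives being combined. The importance scoring and nearest-neighbour merge rule are assumptions made for illustration (hypothetical `reduce_tokens` helper), not the procedure used in the paper.

```python
import torch

def reduce_tokens(x, scores, keep, mode="merge"):
    """Reduce a sequence of ViT tokens x: (batch, n, dim) to `keep` tokens.

    `scores`: (batch, n) per-token importance (e.g. attention received).
    Both the scoring rule and the merge rule are illustrative guesses.
    """
    idx = scores.argsort(dim=1, descending=True)
    kept_idx, dropped_idx = idx[:, :keep], idx[:, keep:]
    kept = torch.gather(x, 1, kept_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
    if mode == "prune":
        return kept                       # discard low-importance tokens outright
    dropped = torch.gather(x, 1, dropped_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
    # "merge": fold each dropped token into its most similar kept token (mean pool)
    sim = torch.einsum("bnd,bkd->bnk", dropped, kept)
    assign = sim.argmax(dim=-1)           # nearest kept token for every dropped token
    merged = kept.clone()
    counts = torch.ones(kept.shape[:2], device=x.device)
    for b in range(x.size(0)):            # simple loop for clarity
        merged[b].index_add_(0, assign[b], dropped[b])
        counts[b].index_add_(0, assign[b], torch.ones(assign.size(1), device=x.device))
    return merged / counts.unsqueeze(-1)
```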

Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters

no code implementations30 Nov 2023 James Seale Smith, Yen-Chang Hsu, Zsolt Kira, Yilin Shen, Hongxia Jin

We show that STAMINA outperforms the prior SOTA for the setting of text-to-image continual customization on a 50-concept benchmark composed of landmarks and human faces, with no stored replay data.

Continual Learning Hard Attention +1

Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA

no code implementations12 Apr 2023 James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira, Yilin Shen, Hongxia Jin

We show that C-LoRA not only outperforms several baselines for our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but that we achieve a new state-of-the-art in the well-established rehearsal-free continual learning setting for image classification.

Continual Learning Image Classification

To Wake-up or Not to Wake-up: Reducing Keyword False Alarm by Successive Refinement

no code implementations6 Apr 2023 Yashas Malur Saidutta, Rakshith Sharma Srinivasa, Ching-Hua Lee, Chouchang Yang, Yilin Shen, Hongxia Jin

We show that existing deep keyword spotting mechanisms can be improved by Successive Refinement, where the system first classifies whether the input audio is speech or not, followed by whether the input is keyword-like or not, and finally classifies which keyword was uttered.

Keyword Spotting
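The abstract spells out the cascade directly: speech vs. non-speech, then keyword-like vs. other speech, then which keyword. A minimal sketch of that successive-refinement pipeline follows; the classifier interfaces and thresholds (`speech_clf`, `keyword_clf`, `which_kw_clf`, `t_speech`, `t_keyword`) are assumptions of this sketch, not the paper's models.

```python
import numpy as np

def successive_refinement_kws(audio_feats, speech_clf, keyword_clf, which_kw_clf,
                              t_speech=0.5, t_keyword=0.5):
    """Cascade sketch of the successive-refinement idea described above.

    Each *_clf is assumed to return a probability (or class posterior) for its
    stage. Rejecting early at a cheap stage is what suppresses false alarms.
    """
    if speech_clf(audio_feats) < t_speech:        # stage 1: speech vs. non-speech
        return None                               # reject: background noise, silence
    if keyword_clf(audio_feats) < t_keyword:      # stage 2: keyword-like vs. other speech
        return None                               # reject: ordinary speech
    probs = which_kw_clf(audio_feats)             # stage 3: which keyword was uttered
    return int(np.argmax(probs))
```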

ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation

no code implementations30 Jan 2023 Kaiwen Zhou, Kaizhi Zheng, Connor Pryor, Yilin Shen, Hongxia Jin, Lise Getoor, Xin Eric Wang

Such object navigation tasks usually require large-scale training in visual environments with labeled objects, which generalizes poorly to novel objects in unknown environments.

Efficient Exploration Language Modelling +2

GOHSP: A Unified Framework of Graph and Optimization-based Heterogeneous Structured Pruning for Vision Transformer

no code implementations13 Jan 2023 Miao Yin, Burak Uzkent, Yilin Shen, Hongxia Jin, Bo Yuan

We first develop a graph-based ranking for measuring the importance of attention heads, and the extracted importance information is further integrated into an optimization-based procedure to impose heterogeneous structured sparsity patterns on the ViT models.

Numerical Optimizations for Weighted Low-rank Estimation on Language Model

no code implementations2 Nov 2022 Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, Hongxia Jin

However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.

Language Modelling
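The snippet contrasts standard SVD, which treats every parameter as equally important, with a weighted alternative. The sketch below shows a plain truncated SVD next to a row-weighted variant; using per-row importance estimates (e.g. Fisher-style scores) is an assumption of this sketch, not necessarily the weighting used in the paper.

```python
import numpy as np

def truncated_svd(W, rank):
    """Plain rank-r approximation: every entry of W counts equally."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

def weighted_truncated_svd(W, row_importance, rank):
    """Row-weighted variant: approximate D @ W instead of W, then undo the
    scaling, so rows with larger (strictly positive) importance are
    reconstructed more faithfully."""
    d = np.sqrt(np.asarray(row_importance, dtype=float))  # per-row weights
    U, S, Vt = np.linalg.svd(W * d[:, None], full_matrices=False)
    return ((U[:, :rank] * S[:rank]) @ Vt[:rank]) / d[:, None]
```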

A Closer Look at Knowledge Distillation with Features, Logits, and Gradients

no code implementations18 Mar 2022 Yen-Chang Hsu, James Smith, Yilin Shen, Zsolt Kira, Hongxia Jin

Knowledge distillation (KD) is a substantial strategy for transferring learned knowledge from one neural network model to another.

Incremental Learning Knowledge Distillation +2

MGA-VQA: Multi-Granularity Alignment for Visual Question Answering

no code implementations25 Jan 2022 Peixi Xiong, Yilin Shen, Hongxia Jin

In contrast to previous works, our model splits alignment into different levels to achieve learning better correlations without needing additional data and annotations.

Question Answering Visual Question Answering

Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding

no code implementations NAACL 2021 Ting Hua, Yilin Shen, Changsheng Zhao, Yen-Chang Hsu, Hongxia Jin

Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different.

Continual Learning domain classification +1

Lite-MDETR: A Lightweight Multi-Modal Detector

no code implementations CVPR 2022 Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin

The key primitive is a Dictionary-Lookup-Transformation (DLT), proposed to replace the Linear Transformation (LT) in multi-modal detectors, where each LT weight is approximately factorized into a smaller dictionary, index, and coefficient.

object-detection Object Detection +3
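To make the dictionary/index/coefficient factorization above concrete, here is a minimal sketch of a dictionary-lookup linear layer: each output row of the weight matrix is rebuilt from a few atoms of a small shared dictionary. The factorization granularity and shapes (`dictionary`, `indices`, `coeffs`) are assumptions of this sketch, not the exact DLT design.

```python
import numpy as np

def dlt_linear(x, dictionary, indices, coeffs, bias=None):
    """Sketch of a dictionary-lookup linear layer in the spirit of DLT.

    Instead of storing a dense weight W (out_dim, in_dim), each weight row is
    rebuilt as a coefficient-scaled lookup into a small shared dictionary:
      dictionary: (dict_size, in_dim)  shared basis vectors
      indices:    (out_dim, k)         which atoms each output row uses
      coeffs:     (out_dim, k)         per-atom coefficients
    """
    W = np.einsum("ok,okd->od", coeffs, dictionary[indices])  # reconstruct (out_dim, in_dim)
    y = x @ W.T
    return y if bias is None else y + bias
```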

Automatic Mixed-Precision Quantization Search of BERT

no code implementations30 Dec 2021 Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin

Knowledge distillation, weight pruning, and quantization are known to be the main directions in model compression.

Knowledge Distillation Model Compression +2

Exploring Covariate and Concept Shift for Detection and Calibration of Out-of-Distribution Data

no code implementations28 Oct 2021 Junjiao Tian, Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira

We are the first to propose a method that works well across both OOD detection and calibration and under different types of shifts.

Out of Distribution (OOD) Detection

DictFormer: Tiny Transformer with Shared Dictionary

no code implementations ICLR 2022 Qian Lou, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

DictFormer significantly reduces the redundancy in the transformer's parameters by replacing them with a compact shared dictionary, a few unshared coefficients, and indices.

Abstractive Text Summarization Language Modelling +2

Exploring Covariate and Concept Shift for Detection and Confidence Calibration of Out-of-Distribution Data

no code implementations29 Sep 2021 Junjiao Tian, Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira

To this end, we theoretically derive two score functions for OOD detection, the covariate shift score and the concept shift score, based on a decomposition of the KL-divergence, and propose a geometrically-inspired method (Geometric ODIN) to improve OOD detection under both shifts with only in-distribution data.

Out of Distribution (OOD) Detection
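The snippet mentions a KL-divergence decomposition into covariate-shift and concept-shift terms. The standard chain rule of KL divergence, shown below, is the natural reading of that split; how the paper turns these terms into per-sample scores is not covered by the snippet, so treat the mapping as an assumption.

```latex
% Chain rule of KL divergence: the total distribution shift splits into a
% covariate-shift term (input marginal) and a concept-shift term (label
% posterior given the input).
\mathrm{KL}\!\left(p(x,y)\,\|\,q(x,y)\right)
  = \underbrace{\mathrm{KL}\!\left(p(x)\,\|\,q(x)\right)}_{\text{covariate shift}}
  + \underbrace{\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(p(y\mid x)\,\|\,q(y\mid x)\right)\right]}_{\text{concept shift}}
```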

An Adversarial Learning based Multi-Step Spoken Language Understanding System through Human-Computer Interaction

no code implementations6 Jun 2021 Yu Wang, Yilin Shen, Hongxia Jin

In this paper, we introduce a novel multi-step spoken language understanding system based on adversarial learning that can leverage multi-round user feedback to update slot values.

Dialogue State Tracking Semantic Frame Parsing +2

SAFENet: A Secure, Accurate and Fast Neural Network Inference

no code implementations ICLR 2021 Qian Lou, Yilin Shen, Hongxia Jin, Lei Jiang

A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model.

Modeling Token-level Uncertainty to Learn Unknown Concepts in SLU via Calibrated Dirichlet Prior RNN

no code implementations16 Oct 2020 Yilin Shen, Wenhu Chen, Hongxia Jin

We design a Dirichlet Prior RNN to model high-order uncertainty, which degenerates to a softmax layer for RNN model training.

slot-filling Slot Filling +1
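A minimal evidential-style sketch of the idea above: the RNN tagger's logits parameterize a per-token Dirichlet whose normalized mean recovers the ordinary softmax (the "degenerate" case used for training), while the total concentration gives a token-level uncertainty. The exponential parameterization and the uncertainty formula are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def dirichlet_token_uncertainty(logits):
    """Per-token Dirichlet sketch over slot labels.

    logits: (seq_len, num_labels) from the RNN tagger.
    """
    alpha = np.exp(logits)                                  # Dirichlet concentrations
    expected_probs = alpha / alpha.sum(-1, keepdims=True)   # equals softmax(logits)
    k = logits.shape[-1]
    uncertainty = k / alpha.sum(-1)                         # high when evidence is low
    return expected_probs, uncertainty
```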

Generating Dialogue Responses from a Semantic Latent Space

no code implementations EMNLP 2020 Wei-Jen Ko, Avik Ray, Yilin Shen, Hongxia Jin

Existing open-domain dialogue generation models are usually trained to mimic the gold response in the training set using cross-entropy loss on the vocabulary.

Dialogue Generation valid

Reward Constrained Interactive Recommendation with Natural Language Feedback

no code implementations4 May 2020 Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen, Lawrence Carin

Text-based interactive recommendation provides richer user feedback and has demonstrated advantages over traditional interactive recommender systems.

Recommendation Systems reinforcement-learning +2

PGLP: Customizable and Rigorous Location Privacy through Policy Graph

3 code implementations4 May 2020 Yang Cao, Yonghui Xiao, Shun Takagi, Li Xiong, Masatoshi Yoshikawa, Yilin Shen, Jinfei Liu, Hongxia Jin, Xiaofeng Xu

Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.

Cryptography and Security Computers and Society

Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data

2 code implementations CVPR 2020 Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira

Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise.

Out-of-Distribution Detection Out of Distribution (OOD) Detection
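The snippet above is only motivation; as a rough illustration of the decomposed-confidence idea this line of work is known for, the sketch below expresses each logit as a quotient of a class-dependent numerator and an input-dependent denominator, trained with ordinary cross-entropy on in-distribution data only. The specific forms of `h`, `g`, and the test-time score are assumptions of this sketch rather than a verified reproduction of the paper.

```python
import torch
import torch.nn as nn

class DecomposedConfidenceHead(nn.Module):
    """Sketch of a decomposed-confidence classifier head: logit_i = h_i(x) / g(x).

    Trained with cross-entropy on in-distribution data only; at test time,
    max_i h_i(x) (or g(x)) can serve as an OOD score.
    """

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.h = nn.Linear(feat_dim, num_classes)            # class-dependent numerator
        self.g = nn.Sequential(nn.Linear(feat_dim, 1),       # input-dependent denominator
                               nn.BatchNorm1d(1), nn.Sigmoid())

    def forward(self, features):
        h = self.h(features)
        g = self.g(features)
        logits = h / g            # used with cross-entropy during training
        return logits, h, g       # h (or g) doubles as the OOD score at test time
```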

Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning

no code implementations NeurIPS 2019 Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen

Text-based interactive recommendation provides richer user preferences and has demonstrated advantages over traditional interactive recommender systems.

Recommendation Systems reinforcement-learning +2

A Progressive Model to Enable Continual Learning for Semantic Slot Filling

no code implementations IJCNLP 2019 Yilin Shen, Xiangyu Zeng, Hongxia Jin

ProgModel consists of a novel context gate that transfers previously learned knowledge to a small expanded component, while enabling this new component to be trained quickly on new data.

Continual Learning slot-filling +2

Fast Domain Adaptation of Semantic Parsers via Paraphrase Attention

no code implementations WS 2019 Avik Ray, Yilin Shen, Hongxia Jin

However, state-of-the-art attention-based neural parsers are slow to retrain, which inhibits real-time domain adaptation.

Domain Adaptation

Iterative Delexicalization for Improved Spoken Language Understanding

no code implementations15 Oct 2019 Avik Ray, Yilin Shen, Hongxia Jin

Recurrent neural network (RNN) based joint intent classification and slot tagging models have achieved tremendous success in recent years for building spoken language understanding and dialog systems.

intent-classification Intent Classification +1

SkillBot: Towards Automatic Skill Development via User Demonstration

no code implementations NAACL 2019 Yilin Shen, Avik Ray, Hongxia Jin, Sandeep Nama

We present SkillBot, which takes a first step toward enabling end users to teach new skills to personal assistants (PA).

Natural Language Understanding

Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded

no code implementations ICCV 2019 Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh

Many vision and language models suffer from poor visual grounding - often falling back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image.

Image Captioning Question Answering +2

A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling

1 code implementation NAACL 2018 Yu Wang, Yilin Shen, Hongxia Jin

The most effective algorithms are based on sequence-to-sequence (or "encoder-decoder") model structures, and generate the intents and semantic tags either using separate models or a joint model.

Intent Detection +4
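To illustrate the task setup described above (one utterance-level intent plus per-token slot tags), here is a generic joint model with a shared encoder and two heads. This is an illustration only, not the bi-model architecture proposed in the paper; the layer sizes and mean-pooled intent head are assumptions.

```python
import torch
import torch.nn as nn

class JointIntentSlotTagger(nn.Module):
    """Generic joint model for intent detection and slot filling:
    a shared BiLSTM encoder with one head per task."""

    def __init__(self, vocab_size, num_intents, num_slots, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, num_intents)   # utterance-level
        self.slot_head = nn.Linear(2 * hidden, num_slots)       # token-level

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))  # (batch, seq, 2*hidden)
        intent_logits = self.intent_head(states.mean(dim=1))
        slot_logits = self.slot_head(states)             # tag every token
        return intent_logits, slot_logits
```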

A Variational Dirichlet Framework for Out-of-Distribution Detection

no code implementations ICLR 2019 Wenhu Chen, Yilin Shen, Hongxia Jin, William Wang

With the recent rapid development of deep learning, deep neural networks have been widely adopted in many real-life applications.

Out-of-Distribution Detection Variational Inference

User Information Augmented Semantic Frame Parsing using Coarse-to-Fine Neural Networks

no code implementations18 Sep 2018 Yilin Shen, Xiangyu Zeng, Yu Wang, Hongxia Jin

The results show that our approach leverages such simple user information to outperform state-of-the-art approaches by 0.25% for intent detection and 0.31% for slot filling using standard training data.

Intent Detection Semantic Frame Parsing +3

Robust Spoken Language Understanding via Paraphrasing

no code implementations17 Sep 2018 Avik Ray, Yilin Shen, Hongxia Jin

Learning intents and slot labels from user utterances is a fundamental step in all spoken language understanding (SLU) and dialog systems.

Spoken Language Understanding

CRUISE: Cold-Start New Skill Development via Iterative Utterance Generation

no code implementations ACL 2018 Yilin Shen, Avik Ray, Abhishek Patel, Hongxia Jin

We present a system, CRUISE, that guides ordinary software developers to build a high quality natural language understanding (NLU) engine from scratch.

Natural Language Understanding

Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning

no code implementations22 Jun 2018 Xinlei Pan, Eshed Ohn-Bar, Nicholas Rhinehart, Yan Xu, Yilin Shen, Kris M. Kitani

The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states.

reinforcement-learning Reinforcement Learning (RL)
