no code implementations • EANCS 2021 • Yi Huang, Junlan Feng, Xiaoting Wu, Xiaoyu Du
Our findings are: the performance variance of generative DSTs is due not only to the model structure itself but also to the distribution of cross-domain values.
no code implementations • 5 Mar 2024 • Ce Chi, Xing Wang, Kexin Yang, Zhiyan Song, Di Jin, Lin Zhu, Chao Deng, Junlan Feng
A channel identifier, a global mixing module and a self-contextual attention module are devised in InjectTST.
no code implementations • 20 Feb 2024 • Yanan Chen, Zihao Cui, Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
In this study, we present a novel weighting prediction approach, which explicitly learns the task relationships from downstream training information to address the core challenge of universal speech enhancement.
no code implementations • 1 Jan 2024 • Ruizhuo Xu, Ke Wang, Chao Deng, Mei Wang, Xi Chen, Wenhui Huang, Junlan Feng, Weihong Deng
With the increasing availability of consumer depth sensors, 3D face recognition (FR) has attracted growing attention.
1 code implementation • 17 Nov 2023 • Shenghao Yang, Chenyang Wang, Yankai Liu, Kangping Xu, Weizhi Ma, Yiqun Liu, Min Zhang, Haitao Zeng, Junlan Feng, Chao Deng
In this paper, we propose CoWPiRec, an approach of Collaborative Word-based Pre-trained item representation for Recommendation.
1 code implementation • 17 Nov 2023 • Hong Liu, Yucheng Cai, Yuan Zhou, Zhijian Ou, Yi Huang, Junlan Feng
Inspired by the recently emerging prompt tuning method, which performs well on dialog systems, we propose to use the prompt pool method: we maintain a pool of key-value paired prompts and select prompts from the pool according to the distance between the dialog history and the prompt keys.
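As an illustrative sketch only (not the authors' implementation), prompt-pool selection reduces to a nearest-key lookup: encode the dialog history, score it against every prompt key, and take the closest prompts. All shapes and names below are hypothetical:

```python
import numpy as np

# Hypothetical pool of key-value paired prompts: each key is a learned
# vector; each value is a short prompt embedding sequence.
rng = np.random.default_rng(0)
pool_size, dim, prompt_len = 10, 16, 4
prompt_keys = rng.normal(size=(pool_size, dim))
prompt_values = rng.normal(size=(pool_size, prompt_len, dim))

def select_prompts(history_embedding, keys, values, top_k=3):
    """Pick the top_k prompts whose keys are closest (by cosine
    similarity) to the encoded dialog history."""
    h = history_embedding / np.linalg.norm(history_embedding)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    scores = k @ h                        # cosine similarity to each key
    chosen = np.argsort(-scores)[:top_k]  # indices of the closest keys
    return values[chosen]                 # prompts to prepend to the input

history = rng.normal(size=(dim,))  # stand-in for an encoded dialog history
selected = select_prompts(history, prompt_keys, prompt_values)
print(selected.shape)  # (3, 4, 16)
```

In the continual-learning setting such a pool lets different dialog domains reuse or specialize prompts without retraining the backbone.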
no code implementations • 23 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Chao Deng, Junlan Feng
Cascading multiple pre-trained models is an effective way to compose an end-to-end system.
no code implementations • 20 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Yanhan Xu, Chao Deng, Junlan Feng
Self-supervised pre-trained models such as HuBERT and WavLM leverage unlabeled speech data for representation learning and offer significant improvements for numerous downstream tasks.
1 code implementation • 1 Sep 2023 • Yifan Pu, Yizeng Han, Yulin Wang, Junlan Feng, Chao Deng, Gao Huang
Since images belonging to the same meta-category usually share similar visual appearances, mining discriminative visual cues is the key to distinguishing fine-grained categories.
1 code implementation • ICCV 2023 • Yizeng Han, Dongchen Han, Zeyu Liu, Yulin Wang, Xuran Pan, Yifan Pu, Chao Deng, Junlan Feng, Shiji Song, Gao Huang
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
no code implementations • 12 Jun 2023 • Xing Wang, Zhendong Wang, Kexin Yang, Junlan Feng, Zhiyan Song, Chao Deng, Lin Zhu
To capture the intrinsic patterns of time series, we propose a novel deep learning network architecture, named Multi-resolution Periodic Pattern Network (MPPN), for long-term series forecasting.
1 code implementation • 25 May 2023 • Zi Liang, Pinghui Wang, Ruofei Zhang, Shuo Zhang, Xiaofan Ye, Yi Huang, Junlan Feng
Recent years have seen increasing concerns about the unsafe response generation of large-scale dialogue systems, where agents will learn offensive or biased behaviors from the real-world corpus.
1 code implementation • 22 May 2023 • Yucheng Cai, Hong Liu, Zhijian Ou, Yi Huang, Junlan Feng
Most existing task-oriented dialog (TOD) systems track dialog states in terms of slots and values and use them to query a database to get relevant knowledge to generate responses.
no code implementations • 9 Mar 2023 • Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng
Previous contrastive learning methods for sentence representations often focus on insensitive transformations to produce positive pairs, but neglect the role of sensitive transformations that are harmful to semantic representations.
1 code implementation • 28 Feb 2023 • Xing Wang, Kexin Yang, Zhendong Wang, Junlan Feng, Lin Zhu, Juan Zhao, Chao Deng
First, we apply adaptive hybrid graph learning to learn the compound spatial correlations among cell towers.
no code implementations • 27 Feb 2023 • Shuo Zhang, Junzhou Zhao, Pinghui Wang, Tianxiang Wang, Zi Liang, Jing Tao, Yi Huang, Junlan Feng
To cope with this problem, we explore improving multi-action dialog policy learning with explicit and implicit turn-level user feedback received for historical predictions (i.e., logged user feedback), which is cost-efficient to collect and faithful to real-world scenarios.
no code implementations • 26 Jan 2023 • Runze Lei, Pinghui Wang, Junzhou Zhao, Lin Lan, Jing Tao, Chao Deng, Junlan Feng, Xidian Wang, Xiaohong Guan
In this work, we propose FedCog, a novel FL framework for graph data that efficiently handles coupled graphs, a kind of distributed graph data that widely exists in real-world applications such as mobile carriers' communication networks and banks' transaction networks.
1 code implementation • 17 Oct 2022 • Hong Liu, Yucheng Cai, Zhijian Ou, Yi Huang, Junlan Feng
Second, an important ingredient in a US is that the user goal can be effectively incorporated and tracked; but how to flexibly integrate goal state tracking and develop an end-to-end trainable US for multiple domains has remained a challenge.
no code implementations • 13 Oct 2022 • Hong Liu, Zhijian Ou, Yi Huang, Junlan Feng
Recently, there has been progress in supervised fine-tuning of pretrained GPT-2 to build end-to-end task-oriented dialog (TOD) systems.
1 code implementation • 27 Sep 2022 • Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang, Junlan Feng
Recently, there has emerged a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games.
1 code implementation • COLING 2022 • Yutao Mou, Keqing He, Yanan Wu, Pei Wang, Jingang Wang, Wei Wu, Yi Huang, Junlan Feng, Weiran Xu
Traditional intent classification models are based on a pre-defined intent set and only recognize limited in-domain (IND) intent classes.
no code implementations • COLING 2022 • Guanting Dong, Daichi Guo, LiWen Wang, Xuefeng Li, Zechen Wang, Chen Zeng, Keqing He, Jinzheng Zhao, Hao Lei, Xinyue Cui, Yi Huang, Junlan Feng, Weiran Xu
Most existing slot filling models tend to memorize inherent patterns of entities and corresponding contexts from training data.
1 code implementation • SIGDIAL (ACL) 2022 • Yucheng Cai, Hong Liu, Zhijian Ou, Yi Huang, Junlan Feng
In this paper, we propose to apply JSA to semi-supervised learning of the latent state TOD models, which is referred to as JSA-TOD.
1 code implementation • 6 Jul 2022 • Zhijian Ou, Junlan Feng, Juanzi Li, Yakun Li, Hong Liu, Hao Peng, Yi Huang, Jiangjiang Zhao
A challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems, co-located with the SereTOD Workshop at EMNLP 2022.
no code implementations • 26 Jun 2022 • Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity.
Automatic Speech Recognition (ASR) +4
no code implementations • 16 Jun 2022 • Yingying Gao, Junlan Feng, Tianrui Wang, Chao Deng, Shilei Zhang
Analysis shows that our proposed approach brings better uniformity to the trained model and noticeably enlarges the CTC spikes.
Automatic Speech Recognition (ASR) +2
1 code implementation • 6 Jun 2022 • Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, the generation performance is largely restricted by the number of labeled data in downstream tasks, particularly in data-to-text generation tasks.
1 code implementation • 25 Apr 2022 • Shuo Zhang, Junzhou Zhao, Pinghui Wang, Yu Li, Yi Huang, Junlan Feng
Multi-action dialog policy (MADP), which generates multiple atomic dialog actions per turn, has been widely applied in task-oriented dialog systems to provide expressive and efficient system responses.
no code implementations • 19 Apr 2022 • Zhuoran Li, Xing Wang, Ling Pan, Lin Zhu, Zhendong Wang, Junlan Feng, Chao Deng, Longbo Huang
A2C-GS consists of three novel components, including a verifier to validate the correctness of a generated network topology, a graph neural network (GNN) to efficiently approximate topology rating, and a DRL actor layer to conduct a topology search.
2 code implementations • 13 Apr 2022 • Hong Liu, Yucheng Cai, Zhijian Ou, Yi Huang, Junlan Feng
Recently, Transformer based pretrained language models (PLMs), such as GPT2 and T5, have been leveraged to build generative task-oriented dialog (TOD) systems.
no code implementations • 1 Apr 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Junlan Feng, Shilei Zhang
Joint training of a speech enhancement (SE) model and an automatic speech recognition (ASR) model is a common solution for robust ASR in noisy environments.
no code implementations • 25 Feb 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Yanan Chen, Junlan Feng, Shilei Zhang
Therefore, we previously proposed a harmonic gated compensation network (HGCN), which predicts the full harmonic locations based on the unmasked harmonics and processes the output of a coarse enhancement module to recover the masked harmonics.
1 code implementation • 30 Jan 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Junlan Feng, Shilei Zhang
Mask processing in the time-frequency (T-F) domain through the neural network has been one of the mainstreams for single-channel speech enhancement.
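For readers unfamiliar with T-F masking, a minimal sketch of the pipeline (STFT, mask, multiply, iSTFT) follows. The oracle-style ratio mask computed from the clean reference stands in for the mask a neural network would predict; it is purely illustrative:

```python
import numpy as np
from scipy.signal import stft, istft

# Toy single-channel mixture: a sine "speech" signal plus white noise.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=fs)

# T-F masking pipeline: STFT -> estimate mask -> apply -> iSTFT.
_, _, spec = stft(noisy, fs=fs, nperseg=512)
# In a real system a network predicts the mask from the noisy input; here
# an oracle magnitude ratio mask from the clean reference is used instead.
_, _, clean_spec = stft(clean, fs=fs, nperseg=512)
mask = np.clip(np.abs(clean_spec) / (np.abs(spec) + 1e-8), 0.0, 1.0)
_, enhanced = istft(mask * spec, fs=fs, nperseg=512)
enhanced = enhanced[:fs]  # trim reconstruction padding to the input length
```

Because the mask attenuates T-F bins dominated by noise while leaving speech-dominated bins intact, the reconstruction is closer to the clean signal than the noisy mixture is.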
no code implementations • 1 Nov 2021 • Xing Wang, Juan Zhao, Lin Zhu, Xu Zhou, Zhao Li, Junlan Feng, Chao Deng, Yong Zhang
AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture various Receptive Fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge with multi-step forecasting.
2 code implementations • 9 Sep 2021 • Hong Liu, Yucheng Cai, Zhenru Lin, Zhijian Ou, Yi Huang, Junlan Feng
In this paper, we propose Variational Latent-State GPT model (VLS-GPT), which is the first to combine the strengths of the two approaches.
no code implementations • 1 Jan 2021 • Xiaolei Hua, Su Wang, Lin Zhu, Dong Zhou, Junlan Feng, Yiting Wang, Chao Deng, Shuo Wang, Mingtao Mei
However, due to the complex correlations and diverse temporal patterns of large-scale multivariate time series, building a general unsupervised anomaly detection model with higher F1-score and timeliness remains a challenging task.
no code implementations • 1 Jan 2021 • Xing Wang, Lin Zhu, Juan Zhao, Zhou Xu, Zhao Li, Junlan Feng, Chao Deng
Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management.
1 code implementation • 15 Dec 2020 • Shuo Zhang, Junzhou Zhao, Pinghui Wang, Nuo Xu, Yang Yang, Yiting Liu, Yi Huang, Junlan Feng
This will result in the issue of contract inconsistencies, which may severely impair the legal validity of the contract.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yi Huang, Junlan Feng, Shuo Ma, Xiaoyu Du, Xiaoting Wu
In this paper, we propose a meta-learning based semi-supervised explicit dialogue state tracker (SEDST) for neural dialogue generation, denoted as MEDST.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Fanyu Meng, Junlan Feng, Danping Yin, Si Chen, Min Hu
Syntactic information is essential for both sentiment analysis (SA) and aspect-based sentiment analysis (ABSA).
Aspect-Based Sentiment Analysis (ABSA) +2
1 code implementation • EMNLP 2020 • Yichi Zhang, Zhijian Ou, Huixin Wang, Junlan Feng
In this paper we aim at alleviating the reliance on belief state labels in building end-to-end dialog systems, by leveraging unlabeled dialog data towards semi-supervised learning.
Ranked #2 on End-To-End Dialogue Modelling on MULTIWOZ 2.1
no code implementations • ACL 2020 • Yi Huang, Junlan Feng, Min Hu, Xiaoting Wu, Xiaoyu Du, Shuo Ma
The state-of-the-art accuracy for DST is below 50% for a multi-domain dialogue task.
no code implementations • 4 Nov 2018 • Kai Hu, Zhijian Ou, Min Hu, Junlan Feng
Conditional random fields (CRFs) have been shown to be one of the most successful approaches to sequence labeling.
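As a refresher on the technique (a generic linear-chain CRF sketch, not this paper's model), the score of a tag sequence is the sum of per-position emission scores and tag-to-tag transition scores, and the best sequence is found by Viterbi dynamic programming. Random scores stand in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tags, seq_len = 3, 5
emissions = rng.normal(size=(seq_len, n_tags))   # e.g. from a neural encoder
transitions = rng.normal(size=(n_tags, n_tags))  # learned transition matrix

def sequence_score(tags):
    """Unnormalized CRF score: emissions plus pairwise transitions."""
    score = emissions[0, tags[0]]
    for i in range(1, len(tags)):
        score += transitions[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    return score

def viterbi():
    """Highest-scoring tag sequence, by dynamic programming."""
    dp = emissions[0].copy()
    back = np.zeros((seq_len, n_tags), dtype=int)
    for i in range(1, seq_len):
        # cand[j, k]: best score ending in tag k at step i via tag j at i-1
        cand = dp[:, None] + transitions + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    tags = [int(dp.argmax())]
    for i in range(seq_len - 1, 0, -1):
        tags.append(int(back[i, tags[-1]]))
    return tags[::-1]

best = viterbi()
# Viterbi is exact, so its path scores at least as high as any other sequence.
print(sequence_score(best) >= sequence_score([0] * seq_len))  # True
```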
no code implementations • 4 Nov 2018 • Yinpei Dai, Yichi Zhang, Zhijian Ou, Yanmeng Wang, Junlan Feng
Second, the one-hot encoding of slot labels ignores the semantic meanings and relations for slots, which are implicit in their natural language descriptions.
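To make the point concrete, the toy sketch below contrasts one-hot slot labels, under which every pair of distinct slots is equally dissimilar, with a simple bag-of-words encoding of natural-language slot descriptions. All slot names and descriptions are invented for illustration:

```python
import numpy as np

# Hypothetical slots with natural-language descriptions.
slots = {
    "arrive_by": "the time the user wants to arrive by",
    "leave_at": "the time the user wants to leave at",
    "hotel_name": "the name of the hotel",
}

# One-hot encoding: all distinct slots are orthogonal, so no relations.
one_hot = {s: np.eye(len(slots))[i] for i, s in enumerate(slots)}

# Description-based encoding: a normalized bag-of-words vector per slot.
vocab = sorted({w for d in slots.values() for w in d.split()})

def embed(desc):
    v = np.array([desc.split().count(w) for w in vocab], dtype=float)
    return v / np.linalg.norm(v)

emb = {s: embed(d) for s, d in slots.items()}
sim_time = emb["arrive_by"] @ emb["leave_at"]   # two time-related slots
sim_far = emb["arrive_by"] @ emb["hotel_name"]  # unrelated slots
print(sim_time > sim_far)  # True: descriptions expose slot relations
```

Even this crude encoding places the two time slots closer together than either is to the hotel slot, which is exactly the semantic structure a one-hot label space throws away.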