Search Results for author: Weibin Zhang

Found 7 papers, 1 paper with code

An inspection technology of inner surface of the fine hole based on machine vision

no code implementations · 15 Sep 2023 · Rongfang He, Weibin Zhang, Guofang Gao

Fine holes are important structural features of industrial components, and their inner surface quality is closely related to their function. To detect the quality of the inner surface of a fine hole, a special optical measurement system was investigated in this paper.

DWFormer: Dynamic Window transFormer for Speech Emotion Recognition

1 code implementation · 3 Mar 2023 · Shuaiqi Chen, Xiaofen Xing, Weibin Zhang, Weidong Chen, Xiangmin Xu

The self-attention mechanism is applied within windows to capture temporally important information locally in a fine-grained way.

Speech Emotion Recognition
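As a rough sketch of the idea described above (not the DWFormer implementation, whose windows are dynamic; fixed non-overlapping windows are assumed here for simplicity), self-attention restricted to local windows along the time axis might look like:

```python
import numpy as np

def windowed_self_attention(x, window):
    """Scaled dot-product self-attention applied independently within
    non-overlapping windows along the time axis.
    x: (T, d) sequence of frame features; T must be divisible by window."""
    T, d = x.shape
    out = np.empty_like(x)
    for start in range(0, T, window):
        seg = x[start:start + window]             # (w, d) local window
        scores = seg @ seg.T / np.sqrt(d)         # (w, w) pairwise similarity
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[start:start + window] = weights @ seg  # convex combination of frames
    return out
```

Each output frame attends only to frames inside its own window, which is what makes the attention local and fine-grained.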

The CORAL++ Algorithm for Unsupervised Domain Adaptation of Speaker Recognition

no code implementations · 2 Feb 2022 · Rongjin Li, Weibin Zhang, Dongpeng Chen

To alleviate the degradation caused by domain mismatch, we propose a new feature-based unsupervised domain adaptation algorithm.

Speaker Recognition · Unsupervised Domain Adaptation
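The paper's CORAL++ extends the classic CORAL algorithm; as a minimal sketch of the baseline idea (standard CORAL, not the proposed CORAL++ variant), feature-based adaptation aligns the second-order statistics of the source domain to the target domain:

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Classic CORAL: transform source features so their covariance
    matches the (unlabeled) target domain.
    source, target: (n, d) feature matrices. Returns transformed source."""
    def cov(x):
        xc = x - x.mean(axis=0)
        # Regularize for numerical stability of the Cholesky factorization.
        return xc.T @ xc / (len(x) - 1) + eps * np.eye(x.shape[1])
    cs, ct = cov(source), cov(target)
    whiten = np.linalg.inv(np.linalg.cholesky(cs))  # removes source covariance
    color = np.linalg.cholesky(ct)                  # imposes target covariance
    return (source - source.mean(axis=0)) @ whiten.T @ color.T
```

Because the method only needs covariance estimates, no target-domain labels are required, which is what makes it unsupervised.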

Multi-head Monotonic Chunkwise Attention For Online Speech Recognition

no code implementations · 1 May 2020 · Baiji Liu, Songjun Cao, Sining Sun, Weibin Zhang, Long Ma

Experiments on AISHELL-1 data show that the proposed model, along with the training strategies, improves the character error rate (CER) of MoChA from 8.96% to 7.68% on the test set.

Speech Recognition

Vehicle Tracking in Wireless Sensor Networks via Deep Reinforcement Learning

no code implementations · 22 Feb 2020 · Jun Li, Zhichao Xing, Weibin Zhang, Yan Lin, Feng Shu

Vehicle tracking has become one of the key applications of wireless sensor networks (WSNs) in the fields of rescue, surveillance, traffic monitoring, etc.

Reinforcement Learning (RL)

Essence Knowledge Distillation for Speech Recognition

no code implementations · 26 Jun 2019 · Zhenchuan Yang, Chun Zhang, Weibin Zhang, Jianxiu Jin, Dongpeng Chen

When the student model is trained with both the correct labels and the essence knowledge from the teacher model, it not only significantly outperforms another single model with the same architecture trained only on the correct labels, but also consistently outperforms the teacher model used to generate the soft labels.

Knowledge Distillation · Speech Recognition +1
