1 code implementation • 12 Oct 2023 • Jinye Yang, Ji Xu, Di Wu, Jianhang Tang, Shaobo Li, Guoyin Wang
The deviation of a classification model is caused by both class-wise and attribute-wise imbalance.
no code implementations • 12 Jun 2023 • Ji Xu, Yuan Xie, Wenchao Wang
Underwater acoustic target recognition is a challenging task owing to the intricate underwater environments and limited data availability.
no code implementations • 31 May 2023 • Yuan Xie, Jiawei Ren, Ji Xu
In our work, we propose to implement Underwater Acoustic Recognition based on Templates made up of rich relevant information (hereinafter called "UART").
no code implementations • 31 May 2023 • Yuan Xie, Jiawei Ren, Ji Xu
Background noise and variable channel transmission environment make it complicated to implement accurate ship-radiated noise recognition.
no code implementations • 24 Apr 2023 • Yuan Xie, Tianyu Chen, Ji Xu
Underwater acoustic recognition for ship-radiated signals has high practical application value due to the ability to recognize non-line-of-sight targets.
no code implementations • 11 Jan 2023 • Yao Xiao, Ji Xu, Jing Yang, Shaobo Li
Graph Convolutional Networks (GCNs) have proven successful in the field of semi-supervised node classification by extracting structural information from graph data.
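As a hedged illustration of the structural propagation the snippet refers to (the standard Kipf–Welling GCN layer, not necessarily the exact variant in this paper), a single graph-convolution layer can be sketched in NumPy; the toy graph and weights below are made up for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = D^{-1/2} (A + I) D^{-1/2} H W."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

# Toy graph: 3 nodes in a path 0-1-2, 2 features per node, identity weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
H_next = gcn_layer(A, H, W)   # each node now mixes its neighbors' features
```

Each output row is a degree-normalized average of the node's own features and its neighbors', which is how structural information enters the representation.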
1 code implementation • 17 Aug 2022 • Ji Xu, Gang Ren, Yao Xiao, Shaobo Li, Guoyin Wang
Optimal leading forest (OLF) has been observed to have the advantage of revealing the difference evolution along a path within a subtree.
no code implementations • 31 Mar 2022 • Zehui Yang, Yifan Chen, Lei Luo, Runyan Yang, Lingxuan Ye, Gaofeng Cheng, Ji Xu, Yaohui Jin, Qingqing Zhang, Pengyuan Zhang, Lei Xie, Yonghong Yan
As a Mandarin speech dataset designed for dialog scenarios with high quality and rich annotations, MagicData-RAMC enriches the data diversity in the Mandarin speech community and allows extensive research on a series of speech-related tasks, including automatic speech recognition, speaker diarization, topic detection, keyword search, text-to-speech, etc.
Automatic Speech Recognition (ASR) +3
1 code implementation • 22 Feb 2022 • Keqi Deng, Songjun Cao, Yike Zhang, Long Ma, Gaofeng Cheng, Ji Xu, Pengyuan Zhang
Recently, end-to-end automatic speech recognition models based on connectionist temporal classification (CTC) have achieved impressive results, especially when fine-tuned from wav2vec 2.0 models.
Automatic Speech Recognition (ASR) +4
no code implementations • NeurIPS 2021 • Junjie Ma, Ji Xu, Arian Maleki
We consider an inverse problem $\mathbf{y}= f(\mathbf{Ax})$, where $\mathbf{x}\in\mathbb{R}^n$ is the signal of interest, $\mathbf{A}$ is the sensing matrix, $f$ is a nonlinear function and $\mathbf{y} \in \mathbb{R}^m$ is the measurement vector.
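As a hedged sketch of this measurement model (not from the paper), the setup $\mathbf{y}=f(\mathbf{Ax})$ can be instantiated with an example nonlinearity; the choice $f=|\cdot|$ and the dimensions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 120                                  # signal dimension, number of measurements
x = rng.standard_normal(n)                      # signal of interest
A = rng.standard_normal((m, n)) / np.sqrt(n)    # sensing matrix
f = np.abs                                      # example nonlinearity (phase-retrieval-style)
y = f(A @ x)                                    # measurement vector y = f(Ax)
```

Recovering `x` from `(A, y)` when `f` discards information (here, the sign) is what makes the inverse problem nonlinear and hard.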
no code implementations • NeurIPS 2021 • Junjie Ma, Ji Xu, Arian Maleki
We define a notion for the spikiness of the spectrum of $\mathbf{A}$ and show the importance of this measure in the performance of the EP.
no code implementations • 29 Jan 2021 • De-Min Li, Xi-Ruo Zhang, Ye Xing, Ji Xu
In this work, we analyze the four-body weak decays of doubly heavy baryons $\Xi_{cc}^{++}, \Xi_{cc}^+$, and $\Omega_{cc}^+$.
High Energy Physics - Phenomenology
no code implementations • 22 Sep 2020 • Daniel Hsu, Vidya Muthukumar, Ji Xu
The support vector machine (SVM) is a well-established classification method whose name refers to the particular training examples, called support vectors, that determine the maximum margin separating hyperplane.
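The defining property mentioned here — that only the support vectors determine the maximum-margin separator — can be illustrated with a deliberately simple 1-D sketch (an illustration of the concept, not an SVM solver):

```python
import numpy as np

def max_margin_threshold_1d(x_neg, x_pos):
    """Max-margin separating threshold for two separable 1-D classes:
    the midpoint of the closest opposing pair (the 'support vectors')."""
    return 0.5 * (x_neg.max() + x_pos.min())

x_neg = np.array([-5.0, -3.0, -1.0])    # class -1
x_pos = np.array([1.5, 4.0, 6.0])       # class +1
t = max_margin_threshold_1d(x_neg, x_pos)   # set by -1.0 and 1.5 alone

# Moving a non-support point far away leaves the separator unchanged.
x_neg_moved = np.array([-50.0, -3.0, -1.0])
t_moved = max_margin_threshold_1d(x_neg_moved, x_pos)
```

Only the two boundary points enter the solution; all other training examples could move (without crossing the margin) with no effect.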
no code implementations • ICLR 2021 • Shun-ichi Amari, Jimmy Ba, Roger Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
While second order optimizers such as natural gradient descent (NGD) often speed up optimization, their effect on generalization has been called into question.
no code implementations • NeurIPS 2020 • Denny Wu, Ji Xu
Finally, we determine the optimal weighting matrix $\mathbf{\Sigma}_w$ for both the ridgeless ($\lambda\to 0$) and optimally regularized ($\lambda = \lambda_{\rm opt}$) case, and demonstrate the advantage of the weighted objective over standard ridge regression and PCR.
no code implementations • 19 Jun 2019 • Jingyu Yang, Ji Xu, Kun Li, Yu-Kun Lai, Huanjing Yue, Jianzhi Lu, Hao Wu, Yebin Liu
This paper proposes a new method for simultaneous 3D reconstruction and semantic segmentation of indoor scenes.
no code implementations • NeurIPS 2019 • Ji Xu, Daniel Hsu
We study least squares linear regression over $N$ uncorrelated Gaussian features that are selected in order of decreasing variance.
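A hedged numerical sketch of this setup (illustrative dimensions and noise level; it reproduces the experimental setting, not the paper's theory): Gaussian features with decreasing variances, fit by least squares on the top-k features:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_train, n_test = 30, 40, 1000
variances = 1.0 / np.arange(1, N + 1)     # features ordered by decreasing variance
beta = rng.standard_normal(N)

def sample(n):
    X = rng.standard_normal((n, N)) * np.sqrt(variances)
    return X, X @ beta + 0.5 * rng.standard_normal(n)

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Fit least squares using only the first k (highest-variance) features.
test_risk = []
for k in (5, 10, 20):
    w, *_ = np.linalg.lstsq(X_tr[:, :k], y_tr, rcond=None)
    test_risk.append(np.mean((X_te[:, :k] @ w - y_te) ** 2))
```

Sweeping `k` traces out how out-of-sample risk depends on the number of selected features.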
no code implementations • 18 Mar 2019 • Mikhail Belkin, Daniel Hsu, Ji Xu
The "double descent" risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models.
no code implementations • 5 Feb 2019 • Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu
This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.
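As a hedged illustration of fast leave-one-out computation (the exact hat-matrix shortcut for ridge regression, a simpler relative of the ALO/AMP estimators studied in the paper, not the paper's own method):

```python
import numpy as np

def ridge_loocv(X, y, lam):
    """Exact LOOCV error for ridge regression via the hat matrix:
    the i-th LOO residual equals (y_i - yhat_i) / (1 - H_ii)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(2)
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
fast = ridge_loocv(X, y, lam=1.0)

# Brute-force check: refit n times, each with one row held out.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    w = np.linalg.solve(Xi.T @ Xi + 1.0 * np.eye(p), Xi.T @ yi)
    errs.append((y[i] - X[i] @ w) ** 2)
```

One fit replaces n refits, which is the kind of computational saving that motivates approximate leave-one-out techniques in high dimensions.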
no code implementations • NeurIPS 2018 • Ji Xu, Daniel Hsu, Arian Maleki
Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to find stationary points of the log-likelihood objective.
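A hedged sketch of the behavior described here — EM monotonically increases the log-likelihood but is only guaranteed a stationary point — for a balanced two-component Gaussian mixture with unit variances (a textbook instance, not the paper's setting):

```python
import numpy as np

def em_two_gaussians(x, mu0, mu1, iters=25):
    """EM for a balanced two-component unit-variance Gaussian mixture;
    returns estimated means and the log-likelihood after each iteration."""
    lls = []
    for _ in range(iters):
        # E-step: responsibilities of component 1.
        d0 = np.exp(-0.5 * (x - mu0) ** 2)
        d1 = np.exp(-0.5 * (x - mu1) ** 2)
        r = d1 / (d0 + d1)
        # M-step: responsibility-weighted means.
        mu0 = np.sum((1 - r) * x) / np.sum(1 - r)
        mu1 = np.sum(r * x) / np.sum(r)
        # Log-likelihood at the updated parameters (additive constant dropped).
        d0 = np.exp(-0.5 * (x - mu0) ** 2)
        d1 = np.exp(-0.5 * (x - mu1) ** 2)
        lls.append(np.sum(np.log(0.5 * d0 + 0.5 * d1)))
    return mu0, mu1, lls

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
mu0, mu1, lls = em_two_gaussians(x, mu0=-0.5, mu1=0.5)
```

The recorded log-likelihoods never decrease, which is EM's guarantee; whether the limit is the global maximizer is exactly the kind of question this line of work addresses.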
no code implementations • ICML 2018 • Junjie Ma, Ji Xu, Arian Maleki
We consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations.
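A hedged sketch of an $\ell_2$-regularized phaseless-recovery objective optimized by plain gradient descent (an illustrative objective and warm start, not the paper's algorithm or initialization):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, lam, step = 20, 200, 1e-3, 1e-3
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2                       # noiseless phaseless observations

def loss(x):
    """0.25 * mean((|a_i^T x|^2 - y_i)^2) + lam * ||x||^2."""
    return np.mean(((A @ x) ** 2 - y) ** 2) / 4 + lam * np.sum(x ** 2)

def grad(x):
    z = A @ x
    return A.T @ ((z ** 2 - y) * z) / m + 2 * lam * x

x = x_true + 0.1 * rng.standard_normal(n)   # stand-in for a warm start
losses = [loss(x)]
for _ in range(300):
    x = x - step * grad(x)
    losses.append(loss(x))
```

Because the measurements only constrain $|\mathbf{a}_i^\top\mathbf{x}|$, the loss is non-convex and has the global sign ambiguity $\mathbf{x} \leftrightarrow -\mathbf{x}$; the regularizer and a good start keep descent in a benign region.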
no code implementations • 25 Sep 2017 • Ji Xu, Guoyin Wang
We propose a sound assumption: neighboring data points are not in a peer-to-peer relation but in a partial-order relation induced by local density and the distance between the data, and the label of a center can be regarded as the contribution of its followers.
no code implementations • 17 Aug 2017 • Lu Huang, Jiasong Sun, Ji Xu, Yi Yang
Long Short-Term Memory (LSTM) is the primary recurrent neural network architecture for acoustic modeling in automatic speech recognition systems.
Automatic Speech Recognition (ASR) +1
no code implementations • Neurocomputing 2017 • Di Wu, Mingsheng Shang, Xin Luo, Ji Xu, Huyong Yan, Weihui Deng, Guoyin Wang
Having a multitude of unlabeled data and few labeled examples is a common problem in many practical applications.
no code implementations • NeurIPS 2016 • Ji Xu, Daniel Hsu, Arian Maleki
Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models.
no code implementations • 12 Jun 2015 • Ji Xu, Guoyin Wang
The LT has two major advantages. One is dramatically reducing the running time of assigning non-center data points to their cluster IDs, because the assignment process reduces to disconnecting the link from each center to its parent.