no code implementations • 4 Mar 2024 • Qiao Wang, Ralph Rose, Naho Orita, Ayaka Sugawara
The VocaTT (vocabulary teaching and training) engine is written in Python and comprises three basic steps: pre-processing target word lists, generating sentences and candidate word options using GPT, and finally selecting suitable word options.
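A minimal Python sketch of that three-step pipeline; the helper names and the stubbed GPT call are hypothetical illustrations, not the engine's actual API.

```python
# Sketch of a VocaTT-style pipeline: preprocess -> generate with GPT -> select.
# call_gpt is a placeholder; swap in a real LLM client in practice.

def call_gpt(prompt: str) -> str:
    """Placeholder for the GPT request."""
    return f"[model output for: {prompt!r}]"

def preprocess_word_list(raw_words):
    """Step 1: normalize and deduplicate the target word list."""
    return sorted({w.strip().lower() for w in raw_words if w.strip()})

def generate_items(word, n_options=4):
    """Step 2: generate a carrier sentence and candidate word options."""
    sentence = call_gpt(f"Write one example sentence using '{word}'.")
    options = [call_gpt(f"Distractor {i} for '{word}'.") for i in range(n_options)]
    return sentence, options

def select_options(options, is_suitable=lambda opt: True):
    """Step 3: keep only options that pass a suitability check
    (e.g., part-of-speech match or learner-level filtering)."""
    return [opt for opt in options if is_suitable(opt)]

for word in preprocess_word_list(["Resilient", "resilient ", "mitigate"]):
    sentence, options = generate_items(word)
    print(word, sentence, select_options(options))
```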
no code implementations • 28 Feb 2024 • Qiao Wang, Zheng Yuan
In this study, we evaluated the performance of the state-of-the-art sequence tagging grammatical error detection and correction model (SeqTagger) using writing samples from Japanese university students.
no code implementations • 23 Nov 2023 • Xiang Zhang, Qiao Wang
Traditional fair spectral clustering (FSC) methods consist of two consecutive stages, i.e., performing fair spectral embedding on a given graph and then applying $k$-means to obtain discrete cluster labels.
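A minimal numpy/scikit-learn sketch of this conventional two-stage pipeline, using the nullspace construction from the fair-spectral-clustering literature (Kleindessner et al.) for the embedding stage; the function name and the group-balance encoding are our assumptions.

```python
import numpy as np
from scipy.linalg import null_space, eigh
from sklearn.cluster import KMeans

def fair_spectral_clustering(W, groups, k):
    """Two-stage FSC: fair spectral embedding, then k-means rounding.

    W: (n, n) symmetric adjacency; groups: (n,) integer labels; k: #clusters.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W              # unnormalized Laplacian
    # Fairness constraint: embedding orthogonal to centered group indicators,
    # enforcing proportional group representation in every cluster.
    h = groups.max() + 1
    F = np.zeros((n, h - 1))
    for g in range(h - 1):
        F[:, g] = (groups == g).astype(float) - np.mean(groups == g)
    Z = null_space(F.T)                         # basis of the fair subspace
    # Stage 1: spectral embedding restricted to the fair subspace.
    _, vecs = eigh(Z.T @ L @ Z)
    H = Z @ vecs[:, :k]                         # (n, k) fair embedding
    # Stage 2: k-means on the embedding to get discrete labels.
    return KMeans(n_clusters=k, n_init=10).fit_predict(H)
```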
no code implementations • 12 Nov 2023 • Hanfeng Cai, Haiyang Liu, Heyang Sun, Qiao Wang
This paper addresses the problem of improving the operating efficiency of a system of multiple modules connected in parallel, and proposes a dynamic load-allocation algorithm based on the equal incremental cost principle.
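A minimal sketch of equal-incremental-cost allocation under assumed quadratic cost curves, solving for the common marginal cost $\lambda$ by bisection; the cost model, limits, and numbers are illustrative, not the paper's.

```python
import numpy as np

def equal_incremental_cost(a, b, p_min, p_max, demand, tol=1e-8):
    """Allocate `demand` across parallel modules with quadratic costs
    C_i(P) = a_i P^2 + b_i P, so that marginal costs dC_i/dP = 2 a_i P + b_i
    equal a common lambda wherever module limits are not binding."""
    def allocation(lam):
        p = (lam - b) / (2 * a)          # P_i solving 2 a_i P_i + b_i = lambda
        return np.clip(p, p_min, p_max)  # respect module operating limits

    lo = (2 * a * p_min + b).min()       # lambda range bracketing the demand
    hi = (2 * a * p_max + b).max()
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if allocation(lam).sum() < demand:
            lo = lam                     # total output too low: raise lambda
        else:
            hi = lam
    return allocation(0.5 * (lo + hi))

# Example: three modules sharing a 25 kW demand.
a = np.array([0.010, 0.012, 0.008])
b = np.array([0.5, 0.4, 0.6])
print(equal_incremental_cost(a, b, np.zeros(3), np.full(3, 15.0), 25.0))
```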
no code implementations • 17 Jan 2023 • Xiang Zhang, Qiao Wang
We consider the problem of inferring graph topology from smooth graph signals in a novel but practical scenario where data are distributed across clients and prohibited from leaving them due to factors such as privacy concerns.
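For context, "smooth graph signals" are usually quantified with the Laplacian quadratic form; a standard centralized objective from the smoothness-based graph learning literature (which a federated method must solve without pooling clients' data, and which may differ from the paper's exact formulation) is

```latex
\min_{L \in \mathcal{L}} \ \operatorname{tr}\!\left(X^{\top} L X\right) + f(L),
\qquad
\operatorname{tr}\!\left(X^{\top} L X\right)
  = \tfrac{1}{2} \sum_{i,j} W_{ij}\,\lVert x_i - x_j \rVert_2^2,
```

where $X$ stacks the observed signals on the nodes, $L$ ranges over valid graph Laplacians $\mathcal{L}$, $W$ is the corresponding adjacency, and $f(L)$ is a regularizer preventing trivial solutions.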
no code implementations • 1 Jan 2023 • Qiao Wang
This study examines the convexification version of the backward differential flow algorithm for the global minimization of polynomials, introduced by O. Arikan \textit{et al.} in \cite{ABK}.
no code implementations • 11 Oct 2021 • Xiang Zhang, Qiao Wang
Unlike many existing chain-structured methods, in which priors such as temporal homogeneity can only describe the variation between two consecutive graphs, we propose a structure named \emph{temporal graph} to characterize the true underlying temporal relations.
no code implementations • 14 Sep 2021 • Ying Wang, Tingzhen Liu, Zepeng Bu, YuHui Huang, Lizhong Gao, Qiao Wang
In large-scale image retrieval, many indexing methods have been proposed to narrow the search scope.
no code implementations • 10 May 2021 • Xiang Zhang, Yinfei Xu, Qinghe Liu, Zhicheng Liu, Jian Lu, Qiao Wang
To this end, we propose a graph learning framework using Wasserstein distributionally robust optimization (WDRO) which handles uncertainty in data by defining an uncertainty set on distributions of the observed data.
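As a hedged reference point, the generic WDRO template reads

```latex
\min_{L \in \mathcal{L}} \;
\sup_{Q :\, W_p(Q, \hat{P}_n) \le \epsilon} \;
\mathbb{E}_{x \sim Q}\big[\, \ell(L; x) \,\big],
```

where $\hat{P}_n$ is the empirical distribution of the observed signals, $W_p$ is the order-$p$ Wasserstein distance, $\epsilon$ is the radius of the uncertainty set, and $\ell$ is a graph-learning loss (for example, the Laplacian smoothness term above); the paper's exact loss and constraint set may differ.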
1 code implementation • 4 May 2020 • Zhicheng Liu, Fabio Miranda, Weiting Xiong, Junyan Yang, Qiao Wang, Claudio T. Silva
Then, an attention mechanism is proposed, based on the graph attention network (GAT) framework, to capture spatial correlations and encode geographic contextual information into the embedding space.
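A minimal numpy sketch of the standard GAT attention computation (Veličković et al.) that such a mechanism builds on; shapes and names are illustrative, not this paper's exact architecture.

```python
import numpy as np

def gat_attention(H, A, Wt, a, alpha=0.2):
    """One GAT head: e_ij = LeakyReLU(a^T [W h_i || W h_j]), softmax-normalized
    over each node's neighbors, then used to aggregate neighbor features.

    H: (n, f) node features    A: (n, n) adjacency with self-loops
    Wt: (f, d) shared weights  a: (2*d,) attention vector
    """
    Z = H @ Wt                              # projected features, (n, d)
    d = Z.shape[1]
    src = Z @ a[:d]                         # a_1^T z_i, shape (n,)
    dst = Z @ a[d:]                         # a_2^T z_j, shape (n,)
    E = src[:, None] + dst[None, :]         # e_ij before activation
    E = np.where(E > 0, E, alpha * E)       # LeakyReLU
    E = np.where(A > 0, E, -np.inf)         # keep only real neighbors
    E -= E.max(axis=1, keepdims=True)       # stable row-wise softmax
    att = np.exp(E)
    att /= att.sum(axis=1, keepdims=True)
    return att @ Z                          # attention-weighted aggregation
```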
no code implementations • 9 Aug 2019 • Zheng Wang, Qiao Wang, Tingzhang Zhao, Xiaojun Ye
Feature selection, an effective technique for dimensionality reduction, plays an important role in many machine learning systems.
1 code implementation • CVPR 2019 • Suhas Lohit, Qiao Wang, Pavan Turaga
We call this a temporal transformer network (TTN).
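The core operation of a TTN is a differentiable, order-preserving time warp applied before classification. Below is a minimal numpy sketch of one way to realize such a warp (softmax over predicted logits gives positive increments, their cumulative sum gives a valid warping of [0, 1], and the sequence is resampled at the warped times); the increment-predicting network is omitted, and this is not necessarily the paper's exact parameterization.

```python
import numpy as np

def apply_time_warp(x, logits):
    """Warp a 1-D sequence with a monotonic warping function built from
    predicted increments, then resample it at the warped time stamps."""
    T = len(x)
    inc = np.exp(logits - np.max(logits))
    inc /= inc.sum()                                      # positive, sums to 1
    gamma = np.concatenate(([0.0], np.cumsum(inc)[:-1]))  # gamma(0)=0, increasing
    grid = np.linspace(0.0, 1.0, T)                       # original uniform grid
    return np.interp(gamma, grid, x)                      # x at warped times

# Flat logits give a near-identity warp; skewed logits stretch/compress time.
t = np.linspace(0.0, 1.0, 100)
x = np.sin(2.0 * np.pi * t)
warped = apply_time_warp(x, logits=np.linspace(-1.0, 1.0, 100))
```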
no code implementations • 30 May 2019 • Ning Wang, Xianhan Zeng, Renjie Xie, Zefei Gao, Yi Zheng, Ziran Liao, Junyan Yang, Qiao Wang
Furthermore, we draw a series of heuristic conclusions from the intrinsic information hidden in true images.
no code implementations • 23 Feb 2019 • Renjie Xie, Yanzhi Chen, Yan Wo, Qiao Wang
Deep neural networks (DNNs) have become the de facto standard for today's biometric recognition solutions.
no code implementations • 19 Jul 2017 • Qiao Wang, Zheng Wang, Xiaojun Ye
LINE [1] is an efficient network embedding method that has proven effective on large-scale undirected, directed, and/or weighted networks.
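A minimal numpy sketch of LINE's first-order proximity objective trained with SGD and negative sampling; the hyperparameters and toy graph are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def line_first_order(edges, n_nodes, dim=16, lr=0.025, n_neg=5, epochs=50, seed=0):
    """LINE (1st-order proximity): for each edge (i, j), maximize
    log sigma(u_i . u_j); via negative sampling, push n_neg random pairs apart."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_nodes, dim))
    for _ in range(epochs):
        for i, j in edges:
            ui, uj = U[i].copy(), U[j].copy()
            g = 1.0 - sigmoid(ui @ uj)      # gradient of log sigma(u_i . u_j)
            U[i] += lr * g * uj
            U[j] += lr * g * ui
            for k in rng.integers(0, n_nodes, size=n_neg):
                g = -sigmoid(U[i] @ U[k])   # gradient of log sigma(-u_i . u_k)
                uk = U[k].copy()
                U[k] += lr * g * U[i]
                U[i] += lr * g * uk
    return U

# Example: a tiny triangle plus a pendant node.
emb = line_first_order([(0, 1), (1, 2), (0, 2), (2, 3)], n_nodes=4)
```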
no code implementations • CVPR 2016 • Adrien Gaidon, Qiao Wang, Yohann Cabon, Eleonora Vig
We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance.