no code implementations • 18 Apr 2024 • Xiao Wang, Ke Tang, Xingyuan Dai, Jintao Xu, Quancheng Du, Rui Ai, Yuxiao Wang, Weihao Gu
To effectively assess the risks prevailing in the vicinity of AVs in social interactive traffic scenarios and achieve safe autonomous driving, this article proposes a social-suitable and safety-sensitive trajectory planning (S4TP) framework.
1 code implementation • 27 Feb 2024 • Wenqi Zhang, Ke Tang, Hai Wu, Mengna Wang, Yongliang Shen, Guiyang Hou, Zeqi Tan, Peng Li, Yueting Zhuang, Weiming Lu
Large Language Models exhibit robust problem-solving capabilities for diverse tasks.
1 code implementation • 26 Feb 2024 • Tianyu Zhang, Chengbin Hou, Rui Jiang, Xuegong Zhang, Chenghu Zhou, Ke Tang, Hairong Lv
For the NIE problem, LICAP adopts a novel sampling strategy, called top-nodes-preferred hierarchical sampling, that first groups all nodes of interest into a top bin and a non-top bin based on node importance scores, and then divides the nodes within the top bin into several finer bins, again based on the scores.
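The grouping step described above can be sketched as follows; the top fraction and the number of finer bins are illustrative assumptions, not LICAP's actual settings.

```python
def hierarchical_bins(scores, top_frac=0.2, n_fine_bins=3):
    """Split node ids into a non-top bin and several finer top bins by score.

    scores: dict mapping node id -> importance score.
    top_frac and n_fine_bins are illustrative hyperparameters; LICAP's
    actual settings may differ.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    top, non_top = ranked[:n_top], ranked[n_top:]
    # Subdivide the top bin into roughly equal finer bins, best scores first.
    size = max(1, -(-len(top) // n_fine_bins))  # ceiling division
    fine_bins = [top[i:i + size] for i in range(0, len(top), size)]
    return fine_bins, non_top
```

Every node in any finer top bin then scores at least as high as every node in the non-top bin, which is what lets the sampler prefer top nodes.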
1 code implementation • 22 Jan 2024 • Pengyi Li, Jianye Hao, Hongyao Tang, Xian Fu, Yan Zheng, Ke Tang
Specifically, we systematically summarize recent advancements in relevant algorithms and identify three primary research directions: EA-assisted optimization of RL, RL-assisted optimization of EA, and synergistic optimization of EA and RL.
no code implementations • 4 Dec 2023 • Hui Ouyang, Cheng Chen, Ke Tang
Our approach significantly improves the scalability and accuracy of 2-TBN structure learning.
no code implementations • 2 Dec 2023 • Muyao Zhong, Shengcai Liu, Bingdong Li, Haobo Fu, Ke Tang, Peng Yang
With this advantage, this paper is able, for the first time, to report results of solving 1000-dimensional TSPs by training a PtrNet on instances of the same dimensionality, which strongly suggests that scaling up the training instances is necessary to improve the performance of PtrNet on higher-dimensional COPs.
1 code implementation • 29 Oct 2023 • Shengcai Liu, Caishun Chen, Xinghua Qu, Ke Tang, Yew-Soon Ong
Specifically, in each generation of the evolutionary search, LMEA instructs the LLM to select parent solutions from the current population and to perform crossover and mutation to generate offspring solutions.
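The per-generation loop can be sketched roughly as below; `llm` is a hypothetical placeholder callable standing in for the actual LLM prompting in LMEA, and the prompt text is purely illustrative.

```python
def lmea_generation(population, fitness, llm, n_offspring=4):
    """One generation of LLM-driven evolutionary search (rough sketch).

    `llm` is a hypothetical placeholder: given a prompt describing the
    current population, it returns one offspring solution. In LMEA the
    LLM itself chooses the parents and applies crossover/mutation; this
    sketch only mirrors that control flow around the LLM call.
    """
    offspring = []
    for _ in range(n_offspring):
        prompt = ("Population (lower fitness is better): "
                  f"{population}. Select two parents, then apply "
                  "crossover and mutation to produce one offspring.")
        offspring.append(llm(prompt))
    # Survivor selection: keep the best |population| individuals overall.
    survivors = sorted(population + offspring, key=fitness)
    return survivors[:len(population)]
```

In practice `llm` would wrap an API call and parse the model's textual reply back into a solution; here any callable with that interface works.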
no code implementations • 15 Oct 2023 • Jiahao Wu, Qijiong Liu, Hengchang Hu, Wenqi Fan, Shengcai Liu, Qing Li, Xiao-Ming Wu, Ke Tang
Notably, the condensation paradigm of this method is forward (one-pass) and free from iterative optimization on the synthesized dataset.
no code implementations • 13 Oct 2023 • Dan-Xuan Liu, Yu-Ran Gu, Chao Qian, Xin Mu, Ke Tang
In this paper, we propose a new framework MR-EMO based on Evolutionary Multi-objective Optimization, which reformulates Migrant Resettlement as a bi-objective optimization problem that maximizes the expected number of employed migrants and minimizes the number of dispatched migrants simultaneously, and employs a Multi-Objective Evolutionary Algorithm (MOEA) to solve the bi-objective problem.
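A minimal sketch of the bi-objective evaluation, assuming a toy deterministic employment model (the paper maximizes the *expected* number of employed migrants; the capacities and the assignment encoding here are illustrative):

```python
def bi_objectives(assignment, capacities):
    """Evaluate a migrant-to-location assignment on MR-EMO's two objectives.

    assignment: list where assignment[i] is the location index for migrant i,
    or None if migrant i is not dispatched. capacities[j] caps how many
    migrants location j can employ. A toy deterministic stand-in for the
    paper's expected number of employed migrants.
    """
    dispatched = [loc for loc in assignment if loc is not None]
    employed = 0
    used = {}
    for loc in dispatched:
        used[loc] = used.get(loc, 0) + 1
        if used[loc] <= capacities[loc]:
            employed += 1  # within capacity: counted as employed
    # Maximize employed, minimize dispatched; an MOEA such as NSGA-II
    # would trade these two values off across a population.
    return employed, len(dispatched)
```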
no code implementations • 2 Oct 2023 • Jiahao Wu, Wenqi Fan, Shengcai Liu, Qijiong Liu, Rui He, Qing Li, Ke Tang
However, applying existing approaches to condense recommendation datasets is impractical due to the following challenges: (i) sampling-based methods are inadequate for addressing the long-tailed distribution problem; (ii) synthesizing-based methods are not applicable due to the discreteness of interactions and the large size of recommendation datasets; (iii) neither of them addresses the recommendation-specific issue of false negative items, where items of potential user interest are incorrectly sampled as negatives owing to insufficient exposure.
no code implementations • 22 Sep 2023 • Jiahao Wu, Wenqi Fan, Shengcai Liu, Qijiong Liu, Qing Li, Ke Tang
To model the compatibility between user intents and item properties, we design the user-item co-clustering module, maximizing the mutual information of co-clusters of users and items.
no code implementations • 27 Aug 2023 • Wenjie Chen, Shengcai Liu, Yew-Soon Ong, Ke Tang
Moreover, given a real-time constraint of one minute, the NIE-based method can solve IBM problems with up to hundreds of thousands of nodes, which is at least one order of magnitude larger than what can be solved by existing methods.
2 code implementations • 26 Jun 2023 • Xuanfeng Li, Shengcai Liu, Jin Wang, Xiao Chen, Yew-Soon Ong, Ke Tang
In particular, we focus on the practical scenario of CCMCKP, where the probability distributions of random weights are unknown but only sample data is available.
no code implementations • 20 Jun 2023 • Kai Feng, Han Hong, Ke Tang, Jingyuan Wang
This paper proposes a statistical framework with which artificial intelligence can improve human decision making.
no code implementations • 19 Jun 2023 • Rui He, Zeyu Dai, Shan He, Ke Tang
Active Learning (AL) presents an encouraging solution to this issue by annotating a smaller number of highly informative instances, thereby reducing the labeling effort.
1 code implementation • 18 May 2023 • Ning Lu, Shengcai Liu, Rui He, Qi Wang, Yew-Soon Ong, Ke Tang
Large language models (LLMs) have shown remarkable performance in various tasks and have been extensively utilized by the public.
no code implementations • 4 May 2023 • Rui He, Shengcai Liu, Jiahao Wu, Shan He, Ke Tang
Multi-domain learning (MDL) refers to simultaneously constructing a model or a set of models on datasets collected from different domains.
1 code implementation • 12 Mar 2023 • Zhenwei Zhang, Haorui Yan, Ke Tang, Yuping Duan
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset, which contains different types of degradation to cover the various underwater environments.
no code implementations • 6 Feb 2023 • Ning Lu, Shengcai Liu, Zhirui Zhang, Qi Wang, Haifeng Liu, Ke Tang
Our comprehensive experiments reveal that in approximately 90% of cases, word-level attacks lead to the generation of examples in which the frequency of $n$-grams decreases, a tendency we term the $n$-gram Frequency Descend ($n$-FD).
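The $n$-FD property can be checked with a simple frequency comparison between the original and the adversarial text; the corpus below is a toy stand-in for a real frequency estimate.

```python
from collections import Counter

def has_ngram_freq_descend(corpus, original, adversarial, n=2):
    """Return True if the adversarial text's mean n-gram frequency is lower
    than the original's, i.e. it exhibits n-gram Frequency Descend (n-FD).

    corpus: list of tokenized sentences used to estimate n-gram counts;
    a toy stand-in for counting over a real corpus.
    """
    counts = Counter()
    for sent in corpus:
        counts.update(tuple(sent[i:i + n]) for i in range(len(sent) - n + 1))

    def mean_freq(tokens):
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        return sum(counts[g] for g in grams) / max(1, len(grams))

    return mean_freq(adversarial) < mean_freq(original)
```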
no code implementations • 31 Jan 2023 • Lan Tang, Xiaxi Li, Jinyuan Zhang, Guiying Li, Peng Yang, Ke Tang
The training process is accelerated by up to 7x on the tested games, compared to its counterpart without the surrogate and PE.
1 code implementation • 23 Nov 2022 • Shengcai Liu, Fu Peng, Ke Tang
Attack Ensemble (AE), which combines multiple attacks together, provides a reliable way to evaluate adversarial robustness.
no code implementations • 13 Oct 2022 • Shaohui Peng, Xing Hu, Rui Zhang, Ke Tang, Jiaming Guo, Qi Yi, Ruizhi Chen, Xishan Zhang, Zidong Du, Ling Li, Qi Guo, Yunji Chen
To address this issue, we propose CDHRL, a causality-driven hierarchical reinforcement learning framework, leveraging a causality-driven discovery instead of a randomness-driven exploration to effectively build high-quality hierarchical structures in complicated environments.
no code implementations • 22 Sep 2022 • Shengcai Liu, Yu Zhang, Ke Tang, Xin Yao
Hopefully, this work would help with a better understanding of the strengths and weaknesses of NCO and provide a comprehensive evaluation protocol for further benchmarking NCO approaches in comparison to other approaches.
1 code implementation • 18 Aug 2022 • Jiahao Wu, Wenqi Fan, Jingfan Chen, Shengcai Liu, Qing Li, Ke Tang
In this work, to address such a limitation, we propose a novel Disentangled contrastive learning framework for social Recommendations (DcRec).
no code implementations • 8 Aug 2022 • Darija Barak, Edoardo Gallo, Ke Rong, Ke Tang, Wei Du
On 11 January 2020, the first COVID-19-related death was confirmed in Wuhan, Hubei.
no code implementations • 1 Aug 2022 • Lang Feng, Wenjian Liu, Chuliang Guo, Ke Tang, Cheng Zhuo, Zhongfeng Wang
To improve design quality while saving cost, design automation for neural network accelerators has been proposed, in which design space exploration algorithms automatically search for an optimized accelerator design within a design space.
1 code implementation • 11 Jun 2022 • Wenjian Luo, Hongwei Zhang, Linghao Kong, Zhijian Chen, Ke Tang
The security issues in DNNs, such as adversarial examples, have attracted much attention.
1 code implementation • 4 Jun 2022 • Zeyu Dai, Shengcai Liu, Ke Tang, Qing Li
In this paper, we propose to restrict the perturbations to a small salient region to generate adversarial examples that can hardly be perceived.
1 code implementation • 17 Apr 2022 • Zhenwei Zhang, Ke Chen, Ke Tang, Yuping Duan
In this paper, we propose fast multi-grid algorithms for minimizing both mean curvature and Gaussian curvature energy functionals without sacrificing accuracy for efficiency.
1 code implementation • 23 Dec 2021 • Fu Peng, Shengcai Liu, Ning Lu, Ke Tang
This work considers a challenging Deep Neural Network (DNN) quantization task that seeks to train quantized DNNs without involving any full-precision operations.
1 code implementation • 2 Nov 2021 • Shengcai Liu, Ning Lu, Wenjing Hong, Chao Qian, Ke Tang
The field of adversarial textual attack has significantly grown over the last few years, where the commonly considered objective is to craft adversarial examples (AEs) that can successfully fool the target model.
no code implementations • 29 Sep 2021 • Zeyu Dai, Shengcai Liu, Ke Tang, Qing Li
To address this issue, in this paper we propose to use segmentation priors for black-box attacks such that the perturbations are limited in the salient region.
no code implementations • 29 Sep 2021 • Qi Yi, Jiaming Guo, Rui Zhang, Shaohui Peng, Xing Hu, Xishan Zhang, Ke Tang, Zidong Du, Qi Guo, Yunji Chen
Deep Reinforcement Learning (deep RL) has been successfully applied to solve various decision-making problems in recent years.
no code implementations • 21 Sep 2021 • Lin William Cong, Ke Tang, Bing Wang, Jingyuan Wang
We build a deep-learning-based SEIR-AIM model integrating the classical Susceptible-Exposed-Infectious-Removed epidemiology model with forecast modules of infection, community mobility, and unemployment.
no code implementations • 6 Sep 2021 • Shengcai Liu, Ning Lu, Cheng Chen, Ke Tang
Over the past few years, various word-level textual attack approaches have been proposed to reveal the vulnerability of deep neural networks used in natural language processing.
no code implementations • 24 Aug 2021 • Lin William Cong, Xi Li, Ke Tang, Yang Yang
We introduce systematic tests exploiting robust statistical and behavioral patterns in trading to detect fake transactions on 29 cryptocurrency exchanges.
no code implementations • 20 Aug 2021 • Lin William Cong, Ke Tang, Jingyuan Wang, Yang Zhang
We predict asset returns and measure risk premia using a prominent technique from artificial intelligence -- deep sequence modeling.
no code implementations • 5 Aug 2021 • Qi Yang, Peng Yang, Ke Tang
This paper proposes a framework of Active Reinforcement Learning (ARL) over MDPs to improve generalization efficiency in a limited resource by instance selection.
1 code implementation • 25 Jun 2021 • Rui He, Shengcai Liu, Shan He, Ke Tang
Active learning (AL) can be utilized in MDL to reduce the labeling effort by only using the most informative data.
3 code implementations • 30 May 2021 • Chengbin Hou, Guoji Fu, Peng Yang, Zheng Hu, Shan He, Ke Tang
It is natural to ask if existing DNE methods can perform well for an input dynamic network without smooth changes.
no code implementations • 20 Apr 2021 • Chao Qian, Dan-Xuan Liu, Chao Feng, Ke Tang
Evolutionary algorithms (EAs) are general-purpose optimization algorithms, inspired by natural evolution.
no code implementations • 20 Jan 2021 • Wenjie Chen, Shengcai Liu, Ke Tang
An unbiased estimator of the gradient of the new acquisition function is derived to implement the $c$-KG approach.
no code implementations • 28 Dec 2020 • Yu Zhang, Peter Tiňo, Aleš Leonardis, Ke Tang
Along with the great success of deep neural networks, there is also growing concern about their black-box nature.
1 code implementation • 12 Nov 2020 • Shengcai Liu, Ke Tang, Xin Yao
The Vehicle Routing Problem with Simultaneous Pickup-Delivery and Time Windows (VRPSPDTW) has attracted much research interest in the last decade, due to its wide application in modern logistics.
no code implementations • 15 Oct 2020 • Xiaojian Wang, Jingyuan Wang, Ke Tang
For global explanation, frequency-based and out-of-bag based methods are proposed to extract important features in the neural network decision.
no code implementations • 8 Sep 2020 • Hu Zhang, Peng Yang, Yanglong Yu, Mingjia Li, Ke Tang
Evolutionary algorithms (EAs) have been successfully applied to optimize the policies for Reinforcement Learning (RL) tasks due to their exploration ability.
2 code implementations • 5 Aug 2020 • Chengbin Hou, Han Zhang, Shan He, Ke Tang
The main and common objective of Dynamic Network Embedding (DNE) is to efficiently update node embeddings while preserving network topology at each time step.
no code implementations • 1 Jul 2020 • Ke Tang, Shengcai Liu, Peng Yang, Xin Yao
In the context of heuristic search, such a paradigm could be implemented as configuring the parameters of a parallel algorithm portfolio (PAP) based on a set of training problem instances, which is often referred to as PAP construction.
no code implementations • 9 Mar 2020 • Jingyuan Wang, Ke Tang, Kai Feng, Xin Li, Weifeng Lv, Kun Chen, Fei Wang
Primary outcome measures: Regression analysis of the impact of temperature and relative humidity on the effective reproductive number (R value).
no code implementations • NeurIPS 2019 • Yunwen Lei, Peng Yang, Ke Tang, Ding-Xuan Zhou
In this paper, we propose a theoretically sound strategy to select an individual iterate of the vanilla SCMD, which is able to achieve optimal rates for both convex and strongly convex problems in a non-smooth learning setting.
no code implementations • NeurIPS 2019 • Liangpeng Zhang, Ke Tang, Xin Yao
We argue that explicit planning for exploration can help alleviate such a problem, and propose a Value Iteration for Exploration Cost (VIEC) algorithm which computes the optimal exploration scheme by solving an augmented MDP.
no code implementations • 19 Nov 2019 • Shengcai Liu, Ke Tang, Yunwen Lei, Xin Yao
Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress.
no code implementations • 16 Oct 2019 • Peng Yang, Qi Yang, Ke Tang, Xin Yao
Empirical results show that the significant advantages of NCS over the compared state-of-the-art methods can be largely attributed to its effective parallel exploration ability.
no code implementations • 31 Jul 2019 • Xiaofen Lu, Ke Tang, Stefan Menzel, Xin Yao
In this paper, a new framework of employing EAs in the context of dynamic optimization is explored.
no code implementations • 28 Jul 2019 • Chao Bian, Chao Qian, Yang Yu, Ke Tang
Sampling is a popular strategy that evaluates the objective multiple times and employs the mean of these evaluation results as an estimate of the objective value.
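The sampling strategy just described is a few lines of code; `noisy_eval` below is a hypothetical callable returning one noisy measurement of the true objective.

```python
import random

def sampled_fitness(individual, noisy_eval, k=10, rng=None):
    """Estimate a noisy objective by averaging k independent evaluations.

    noisy_eval(x, rng) is assumed to return one noisy measurement of x's
    true objective value; the mean of k samples reduces the variance of
    the estimate by a factor of k.
    """
    rng = rng or random.Random()
    return sum(noisy_eval(individual, rng) for _ in range(k)) / k
```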
2 code implementations • arXiv 2019 • Chengbin Hou, Han Zhang, Ke Tang, Shan He
Dynamic network embedding aims to learn low dimensional embeddings for unseen and seen nodes by using any currently available snapshots of a dynamic network.
no code implementations • 24 Jul 2019 • Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong
Recent years have witnessed the successful marriage of finance innovations and AI techniques in various finance applications including quantitative trading (QT).
no code implementations • 17 Jun 2019 • Chao Bian, Chao Qian, Ke Tang, Yang Yu
Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties.
no code implementations • 5 May 2019 • Kai Feng, Han Hong, Ke Tang, Jingyuan Wang
Our theoretical discussion is illustrated in the context of a large data set of pregnancy outcomes and doctor diagnosis from the Pre-Pregnancy Checkups of reproductive age couples in Henan Province provided by the Chinese Ministry of Health.
no code implementations • 3 Feb 2019 • Yunwen Lei, Ting Hu, Guiying Li, Ke Tang
While the behavior of SGD is well understood in the convex learning setting, the existing theoretical results for SGD applied to nonconvex objective functions are far from mature.
no code implementations • 6 Dec 2018 • Peng Yang, Ke Tang, Xin Yao
Large-scale optimization problems that involve thousands of decision variables have extensively arisen from various industrial areas.
no code implementations • NeurIPS 2018 • Yunwen Lei, Ke Tang
We apply the derived computational error bounds to study the generalization performance of multi-pass stochastic gradient descent (SGD) in a non-parametric setting.
1 code implementation • 28 Nov 2018 • Chengbin Hou, Shan He, Ke Tang
Attributed networks are ubiquitous since a network often comes with auxiliary attribute information, e.g., a social network with user profiles.
no code implementations • 16 Oct 2018 • Yibo Zhang, Chao Qian, Ke Tang
Under a convex polytope constraint, we prove that LDGM can achieve a $(1-e^{-\beta}-\epsilon)$-approximation guarantee after $O(1/\epsilon)$ iterations, which is the same as the best previous gradient-based algorithm.
no code implementations • 11 Oct 2018 • Chao Qian, Chao Bian, Yang Yu, Ke Tang, Xin Yao
In noisy evolutionary optimization, sampling is a common strategy to deal with noise.
no code implementations • 17 Apr 2018 • Shengcai Liu, Ke Tang, Xin Yao
Simultaneously utilizing several complementary solvers is a simple yet effective strategy for solving computationally hard problems.
no code implementations • NeurIPS 2017 • Liangpeng Zhang, Ke Tang, Xin Yao
Under/overestimation of state/action values are harmful for reinforcement learning agents.
no code implementations • NeurIPS 2017 • Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, Zhi-Hua Zhou
The problem of selecting the best $k$-element subset from a universe is involved in many applications.
no code implementations • 20 Nov 2017 • Chao Qian, Yang Yu, Ke Tang, Xin Yao, Zhi-Hua Zhou
To provide a general theoretical explanation of the behavior of EAs, it is desirable to study their performance on general classes of combinatorial optimization problems.
no code implementations • 2 Nov 2017 • Chao Qian, Chao Bian, Wu Jiang, Ke Tang
We analyze the running time of the (1+1)-EA solving OneMax and LeadingOnes under bit-wise noise for the first time, and derive the ranges of the noise level for polynomial and super-polynomial running time bounds.
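The analyzed setting can be simulated in a few lines: bit-wise noise flips each bit independently with probability `noise_p` before the fitness is computed. Problem size, noise level, and evaluation budget below are illustrative; the simulation reports the true (noise-free) OneMax value of the final solution.

```python
import random

def one_plus_one_ea_onemax(n=20, noise_p=0.05, max_evals=5000, seed=0):
    """(1+1)-EA on OneMax under bit-wise noise (toy simulation).

    Each evaluation flips every bit independently with probability noise_p
    before counting ones, so the acceptance decision is based on noisy
    fitness values, as in the setting analyzed above.
    """
    rng = random.Random(seed)

    def noisy_onemax(x):
        return sum(b ^ (rng.random() < noise_p) for b in x)

    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_evals):
        # Standard bit mutation: flip each bit with probability 1/n.
        y = [b ^ (rng.random() < 1 / n) for b in x]
        if noisy_onemax(y) >= noisy_onemax(x):  # comparison uses noisy values
            x = y
    return sum(x)  # true (noise-free) OneMax value of the final solution
```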
no code implementations • 3 Aug 2017 • Jinyuan Zhang, Aimin Zhou, Ke Tang, Guixu Zhang
Finally it uses the classifier to filter the unpromising candidate offspring solutions and choose a promising one from the generated candidate offspring set for each parent solution.
no code implementations • 12 Jun 2017 • Bingshui Da, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Zexuan Zhu, Chuan-Kang Ting, Ke Tang, Xin Yao
In this report, we suggest nine test problems for multi-task single-objective optimization (MTSOO), each of which consists of two single-objective optimization tasks that need to be solved simultaneously.
no code implementations • 29 Mar 2017 • Shengcai Liu, Ke Tang, Xin Yao
The idea behind LiangYi is to improve the population-based solver by training it (with the training module) on those instances (discovered by the sampling module) on which it performs badly, while retaining its good performance on previous instances.
no code implementations • 18 Mar 2017 • Zhi-Zhong Liu, Yong Wang, Shengxiang Yang, Ke Tang
In the evolutionary computation research community, the performance of most evolutionary algorithms (EAs) depends strongly on their implemented coordinate system.
no code implementations • 12 Feb 2017 • Yu Sun, Ke Tang, Zexuan Zhu, Xin Yao
Incremental learning with concept drift has often been tackled by ensemble methods, where models built in the past can be re-trained to attain new models for the current data.
no code implementations • 2 Dec 2016 • Liangpeng Zhang, Ke Tang, Xin Yao
We then provide empirical results to verify our approach, and demonstrate how the success probability of exploration can be used to analyse and predict the behaviours and possible outcomes of exploration, which are the keys to the answer of the important questions of exploration.
no code implementations • 11 Mar 2016 • Peng Yang, Ke Tang, Xin Yao
Divide and Conquer (DC) is conceptually well suited to high-dimensional optimization by decomposing a problem into multiple small-scale sub-problems.
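The decomposition idea can be sketched for a fully separable toy objective; real DC methods must also cope with interactions between variables, and `grid_sub_solver` is an illustrative stand-in for a sub-problem optimizer.

```python
def dc_optimize(targets, sub_solver, group_size=2):
    """Divide-and-Conquer sketch for the separable objective
    sum_i (x_i - targets[i])**2: split the variables into disjoint groups
    and optimize each low-dimensional sub-problem independently.
    """
    n = len(targets)
    solution = [0.0] * n
    for start in range(0, n, group_size):
        idx = range(start, min(start + group_size, n))
        # Solve the sub-problem over just these variables.
        block = sub_solver([targets[i] for i in idx])
        for i, v in zip(idx, block):
            solution[i] = v
    return solution

def grid_sub_solver(block_targets, lo=-5.0, hi=5.0, steps=101):
    """Toy sub-problem optimizer: per-coordinate exhaustive grid search."""
    grid = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return [min(grid, key=lambda x: (x - t) ** 2) for t in block_targets]
```

Because the toy objective is separable, solving the sub-problems independently recovers the global optimum; the hard part of real DC methods is grouping interacting variables so this remains approximately true.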
1 code implementation • 25 Jan 2016 • Guiying Li, Junlong Liu, Chunhui Jiang, Liangpeng Zhang, Minlong Lin, Ke Tang
R-CNN style methods are among the state-of-the-art object detection methods, which consist of region proposal generation and deep CNN classification.
no code implementations • 20 Apr 2015 • Ke Tang, Peng Yang, Xin Yao
This paper presents a new EA, namely Negatively Correlated Search (NCS), which maintains multiple individual search processes in parallel and models the search behaviors of individual search processes as probability distributions.
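A rough sketch of one NCS iteration, assuming isotropic Gaussian search distributions and substituting plain Euclidean distance to the nearest other process for the paper's Bhattacharyya-distance-based correlation measure:

```python
import math
import random

def ncs_step(means, sigma, objective, correlation_weight=1.0, rng=None):
    """One iteration of Negatively Correlated Search (simplified sketch).

    Each parallel search process is modeled as an isotropic Gaussian
    centered at its current solution. A candidate replaces its parent only
    if it improves a score that rewards both low objective value and
    distance from the other processes (negative correlation). Lower
    objective is better; higher distance is better.
    """
    rng = rng or random.Random()
    new_means = []
    for i, m in enumerate(means):
        cand = [x + rng.gauss(0, sigma) for x in m]
        others = means[:i] + means[i + 1:]

        def score(p):
            diversity = min(math.dist(p, o) for o in others)
            return objective(p) - correlation_weight * diversity

        new_means.append(cand if score(cand) < score(m) else m)
    return new_means
```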