no code implementations • 3 May 2024 • Changliang Zhou, Xi Lin, Zhenkun Wang, Xialiang Tong, Mingxuan Yuan, Qingfu Zhang
The neural combinatorial optimization (NCO) approach has shown great potential for solving routing problems without the requirement of expert knowledge.
1 code implementation • 25 Apr 2024 • Yiming Yao, Fei Liu, Ji Cheng, Qingfu Zhang
Many real-world optimization scenarios involve expensive evaluation with unknown and heterogeneous costs.
no code implementations • 23 Apr 2024 • Zhe Zhao, Pengkun Wang, Xu Wang, Haibin Wen, Xiaolong Xie, Zhengyang Zhou, Qingfu Zhang, Yang Wang
Pre-training GNNs to extract transferable knowledge and apply it to downstream tasks has become the de facto standard of graph representation learning.
no code implementations • 30 Mar 2024 • Ping Guo, Qingfu Zhang, Xi Lin
In many real-world applications, the Pareto Set (PS) of a continuous multiobjective optimization problem can be a piecewise continuous manifold.
no code implementations • 28 Mar 2024 • Fu Luo, Xi Lin, Zhenkun Wang, Xialiang Tong, Mingxuan Yuan, Qingfu Zhang
The end-to-end neural combinatorial optimization (NCO) method shows promising performance in solving complex combinatorial optimization problems without the need for expert design.
no code implementations • 8 Mar 2024 • Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang
To address this gap, we propose a new metric, termed adversarial hypervolume, which comprehensively assesses the robustness of deep learning models over a range of perturbation intensities from a multi-objective optimization standpoint.
no code implementations • 29 Feb 2024 • Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, Qingfu Zhang
Multi-objective optimization problems can be found in many real-world applications, where the objectives often conflict with each other and cannot be optimized by a single solution.
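Such a conflict is usually formalized through Pareto dominance: one solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal illustrative check (a generic sketch, not code from the paper):

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

print(dominates([1.0, 2.0], [2.0, 3.0]))  # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: a genuine trade-off, neither dominates
```

When neither vector dominates the other, both solutions are candidates for the Pareto set, which is why a single solution cannot settle a conflicting problem.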
1 code implementation • 23 Feb 2024 • Fei Liu, Xi Lin, Zhenkun Wang, Qingfu Zhang, Xialiang Tong, Mingxuan Yuan
The results show that the unified model achieves superior performance on the eleven VRPs, reducing the average gap from over 20% for existing approaches to around 5%, and delivers a significant performance boost on benchmark datasets as well as a real-world logistics application.
no code implementations • 14 Feb 2024 • Xiaoyuan Zhang, Xi Lin, Yichi Zhang, Yifan Chen, Qingfu Zhang
Multiobjective optimization (MOO) is prevalent in numerous applications, in which a Pareto front (PF) is constructed to display optima under various preferences.
no code implementations • 14 Feb 2024 • Xiaoyuan Zhang, Xi Lin, Qingfu Zhang
It is desirable in many multi-objective machine learning applications, such as multi-task learning with conflicting objectives and multi-objective reinforcement learning, to find a Pareto solution that can match a given preference of a decision maker.
no code implementations • 3 Feb 2024 • Yifan Zhong, Chengdong Ma, Xiaoyuan Zhang, Ziran Yang, Qingfu Zhang, Siyuan Qi, Yaodong Yang
Our work marks a step forward in effectively and efficiently aligning models to diverse and intricate human preferences in a controllable and Pareto-optimal manner.
2 code implementations • 27 Jan 2024 • Ping Guo, Fei Liu, Xi Lin, Qingchuan Zhao, Qingfu Zhang
In the rapidly evolving field of machine learning, adversarial attacks present a significant challenge to model robustness and security.
no code implementations • 19 Jan 2024 • Ping Guo, Zhiyuan Yang, Xi Lin, Qingchuan Zhao, Qingfu Zhang
Black-box query-based attacks constitute significant threats to Machine Learning as a Service (MLaaS) systems since they can generate adversarial examples without accessing the target model's architecture and parameters.
3 code implementations • 4 Jan 2024 • Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, Qingfu Zhang
Heuristics are indispensable for tackling complex search and optimization problems.
2 code implementations • 26 Nov 2023 • Fei Liu, Xialiang Tong, Mingxuan Yuan, Qingfu Zhang
In this paper, we propose a novel approach called Algorithm Evolution using Large Language Model (AEL).
no code implementations • 31 Oct 2023 • Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Qingfu Zhang
In this work, we make the first attempt to incorporate the structure constraints into the whole solution set by a single Pareto set model, which can be efficiently learned by a simple evolutionary stochastic optimization method.
1 code implementation • 19 Oct 2023 • Fei Liu, Xi Lin, Zhenkun Wang, Shunyu Yao, Xialiang Tong, Mingxuan Yuan, Qingfu Zhang
It is also promising that an operator learned from only a few instances can generalize robustly to unseen problems with quite different patterns and settings.
1 code implementation • NeurIPS 2023 • Fu Luo, Xi Lin, Fei Liu, Qingfu Zhang, Zhenkun Wang
Neural combinatorial optimization (NCO) is a promising learning-based approach for solving challenging combinatorial optimization problems without specialized algorithm design by experts.
1 code implementation • 24 Jul 2023 • Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang
Homotopy optimization is a traditional method to deal with a complicated optimization problem by solving a sequence of easy-to-hard surrogate subproblems.
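The easy-to-hard schedule can be sketched generically: blend the gradient of an easy surrogate with that of the hard target, gradually shift the weight toward the hard problem, and warm-start each subproblem from the previous solution. A minimal sketch with hypothetical one-dimensional objectives (not the paper's method):

```python
def homotopy_minimize(easy_grad, hard_grad, x0,
                      ts=(0.0, 0.25, 0.5, 0.75, 1.0), lr=0.05, steps=500):
    """Solve a sequence of blended subproblems whose gradient is
    (1 - t) * easy_grad + t * hard_grad, warm-starting each stage
    from the solution of the previous (easier) one."""
    x = float(x0)
    for t in ts:
        for _ in range(steps):
            x -= lr * ((1.0 - t) * easy_grad(x) + t * hard_grad(x))
    return x

easy_grad = lambda x: 2.0 * x            # gradient of the convex surrogate x^2
hard_grad = lambda x: 4*x**3 - 4*x       # gradient of x^4 - 2x^2, minima at +/-1
x_star = homotopy_minimize(easy_grad, hard_grad, x0=0.3)
print(round(x_star, 3))  # 1.0
```

At t = 0 only the convex surrogate is solved; at t = 1 the original hard objective is recovered, with each intermediate subproblem initialized at the previous optimum.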
no code implementations • 1 Mar 2023 • Fei Liu, Chengyu Lu, Lin Gui, Qingfu Zhang, Xialiang Tong, Mingxuan Yuan
Vehicle routing is a well-known optimization research topic with significant practical importance.
1 code implementation • 16 Oct 2022 • Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang
This work represents the first attempt to model the Pareto set for expensive multi-objective optimization.
1 code implementation • 29 Mar 2022 • Xi Lin, Zhiyuan Yang, Qingfu Zhang
In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
no code implementations • 8 Nov 2021 • Jianfei Guo, Zhiyuan Yang, Xi Lin, Qingfu Zhang
By representing object instances within the same category as shape and appearance variation of a shared NeRF template, our proposed method can achieve dense shape correspondences reasoning on images for a wide range of object classes.
no code implementations • ICLR 2022 • Xi Lin, Zhiyuan Yang, Qingfu Zhang
In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
1 code implementation • ICCV 2021 • Zhiyu Zhu, Hui Liu, Junhui Hou, Huanqiang Zeng, Qingfu Zhang
Specifically, based on the intrinsic imaging degradation model that produces RGB images from HS images, we progressively spread the differences between the input RGB images and the RGB images re-projected from the recovered HS images, via effective unsupervised estimation of the camera spectral response function.
1 code implementation • 12 Aug 2021 • Zhiyu Zhu, Hui Liu, Junhui Hou, Sen Jia, Qingfu Zhang
Then, we design a lightweight neural network with a multi-stage architecture to mimic the formed amended gradient descent process, in which efficient convolution and novel spectral zero-mean normalization are proposed to effectively extract spatial-spectral features for regressing an initialization, a basic gradient, and an incremental gradient.
1 code implementation • 8 Jul 2021 • Zhaoyi Yan, Ruimao Zhang, Hongzhi Zhang, Qingfu Zhang, WangMeng Zuo
One of the main issues in this task is how to handle the dramatic scale variations of pedestrians caused by the perspective effect.
1 code implementation • 2 Mar 2021 • Yuheng Jia, Hui Liu, Junhui Hou, Sam Kwong, Qingfu Zhang
Inspired by ensemble clustering, which seeks a better clustering result from a set of clustering results, we propose self-supervised SNMF (S$^3$NMF), which progressively boosts clustering performance by exploiting SNMF's sensitivity to initialization, without relying on any additional information.
1 code implementation • 16 Dec 2020 • Yuheng Jia, Hui Liu, Junhui Hou, Qingfu Zhang
Existing clustering ensemble methods generally construct a co-association matrix, which indicates the pairwise similarity between samples, as a weighted linear combination of the connective matrices from the different base clusterings; the resulting co-association matrix is then used as the input of an off-the-shelf clustering algorithm, e.g., spectral clustering.
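The co-association construction described here is compact enough to sketch directly; the toy base labelings below are hypothetical:

```python
import numpy as np

def co_association(base_labelings, weights=None):
    """Weighted co-association matrix: entry (i, j) is the weighted fraction of
    base clusterings that assign samples i and j to the same cluster."""
    L = np.asarray(base_labelings)                 # shape (m clusterings, n samples)
    m, n = L.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    C = np.zeros((n, n))
    for labels, wk in zip(L, w):
        # connective matrix of one base clustering: 1 where labels agree
        C += wk * (labels[:, None] == labels[None, :])
    return C

C = co_association([[0, 0, 1, 1],
                    [0, 1, 1, 1]])
print(C[0, 1], C[2, 3])  # 0.5 1.0
```

Samples 0 and 1 are co-clustered in one of the two base clusterings (entry 0.5), while samples 2 and 3 are co-clustered in both (entry 1.0); this symmetric matrix is what gets handed to a standard clustering algorithm.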
2 code implementations • 6 Dec 2020 • Zhihao Peng, Yuheng Jia, Hui Liu, Junhui Hou, Qingfu Zhang
Furthermore, we design a novel framework to explicitly decouple the auto-encoder module and the self-expressiveness module.
no code implementations • 13 Oct 2020 • Xi Lin, Zhiyuan Yang, Qingfu Zhang, Sam Kwong
With a fixed model capacity, the tasks can conflict with each other, and the system usually has to make a trade-off among learning all of them together.
1 code implementation • 6 Jun 2020 • Jianyong Sun, Wei Zheng, Qingfu Zhang, Zongben Xu
Based on the new encoding method and the two objectives, a multiobjective evolutionary algorithm (MOEA) based on NSGA-II, termed continuous encoding MOEA, is developed for the transformed community detection problem with continuous decision variables.
no code implementations • 30 Apr 2020 • Yuheng Jia, Hui Liu, Junhui Hou, Sam Kwong, Qingfu Zhang
On the basis of the novel tensor low-rank norm, we formulate MVSC as a convex low-rank tensor recovery problem, which is then solved efficiently and iteratively with an augmented Lagrange multiplier method.
no code implementations • 15 Apr 2020 • Geoffrey Pruvost, Bilel Derbel, Arnaud Liefooghe, Ke Li, Qingfu Zhang
This paper intends to understand and to improve the working principle of decomposition-based multi-objective evolutionary algorithms.
1 code implementation • NeurIPS 2019 • Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, Sam Kwong
Recently, a novel method was proposed to find one single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multiobjective optimization.
no code implementations • 12 Nov 2019 • Jialong Shi, Jianyong Sun, Qingfu Zhang
For a sum-of-the-parts combinatorial optimization problem, we propose to decompose its original objective into two sub-objectives with controllable correlation.
no code implementations • 14 May 2019 • Jialong Shi, Jianyong Sun, Qingfu Zhang, Kai Ye
We first define the Homotopic Convex (HC) transformation of a TSP as a convex combination of a well-constructed simple TSP and the original TSP.
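The convex combination itself is straightforward to write down at the level of distance matrices; the instances below are hypothetical stand-ins (the paper's "simple TSP" is well-constructed, not arbitrary):

```python
import numpy as np

def hc_instance(d_simple, d_original, lam):
    """Convex combination of a simple TSP and the original TSP:
    lam = 0 gives the easy instance, lam = 1 recovers the original."""
    return (1.0 - lam) * np.asarray(d_simple, float) + lam * np.asarray(d_original, float)

def tour_length(d, tour):
    """Length of a closed tour under distance matrix d."""
    return sum(d[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

d_easy = np.ones((4, 4)) - np.eye(4)        # trivial instance: every tour has equal length
d_orig = np.array([[0, 2, 9, 1],
                   [2, 0, 3, 7],
                   [9, 3, 0, 4],
                   [1, 7, 4, 0]], float)
d_mid = hc_instance(d_easy, d_orig, lam=0.5)
print(tour_length(d_mid, [0, 1, 2, 3]))  # 7.0, halfway between the two instances' tour lengths
```

Sweeping lam from 0 to 1 yields a sequence of instances whose landscape morphs from the easy problem into the original one, which is the homotopy being exploited.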
no code implementations • 4 Nov 2018 • Hui-Ling Zhen, Xi Lin, Alan Z. Tang, Zhenhua Li, Qingfu Zhang, Sam Kwong
Different from them, in this paper, we aim to link the generalization ability of a deep network to optimizing a new objective function.
no code implementations • 4 Nov 2018 • Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, Sam Kwong
The proposed algorithm uses the Bayesian neural network as the scalable surrogate model.
no code implementations • 8 Jun 2018 • Xinye Cai, Haoran Sun, Chunyang Zhu, Zhenyu Li, Qingfu Zhang
In this paper, an evolutionary many-objective optimization algorithm based on corner solution search (MaOEA-CS) is proposed.
no code implementations • 11 Oct 2017 • Zhenhua Li, Qingfu Zhang
In this paper, we propose an efficient approximated rank one update for covariance matrix adaptation evolution strategy (CMA-ES).
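For reference, the exact (non-approximated) rank-one update that such work accelerates has the standard form C <- (1 - c1) C + c1 p_c p_c^T, pulling the search distribution toward the evolution path. A minimal sketch of that standard update, not the paper's approximation:

```python
import numpy as np

def rank_one_update(C, p_c, c1):
    """Exact rank-one covariance update of CMA-ES:
    C <- (1 - c1) * C + c1 * p_c p_c^T."""
    p = np.asarray(p_c, float).reshape(-1, 1)
    return (1.0 - c1) * np.asarray(C, float) + c1 * (p @ p.T)

# Starting from the identity, an evolution path along the first axis
# increases the variance in that direction and shrinks it elsewhere.
C = rank_one_update(np.eye(2), p_c=[2.0, 0.0], c1=0.1)
```

The cost of this update is dominated by the outer product and, in full CMA-ES, by the subsequent eigendecomposition of C, which is what motivates cheaper approximations.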
no code implementations • 22 Sep 2017 • Jialong Shi, Qingfu Zhang, Edward Tsang
EB-GLS records and maintains an elite solution as an estimate of the globally optimal solution, and reduces the chance of penalizing the features in this solution.
no code implementations • 15 Sep 2017 • Zhun Fan, Wenji Li, Xinye Cai, Hui Li, Caimin Wei, Qingfu Zhang, Kalyanmoy Deb, Erik D. Goodman
Compared with other CMOEAs, the proposed PPS method can more efficiently get across infeasible regions and converge to the feasible and non-dominated regions by applying push and pull search strategies at different stages.
no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi
In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.
no code implementations • 7 Apr 2017 • Mengyuan Wu, Ke Li, Sam Kwong, Qingfu Zhang
It decomposes a multi-objective optimization problem into several single-objective optimization subproblems, each of which is usually defined as a scalarizing function using a weight vector.
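A common choice of scalarizing function in this decomposition family is the Tchebycheff function. A minimal sketch (the weight vectors and ideal point below are hypothetical, not taken from the paper):

```python
def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    Minimizing g for different weight vectors w targets different Pareto points."""
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z_star))

# Same objective vector, two weight vectors -> two different scalar subproblems.
print(tchebycheff([1.0, 3.0], w=[0.5, 0.5], z_star=[0.0, 0.0]))  # 1.5
print(tchebycheff([1.0, 3.0], w=[0.9, 0.1], z_star=[0.0, 0.0]))  # 0.9
```

Each weight vector turns the multi-objective problem into a single-objective subproblem, and a spread of weight vectors yields a spread of solutions along the Pareto front.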
no code implementations • 21 Dec 2016 • Zhun Fan, Wenji Li, Xinye Cai, Hui Li, Caimin Wei, Qingfu Zhang, Kalyanmoy Deb, Erik D. Goodman
Multi-objective evolutionary algorithms (MOEAs) have progressed significantly in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems.
no code implementations • 30 Aug 2016 • Mengyuan Wu, Ke Li, Sam Kwong, Yu Zhou, Qingfu Zhang
In particular, the stable matching between subproblems and solutions, which achieves an equilibrium between their mutual preferences, implicitly strikes a balance between convergence and diversity.
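The stable-matching mechanism referred to here is the classic deferred-acceptance procedure. A generic Gale-Shapley sketch with hypothetical preference lists (not the paper's actual selection operator, where preferences are derived from scalarizing values and distances):

```python
def stable_matching(sub_prefs, sol_prefs):
    """Deferred acceptance: each subproblem proposes to solutions in preference
    order; a solution keeps the proposer it prefers most so far. The result is
    stable: no subproblem-solution pair prefers each other over their matches."""
    n = len(sub_prefs)
    rank = [{s: r for r, s in enumerate(p)} for p in sol_prefs]  # rank[sol][sub]
    next_pick = [0] * n          # next preference index each subproblem will try
    holder = [None] * n          # holder[sol] = subproblem currently held
    free = list(range(n))
    while free:
        sub = free.pop()
        sol = sub_prefs[sub][next_pick[sub]]
        next_pick[sub] += 1
        if holder[sol] is None:
            holder[sol] = sub
        elif rank[sol][sub] < rank[sol][holder[sol]]:
            free.append(holder[sol])   # displace the less-preferred proposer
            holder[sol] = sub
        else:
            free.append(sub)           # rejected; will try its next choice
    return {holder[sol]: sol for sol in range(n)}

# Both subproblems prefer solution 0, but solution 0 prefers subproblem 1,
# so subproblem 0 settles for solution 1.
matching = stable_matching([[0, 1], [0, 1]], [[1, 0], [0, 1]])
print(matching)  # {1: 0, 0: 1}
```

The equilibrium property is exactly the balance mentioned above: no subproblem-solution pair would both rather be matched to each other than to their assigned partners.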
no code implementations • 16 Jun 2016 • Jianyong Sun, Hu Zhang, Aimin Zhou, Qingfu Zhang
Evolutionary algorithms (EAs) have been well acknowledged as a promising paradigm for solving optimisation problems with multiple conflicting objectives in the sense that they are able to locate a set of diverse approximations of Pareto optimal solutions in a single run.