no code implementations • 29 Jan 2024 • Wei Yao, Chengming Yu, Shangzhi Zeng, Jin Zhang
To address this challenge, we begin by devising a smooth proximal Lagrangian value function to handle the constrained lower-level problem.
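As background for the value-function viewpoint, a standard construction (not the paper's specific smoothed proximal Lagrangian, which additionally handles lower-level constraints and adds a proximal regularization) replaces the lower-level problem with an inequality constraint on its optimal value:

```latex
% Standard value-function reformulation of a bilevel program
% (background sketch, not the paper's exact construction):
% v(x) is the optimal value of the lower-level problem at x.
v(x) = \min_{y \in Y(x)} f(x, y), \qquad
\min_{x,\; y \in Y(x)} F(x, y) \quad \text{s.t.} \quad f(x, y) \le v(x).
```

The difficulty this line of work addresses is that $v(x)$ is in general nonsmooth, which motivates smooth surrogates such as the proximal Lagrangian value function above.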
no code implementations • 29 Jun 2023 • Lucy L. Gao, Jane J. Ye, Haian Yin, Shangzhi Zeng, Jin Zhang
In a recent study by Ye et al. (2023), a value function-based difference of convex algorithm was introduced to address bilevel programs.
no code implementations • 11 Feb 2023 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks.

1 code implementation • 7 Feb 2023 • Risheng Liu, Yaohua Liu, Wei Yao, Shangzhi Zeng, Jin Zhang
Gradient methods have become mainstream techniques for Bi-Level Optimization (BLO) in learning fields.
no code implementations • 16 Jun 2022 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
Recently, Optimization-Derived Learning (ODL), which designs learning models from the perspective of optimization, has attracted attention from the learning and vision areas.
1 code implementation • 13 Jun 2022 • Lucy Gao, Jane J. Ye, Haian Yin, Shangzhi Zeng, Jin Zhang
Gradient-based optimization methods for hyperparameter tuning guarantee theoretical convergence to stationary solutions when, for fixed upper-level variable values, the lower level of the bilevel program is strongly convex (LLSC) and smooth (LLS).
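To illustrate why LLSC and LLS matter, here is a minimal sketch (a hypothetical toy problem, not from the paper) of the standard implicit-differentiation hypergradient: strong convexity makes the lower-level solution unique and differentiable in the upper-level variable, so the upper-level objective can be driven by exact gradients.

```python
import numpy as np

# Hypothetical toy bilevel problem (illustrative, not the paper's setting):
#   upper: F(x, y) = 0.5 * ||y - y_target||^2
#   lower: y*(x) = argmin_y 0.5 * y^T A y - x^T y   (strongly convex in y)
A = np.array([[3.0, 1.0], [1.0, 2.0]])      # symmetric positive definite -> LLSC holds
y_target = np.array([1.0, -1.0])

def lower_solution(x):
    # Strong convexity gives a unique minimizer, here in closed form: A y = x
    return np.linalg.solve(A, x)

def hypergradient(x):
    # Implicit function theorem: dy*/dx = -(grad_yy f)^{-1} grad_xy f.
    # Here grad_yy f = A and grad_xy f = -I, so dy*/dx = A^{-1} and
    # dF/dx = (dy*/dx)^T grad_y F = A^{-T} (y*(x) - y_target).
    y = lower_solution(x)
    return np.linalg.solve(A.T, y - y_target)

# Plain gradient descent on the upper-level variable using exact hypergradients
x = np.zeros(2)
for _ in range(200):
    x = x - 0.5 * hypergradient(x)
```

Without LLSC the lower-level solution set may not be a singleton, the implicit-function step breaks down, and this simple scheme loses its guarantees, which is the regime the paper targets.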
no code implementations • 20 May 2022 • Risheng Liu, Xuan Liu, Wei Yao, Shangzhi Zeng, Jin Zhang
Gradient methods have become mainstream techniques for Bi-Level Optimization (BLO) in learning and vision fields.
1 code implementation • 11 Oct 2021 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
We also extend BVFSM to address BLO with additional functional constraints.
1 code implementation • NeurIPS 2021 • Risheng Liu, Yaohua Liu, Shangzhi Zeng, Jin Zhang
In particular, by introducing an auxiliary variable as initialization to guide the optimization dynamics and designing a pessimistic trajectory truncation operation, we construct a reliable approximate version of the original BLO in the absence of the LLC hypothesis.
no code implementations • 15 Jun 2021 • Risheng Liu, Xuan Liu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
The bi-level optimization model captures a wide range of complex learning tasks of practical interest.
1 code implementation • 16 Feb 2021 • Risheng Liu, Pan Mu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
In this work, we formulate BLOs from an optimistic bi-level viewpoint and establish a new gradient-based algorithmic framework, named Bi-level Descent Aggregation (BDA), to partially address the above issues.
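A minimal sketch of the aggregation idea, combining upper- and lower-level descent directions in the inner update, on an assumed toy quadratic instance with assumed step sizes; this is schematic and not the authors' exact BDA scheme:

```python
# Toy quadratic bilevel instance (illustrative only, not from the paper):
#   upper: F(x, y) = 0.5 * y**2        -> minimized at y = 0
#   lower: f(x, y) = 0.5 * (y - x)**2  -> minimized at y = x
def grad_F_y(x, y):
    return y

def grad_f_y(x, y):
    return y - x

def aggregated_inner_step(x, y, alpha=0.3, s_u=0.5, s_l=0.5):
    # Convex combination of the upper- and lower-level descent directions:
    # the aggregation idea behind BDA (alpha and the step sizes are assumed).
    d = alpha * s_u * grad_F_y(x, y) + (1 - alpha) * s_l * grad_f_y(x, y)
    return y - d

x, y = 2.0, 0.0
for _ in range(100):
    y = aggregated_inner_step(x, y)
# y settles between the lower-level solution (2.0) and the
# upper-level minimizer (0.0), reflecting the optimistic aggregation.
```

The point of aggregating, rather than following the lower-level gradient alone, is that the inner trajectory carries upper-level information, which is what lets such schemes cope with non-singleton lower-level solution sets.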
no code implementations • ICML 2020 • Risheng Liu, Pan Mu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
In recent years, a variety of gradient-based first-order methods have been developed to solve bi-level optimization problems for learning applications.
no code implementations • 6 Jul 2019 • Risheng Liu, Long Ma, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
This paper first proposes a convex bilevel optimization paradigm to formulate and optimize popular learning and vision problems arising in real-world scenarios.