Bilevel Optimization
96 papers with code • 0 benchmarks • 0 datasets
Bilevel optimization is a branch of optimization in which one optimization problem is nested within the constraints of another. The outer task is usually referred to as the upper-level task, and the nested inner task as the lower-level task. The lower-level problem acts as a constraint: only an optimal solution to the lower-level problem is a feasible candidate for the upper-level problem.
Source: Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization
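The nesting described above can be illustrated with a toy problem (a minimal sketch with a made-up quadratic objective, not taken from any paper below): the upper level searches over x, but each candidate x is only evaluated at the lower level's own optimal response y*(x).

```python
import numpy as np

# Toy bilevel problem (illustrative only):
#   upper level: min_x  F(x, y*(x)) = (x - 1)^2 + y*(x)^2
#   lower level: y*(x) = argmin_y  f(x, y) = (y - x)^2
# The lower-level solution is y*(x) = x, so the reduced upper-level
# objective is (x - 1)^2 + x^2, which is minimized at x = 0.5.

def lower_level_solution(x):
    # argmin_y (y - x)^2 has the closed form y* = x
    return x

def upper_objective(x):
    # Only the pair (x, y*(x)) is feasible for the upper level:
    # the lower-level problem acts as a constraint.
    y_star = lower_level_solution(x)
    return (x - 1.0) ** 2 + y_star ** 2

# Grid search over the upper-level variable.
xs = np.linspace(-2.0, 2.0, 4001)
best_x = xs[np.argmin([upper_objective(x) for x in xs])]
print(best_x)  # 0.5
```

Here the lower level has a closed-form solution, which keeps the example short; in practice the inner argmin itself requires an iterative solver, which is what makes bilevel problems hard.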
Benchmarks
These leaderboards are used to track progress in Bilevel Optimization
Most implemented papers
BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning
Meta-learning (i.e., learning to learn) has recently emerged as a promising paradigm for a variety of applications.
Bilevel Optimization: Convergence Analysis and Enhanced Design
For the AID-based method, we improve the previous convergence rate analysis by an order, owing to a more practical parameter selection and a warm-start strategy; for the ITD-based method, we establish the first theoretical convergence rate.
BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing
Computerized adaptive testing (CAT) refers to a form of testing that is personalized to each student/test taker.
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start
We analyse a general class of bilevel problems, in which the upper-level problem consists in the minimization of a smooth objective function and the lower-level problem is to find the fixed point of a smooth contraction map.
Single-level Adversarial Data Synthesis based on Neural Tangent Kernels
In this paper, we propose a new generative model called the generative adversarial NTK (GA-NTK) that has a single-level objective.
Nystrom Method for Accurate and Scalable Implicit Differentiation
The essential difficulty of gradient-based bilevel optimization using implicit differentiation is to estimate the inverse Hessian vector product with respect to neural network parameters.
Self-Supervised Dataset Distillation for Transfer Learning
To achieve this, we also introduce the MSE between representations of the inner model and the self-supervised target model on the original full dataset for outer optimization.
Embarrassingly Simple Dataset Distillation
Re-examining the foundational back-propagation through time method, we study the pronounced variance in the gradients, computational burden, and long-term dependencies.
Convex and Bilevel Optimization for Neuro-Symbolic Inference and Learning
We address a key challenge for neuro-symbolic (NeSy) systems by leveraging convex and bilevel optimization techniques to develop a general gradient-based framework for end-to-end neural and symbolic parameter learning.
Multi-rendezvous Spacecraft Trajectory Optimization with Beam P-ACO
The design of spacecraft trajectories for missions visiting multiple celestial bodies is here framed as a multi-objective bilevel optimization problem.