no code implementations • 3 Feb 2024 • Yuma Ichikawa, Hiroaki Iwashita
Additionally, the formulated objective functions and constraints are often only approximations of real-world scenarios, so the optimal solution of the formulation is not necessarily the best solution to the original real-world problem.
no code implementations • 24 Oct 2023 • Yuma Ichikawa, Koji Hukushima
To mitigate this problem, an adjustable hyperparameter $\beta$ and a strategy for annealing this parameter, called KL annealing, are proposed.
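As a rough illustration of the idea, KL annealing gradually increases the weight $\beta$ on the KL term of a VAE-style objective over training. The linear schedule and the helper names below are assumptions for the sketch, not the paper's exact formulation:

```python
def annealed_beta(step, anneal_steps, beta_max=1.0):
    """Linearly ramp beta from 0 to beta_max over anneal_steps (assumed schedule)."""
    return beta_max * min(1.0, step / anneal_steps)

def beta_elbo_loss(recon_loss, kl_div, beta):
    """beta-weighted negative ELBO: reconstruction term plus beta times the KL term."""
    return recon_loss + beta * kl_div
```

Early in training the KL term is down-weighted so the decoder can learn to reconstruct; the penalty is then ramped up toward its full strength.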
no code implementations • 29 Sep 2023 • Yuma Ichikawa
Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network whose output provides a soft solution by directly optimizing the CO objective using a continuous relaxation strategy.
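A minimal sketch of the continuous-relaxation strategy, using MaxCut as an illustrative CO problem (the problem choice, the sigmoid parameterization, and plain gradient ascent are assumptions of this sketch, not the specific architecture of any paper above). Binary cut variables are relaxed to soft probabilities $p_i = \sigma(\theta_i)$, the relaxed objective is optimized directly, and the soft solution is rounded at the end:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def solve_relaxed_maxcut(edges, n, steps=500, lr=0.5, seed=0):
    """Optimize the relaxed MaxCut objective
    sum_{(i,j) in E} p_i(1 - p_j) + p_j(1 - p_i), with p = sigmoid(theta)."""
    rng = np.random.default_rng(seed)
    theta = 0.1 * rng.standard_normal(n)  # continuous parameters behind the soft solution
    for _ in range(steps):
        p = sigmoid(theta)
        grad = np.zeros(n)
        for i, j in edges:
            # per-edge term is p_i + p_j - 2 p_i p_j, so d/dp_i = 1 - 2 p_j;
            # chain rule through the sigmoid gives the factor p (1 - p)
            grad[i] += (1 - 2 * p[j]) * p[i] * (1 - p[i])
            grad[j] += (1 - 2 * p[i]) * p[j] * (1 - p[j])
        theta += lr * grad  # gradient ascent on the relaxed CO objective
    return (sigmoid(theta) > 0.5).astype(int)  # round the soft solution to binary
```

In the UL-based solvers described above, a neural network would produce the soft assignment instead of the free parameters `theta`; the relaxation and rounding steps are the same in spirit.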
no code implementations • 14 Sep 2023 • Yuma Ichikawa, Koji Hukushima
This paper presents a closed-form expression for the relationship between the hyperparameter $\beta$ in a VAE, the dataset size, posterior collapse, and the rate-distortion curve, obtained by analyzing a minimal VAE in a high-dimensional limit.
no code implementations • 25 Nov 2022 • Yuma Ichikawa, Akira Nakagawa, Hiromoto Masayuki, Yuhei Umeda
However, SLMC methods are difficult to directly apply to multimodal distributions for which training data are difficult to obtain.