no code implementations • 4 Apr 2023 • Kaan Gokcesu, Hakan Gokcesu
This paper focuses on optimal unimodal transformation of the score outputs of a univariate learning model under linear loss functions.
no code implementations • 30 Mar 2023 • Kaan Gokcesu, Hakan Gokcesu
We investigate the nonlinear regression problem under L2 loss (square loss) functions.
no code implementations • 24 Mar 2023 • Kaan Gokcesu, Hakan Gokcesu
This study presents an effective global optimization technique designed for multivariate functions that are Hölder continuous.
no code implementations • 19 Mar 2023 • Kaan Gokcesu, Hakan Gokcesu
Our study focuses on determining the best weight windows for a weighted moving average smoother under squared loss.
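For context, a plain weighted moving average smoother looks like the following minimal sketch (a generic baseline, not the optimal windows derived in the paper; the trailing-window convention and edge renormalization here are illustrative choices):

```python
def weighted_moving_average(x, w):
    """Smooth a sequence with a normalized weight window w.

    Each output sample is the weighted mean of the trailing window
    ending at that sample; near the start, the partial window is
    renormalized so the effective weights still sum to one.
    """
    k = len(w)
    total = sum(w)
    w = [wi / total for wi in w]  # normalize the window
    out = []
    for i in range(len(x)):
        lo = max(0, i - k + 1)          # use as much of the window as fits
        window = x[lo:i + 1]
        ws = w[-len(window):]
        norm = sum(ws)                  # renormalize the partial window
        out.append(sum(wi * xi for wi, xi in zip(ws, window)) / norm)
    return out
```

The choice of window weights (rectangle, triangle, exponential decay, etc.) is exactly the design variable the paper optimizes under squared loss.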
no code implementations • 14 Mar 2023 • Kaan Gokcesu, Hakan Gokcesu
Our research deals with the optimization version of the set partition problem, where the objective is to minimize the absolute difference between the sums of the two disjoint partitions.
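The objective can be illustrated with the textbook subset-sum dynamic program for two-way partitioning (a standard exponential-state baseline, not the paper's method):

```python
def min_partition_diff(nums):
    """Minimize |sum(A) - sum(B)| over all two-way partitions of nums.

    Track every achievable subset sum, then pick the one closest to
    half the total: the complement set receives the rest.
    """
    total = sum(nums)
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    # If one side sums to s, the difference is |total - 2*s|.
    best = min(reachable, key=lambda s: abs(total - 2 * s))
    return abs(total - 2 * best)
```

The set of reachable sums can grow exponentially for real-valued inputs, which is what makes the problem hard in general.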
no code implementations • 12 Mar 2023 • Kaan Gokcesu, Hakan Gokcesu
Our algorithm works from a universal prediction perspective, and the performance measure is the expected regret against arbitrary comparator sequences, i.e., the difference between our cumulative loss and that of a competing loss sequence.
no code implementations • 6 Sep 2022 • Kaan Gokcesu, Hakan Gokcesu
In this work, we propose a meta algorithm that can solve a multivariate global optimization problem using univariate global optimizers.
no code implementations • 29 Jun 2022 • Kaan Gokcesu, Hakan Gokcesu
We investigate an auto-regressive formulation for the problem of smoothing time-series by manipulating the inherent objective function of the traditional moving mean smoothers.
no code implementations • 6 Jun 2022 • Kaan Gokcesu, Hakan Gokcesu
In this work, we propose an efficient minimax optimal global optimization algorithm for multivariate Lipschitz continuous functions.
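The classical starting point for Lipschitz global optimization is the Piyavskii-Shubert scheme, sketched here in one dimension for minimization (an illustrative baseline; the paper's minimax-optimal multivariate algorithm is a different construction):

```python
def piyavskii_min(f, L, a, b, n_evals=20):
    """Piyavskii-Shubert minimization of an L-Lipschitz f on [a, b].

    Keep the evaluated points sorted; between neighbors the Lipschitz
    lower envelope is a "V" whose tip gives the next query point.
    """
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(n_evals - 2):
        best_bound, best_x = float("inf"), None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Tip of the lower envelope between x1 and x2.
            x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            bound = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if bound < best_bound:
                best_bound, best_x = bound, x
        pts.append((best_x, f(best_x)))
        pts.sort()
    return min(pts, key=lambda p: p[1])
```

Each round queries where the Lipschitz lower bound is smallest, so the gap between the best observed value and the best possible value shrinks as evaluations accumulate.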
no code implementations • 1 Jun 2022 • Kaan Gokcesu, Hakan Gokcesu
We study the sequential calibration of estimations in a quantized isotonic L2 regression setting.
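For reference, the batch isotonic L2 regression underlying this setting is solved exactly by the Pool Adjacent Violators algorithm (the classical offline solver, not the sequential calibration scheme of the paper):

```python
def isotonic_l2(y):
    """Pool Adjacent Violators: nondecreasing fit minimizing squared error.

    Maintain blocks as (sum, count) pairs; whenever a new block's mean
    drops below its predecessor's, merge them and re-check.
    """
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each block is fit by its mean
    return out
```

The fitted values form a nondecreasing staircase of block means, which is the structure the quantized sequential variant has to track online.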
no code implementations • 18 Apr 2022 • Kaan Gokcesu, Hakan Gokcesu
We propose a decomposition method for the spectral peaks in an observed frequency spectrum, which is efficiently acquired by utilizing the Fast Fourier Transform.
no code implementations • 13 Apr 2022 • Kaan Gokcesu, Hakan Gokcesu
We study the problem of expert advice under partial bandit feedback setting and create a sequential minimax optimal algorithm.
no code implementations • 27 Mar 2022 • Kaan Gokcesu, Hakan Gokcesu
We propose a multi-tone decomposition algorithm that can find the frequencies, amplitudes and phases of the fundamental sinusoids in a noisy observation sequence.
no code implementations • 22 Mar 2022 • Kaan Gokcesu, Hakan Gokcesu
We propose a new tournament structure that combines the popular knockout tournaments and the round-robin tournaments.
no code implementations • 15 Mar 2022 • Kaan Gokcesu, Hakan Gokcesu
We propose a nearest neighbor based clustering algorithm that results in a naturally defined hierarchy of clusters.
no code implementations • 10 Mar 2022 • Kaan Gokcesu, Hakan Gokcesu
We study the problem of multiway number partition optimization, which has a myriad of applications in the decision, learning and optimization literature.
no code implementations • 6 Mar 2022 • Kaan Gokcesu, Hakan Gokcesu
We show that the best rectangle window is optimal for such window definitions.
no code implementations • 22 Feb 2022 • Kaan Gokcesu, Hakan Gokcesu
We propose an extended generalization of the pseudo Huber loss formulation.
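For orientation, the standard Huber and pseudo-Huber losses that this formulation generalizes are (these are the textbook definitions, not the paper's extended family):

```python
import math

def huber(a, delta):
    """Classical Huber loss: quadratic near zero, linear in the tails."""
    if abs(a) <= delta:
        return 0.5 * a * a
    return delta * (abs(a) - 0.5 * delta)

def pseudo_huber(a, delta):
    """Smooth approximation: delta^2 * (sqrt(1 + (a/delta)^2) - 1).

    Behaves like a^2 / 2 for small |a| and like delta * |a| for large |a|,
    but is infinitely differentiable everywhere.
    """
    return delta * delta * (math.sqrt(1.0 + (a / delta) ** 2) - 1.0)
```

The smoothness of the pseudo-Huber variant is what makes it attractive for gradient-based optimization, and generalized formulations typically add shape parameters controlling the quadratic-to-linear transition.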
no code implementations • 18 Jan 2022 • Kaan Gokcesu, Hakan Gokcesu
For a search space of $[0, 1]$, our approach has at most $L\log (3T)$ and $2.25H$ regret for $L$-Lipschitz continuous and $H$-Lipschitz smooth functions, respectively.
no code implementations • 31 Oct 2021 • Kaan Gokcesu, Hakan Gokcesu
We start by studying the traditional square error setting with its weighted variant and show that the optimal monotone transform is in the form of a unique staircase function.
no code implementations • 28 Sep 2021 • Kaan Gokcesu, Hakan Gokcesu
We investigate the problem of online learning, which has gained significant attention in recent years due to its applicability in a wide range of fields from machine learning to game theory.
no code implementations • 19 Sep 2021 • Kaan Gokcesu, Hakan Gokcesu
We study the adversarial multi-armed bandit problem and create a completely online algorithmic framework that is invariant under arbitrary translations and scales of the arm losses.
no code implementations • 16 Sep 2021 • Kaan Gokcesu, Hakan Gokcesu
We study the optimization version of the equal cardinality set partition problem (where the absolute difference between the equal-sized partitions' sums is minimized).
no code implementations • 10 Sep 2021 • Kaan Gokcesu, Hakan Gokcesu
We study the optimization version of the set partition problem (where the difference between the partition sums is minimized), which has numerous applications in the decision theory literature.
no code implementations • 5 Sep 2021 • Kaan Gokcesu, Hakan Gokcesu
In this paper, we propose a nonparametric approach that can be used in envelope extraction, peak-burst detection and clustering in time series.
no code implementations • 28 Aug 2021 • Kaan Gokcesu, Hakan Gokcesu
We propose a generalized formulation of the Huber loss.
no code implementations • 24 Aug 2021 • Kaan Gokcesu, Hakan Gokcesu
For $L$-Lipschitz continuous functions, we show that the cumulative regret is $O(L\log T)$.
no code implementations • 19 Aug 2021 • Kaan Gokcesu, Hakan Gokcesu
In this work, we aim to calibrate the score outputs of an estimator for the binary classification problem by finding an 'optimal' mapping to class probabilities, where 'optimal' means the mapping that minimizes the classification error (or, equivalently, maximizes the accuracy).
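The simplest instance of this calibration objective is a single-threshold mapping, found by an exhaustive sweep over the observed scores (a brute-force illustration of the error-minimization criterion, not the paper's mapping construction):

```python
def best_threshold(scores, labels):
    """Find the score threshold minimizing binary classification error.

    Predict 1 when score >= threshold; candidate thresholds are the
    observed scores plus one value above the max ('always predict 0').
    """
    candidates = sorted(set(scores)) + [max(scores) + 1.0]
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((s >= t) != y for s, y in zip(scores, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err
```

A general 'optimal' mapping replaces the single cut with a monotone, possibly piecewise-constant transform, but the accuracy-maximization criterion is the same.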
no code implementations • 13 Aug 2021 • Kaan Gokcesu, Hakan Gokcesu
The best dynamic estimation sequence that we compete against is selected in hindsight with full observation of the loss functions and is allowed to select different optimal estimations in different time intervals (segments).
no code implementations • 19 Sep 2020 • Kaan Gokcesu, Hakan Gokcesu
By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications.
no code implementations • 9 Sep 2020 • Kaan Gokcesu, Hakan Gokcesu
In this work, we aim to create a completely online algorithmic framework for prediction with expert advice that is translation-free and scale-free of the expert losses.
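The classical baseline this line of work improves on is the exponentiated-weights (Hedge) forecaster, whose learning rate must be tuned to the loss scale — precisely the dependence a translation-free and scale-free framework removes. A minimal sketch of the baseline:

```python
import math

def hedge(expert_losses, eta):
    """Classical exponentiated-weights (Hedge) forecaster.

    expert_losses: T x K list of per-round losses for K experts.
    Returns the per-round losses of the weighted mixture, with weights
    proportional to exp(-eta * cumulative loss of each expert).
    """
    K = len(expert_losses[0])
    cum = [0.0] * K
    mix_losses = []
    for round_losses in expert_losses:
        m = min(cum)  # subtract the min for numerical stability
        w = [math.exp(-eta * (c - m)) for c in cum]
        z = sum(w)
        p = [wi / z for wi in w]
        mix_losses.append(sum(pi * li for pi, li in zip(p, round_losses)))
        cum = [c + l for c, l in zip(cum, round_losses)]
    return mix_losses
```

Note that rescaling or shifting all losses changes the effective learning rate here, which is why fixed-eta Hedge is neither translation-free nor scale-free.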
no code implementations • 25 Nov 2019 • N. Mert Vural, Hakan Gokcesu, Kaan Gokcesu, Suleyman S. Kozat
To construct our algorithm, we introduce a new expert advice algorithm for the multiple-play setting.
no code implementations • 29 May 2019 • Hakan Gokcesu, Kaan Gokcesu, Suleyman Serdar Kozat
We study the min-max optimization problem where each function contributing to the max operation is strongly convex and smooth with a bounded gradient in the search domain.
no code implementations • 5 Dec 2016 • Mohammadreza Mohaghegh Neyshabouri, Kaan Gokcesu, Huseyin Ozkan, Suleyman S. Kozat
Therefore, we design our algorithms based on the optimal adaptive combination and asymptotically achieve the performance of the best mapping as well as the best arm selection policy.