Search Results for author: Gabriel Turinici

Found 14 papers, 0 papers with code

Optimal time sampling in physics-informed neural networks

no code implementations 29 Apr 2024 Gabriel Turinici

Physics-informed neural networks (PINNs) are an extremely powerful paradigm used to solve equations encountered in scientific computing applications.

Transformer for Time Series: an Application to the S&P500

no code implementations 4 Mar 2024 Pierre Brugiere, Gabriel Turinici

Transformer models have been used extensively, with good results, in a wide range of machine learning applications, including Large Language Models and image generation.

Image Generation, Time Series

On the Convergence Rate of the Stochastic Gradient Descent (SGD) and application to a modified policy gradient for the Multi-Armed Bandit

no code implementations 9 Feb 2024 Stefana Anita, Gabriel Turinici

We present a self-contained proof of the convergence rate of the Stochastic Gradient Descent (SGD) when the learning rate follows an inverse time decay schedule; we then apply the results to the convergence of a modified form of the policy gradient Multi-Armed Bandit (MAB) with $L2$ regularization.

L2 Regularization
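The inverse time decay schedule can be illustrated on a toy stochastic problem; the quadratic objective and the constants below are illustrative choices, not taken from the paper. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = E[(x - z)^2] / 2 with z ~ N(3, 1), so x* = 3.
x = 0.0
eta0 = 1.0  # initial learning rate (illustrative)

for t in range(1, 10001):
    z = rng.normal(3.0, 1.0)   # one stochastic sample
    grad = x - z               # unbiased gradient estimate of f at x
    x -= (eta0 / t) * grad     # inverse time decay: eta_t = eta0 / t

print(x)
```

With eta_t = 1/t on this quadratic, the iterate is exactly the running mean of the samples, so it converges to 3 with fluctuations of order 1/sqrt(t).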

Onflow: an online portfolio allocation algorithm

no code implementations 8 Dec 2023 Gabriel Turinici, Pierre Brugiere

We introduce Onflow, a reinforcement learning technique that enables online optimization of portfolio allocation policies based on gradient flows.

Stochastic Optimization
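For intuition on gradient-flow-style online allocation, here is a hypothetical sketch (not the actual Onflow update rule): online gradient ascent on the one-period log-wealth of a softmax-parameterized portfolio, run on synthetic returns.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gross returns for n assets over T periods (illustrative data).
T, n = 500, 3
returns = 1.001 + 0.02 * rng.standard_normal((T, n))

theta = np.zeros(n)   # unconstrained parameters; softmax maps them to the simplex
lr = 0.5
wealth = 1.0

for r in returns:
    w = np.exp(theta) / np.exp(theta).sum()  # current allocation on the simplex
    growth = w @ r                           # one-period gross return
    wealth *= growth
    # gradient of log(w @ r) w.r.t. theta, through the softmax Jacobian
    grad_w = r / growth
    theta += lr * w * (grad_w - w @ grad_w)

print(wealth, w)
```

The softmax parameterization keeps the allocation automatically positive and summing to one, so no projection step is needed.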

High order universal portfolios

no code implementations 22 Nov 2023 Gabriel Turinici

The Cover universal portfolio (UP from now on) has many interesting theoretical and numerical properties and has been investigated for a long time.
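Cover's UP has a compact numerical description: with a uniform prior, its next-period weights are the wealth-weighted average of the constant-rebalanced portfolio (CRP) weights. A two-asset sketch on illustrative return data:

```python
import numpy as np

# Gross returns per period for 2 assets (illustrative data).
x = np.array([[1.10, 0.90],
              [0.80, 1.20],
              [1.05, 1.00],
              [0.90, 1.10]])

bs = np.linspace(0.0, 1.0, 101)  # weight on asset 1 for each CRP on the grid
wealth = np.ones_like(bs)        # running wealth of each CRP
up_wealth = 1.0                  # running wealth of the universal portfolio

for r in x:
    # UP's next weight: wealth-weighted average of the CRP weights
    b_up = (bs * wealth).sum() / wealth.sum()
    up_wealth *= b_up * r[0] + (1 - b_up) * r[1]
    wealth *= bs * r[0] + (1 - bs) * r[1]

print(up_wealth)
```

A useful sanity check: with a uniform prior, the final UP wealth equals the average of the final CRP wealths, which is what makes the grid approximation consistent.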

Deep Conditional Measure Quantization

no code implementations 17 Jan 2023 Gabriel Turinici

Quantization of a probability measure means representing it with a finite set of Dirac masses that approximates the input distribution well enough (in some metric space of probability measures).

Quantization

Huber-energy measure quantization

no code implementations 15 Dec 2022 Gabriel Turinici

We describe a measure quantization procedure, i.e., an algorithm which finds the best approximation of a target probability law (and, more generally, of a signed finite-variation measure) by a sum of $Q$ Dirac masses ($Q$ being the quantization parameter).

Quantization, Stochastic Optimization
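A simplified stand-in for such a procedure (a hypothetical sketch using the plain one-dimensional energy distance rather than the paper's Huber-energy functional): stochastic gradient descent on the positions of Q Dirac masses approximating N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(2)

# Q points carrying equal mass 1/Q; minimize (up to a constant) the energy
# E(y) = (2/(Q n)) sum_ij |y_i - z_j| - (1/Q^2) sum_ik |y_i - y_k|.
Q = 5
y = rng.normal(size=Q)   # initial quantization points
lr = 0.05

for t in range(2000):
    z = rng.normal(size=64)  # fresh samples of the target law
    # attraction toward the samples, repulsion between the points
    g_attract = 2.0 / (Q * z.size) * np.sign(y[:, None] - z[None, :]).sum(axis=1)
    g_repulse = -2.0 / Q**2 * np.sign(y[:, None] - y[None, :]).sum(axis=1)
    y -= lr * (g_attract + g_repulse)

print(np.sort(y))
```

The attraction term pulls each point toward the mass of the target, while the repulsion term keeps the points spread apart, yielding quantile-like locations.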

Diversity in deep generative models and generative AI

no code implementations 19 Feb 2022 Gabriel Turinici

Decoder-based generative machine learning algorithms, such as Generative Adversarial Networks (GANs), Variational Auto-Encoders (VAEs), and Transformers, show impressive results when constructing objects similar to those in a training ensemble.

BIG-bench Machine Learning, Image Generation +1

Algorithms that get old: the case of generative deep neural networks

no code implementations 7 Feb 2022 Gabriel Turinici

Generative deep neural networks used in machine learning, such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), produce new objects each time they are asked to, under the constraint that the new objects remain similar to a list of examples given as input.

Convergence dynamics of Generative Adversarial Networks: the dual metric flows

no code implementations 18 Dec 2020 Gabriel Turinici

Fitting neural networks often relies on stochastic (or similar) gradient descent, which is a noise-tolerant (and efficient) way of resolving gradient descent dynamics.

Architectures of epidemic models: accommodating constraints from empirical and clinical data

no code implementations 15 Dec 2020 Gabriel Turinici

Deterministic compartmental models have been used extensively in modeling epidemic propagation.
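As background on the compartmental approach, a minimal deterministic SIR model (the standard textbook formulation, not the specific architectures the paper discusses) can be integrated with explicit Euler steps; the rates below are illustrative:

```python
# SIR compartmental model: susceptible S, infected I, recovered R,
# integrated with explicit Euler steps (illustrative rates).
beta, gamma = 0.3, 0.1     # transmission and recovery rates (R0 = beta/gamma = 3)
S, I, R = 0.99, 0.01, 0.0  # initial fractions of the population
dt, steps = 0.1, 2000

for _ in range(steps):
    dS = -beta * S * I         # new infections leave S
    dI = beta * S * I - gamma * I
    dR = gamma * I             # recoveries leave I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

print(S, I, R)
```

Since dS + dI + dR = 0 at every step, the total population fraction S + I + R is conserved exactly by the scheme.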

Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent

no code implementations 20 Feb 2020 Imen Ayadi, Gabriel Turinici

The minimization of the loss function is of paramount importance in deep neural networks.

Radon Sobolev Variational Auto-Encoders

no code implementations 29 Nov 2019 Gabriel Turinici

The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance.

Stochastic learning control of inhomogeneous quantum ensembles

no code implementations 7 Jun 2019 Gabriel Turinici

In quantum control, the robustness with respect to uncertainties in the system's parameters or driving field characteristics is of paramount importance and has been studied theoretically, numerically and experimentally.
