Search Results for author: Marten van Dijk

Found 25 papers, 4 papers with code

Quantifying and Mitigating Privacy Risks for Tabular Generative Models

no code implementations12 Mar 2024 Chaoyi Zhu, Jiayi Tang, Hans Brouwer, Juan F. Pérez, Marten van Dijk, Lydia Y. Chen

The backbone technology of tabular synthesizers is rooted in image generative models, ranging from Generative Adversarial Networks (GANs) to recent diffusion models.

Privacy Preserving

Considerations on the Theory of Training Models with Differential Privacy

no code implementations8 Mar 2023 Marten van Dijk, Phuong Ha Nguyen

In federated learning, collaborative training is carried out by a set of clients who each want to remain in control of how their local training data is used; in particular, how can each client's local training data remain private?

Federated Learning

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis

no code implementations19 Dec 2022 Quoc Tran-Dinh, Marten van Dijk

In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants.

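The components mentioned in the chapter abstract can be illustrated with a toy sketch (not taken from the chapter itself): deterministic gradient descent versus a stochastic variant on the one-dimensional objective f(w) = 0.5 * w^2, whose gradient is simply w. The noise level, learning rate, and function choice are all illustrative assumptions.

```python
import random

def grad_descent(w0, lr, steps):
    # Deterministic gradient descent: w <- w - lr * f'(w), with f'(w) = w.
    w = w0
    for _ in range(steps):
        w -= lr * w
    return w

def sgd(w0, lr, steps, noise=0.1, seed=0):
    # Stochastic variant: the gradient is observed with zero-mean Gaussian noise.
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        g = w + rng.gauss(0.0, noise)  # noisy gradient estimate
        w -= lr * g
    return w

w_gd = grad_descent(5.0, 0.1, 200)   # converges to the minimizer w = 0
w_sgd = sgd(5.0, 0.1, 200)           # hovers near 0 due to gradient noise
```

With a constant step size, the deterministic iterate contracts geometrically toward the minimizer, while the stochastic iterate only reaches a noise-dependent neighborhood of it.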

Generalizing DP-SGD with Shuffling and Batch Clipping

no code implementations12 Dec 2022 Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen

Classical differentially private DP-SGD implements individual clipping with random subsampling, which forces a mini-batch SGD approach.
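A hedged sketch of the classical DP-SGD step described above: clip each per-example gradient to norm at most C on a randomly subsampled batch, sum, and add Gaussian noise calibrated to C. The parameter values and helper names are illustrative, not from the paper.

```python
import math
import random

def clip(g, C):
    # Scale the per-example gradient g (a list of floats) to norm at most C.
    norm = math.sqrt(sum(x * x for x in g))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [x * scale for x in g]

def dp_sgd_step(per_example_grads, C=1.0, sigma=1.0, sample_rate=0.5, seed=0):
    # Poisson-style subsampling: each example joins the batch independently.
    rng = random.Random(seed)
    batch = [g for g in per_example_grads if rng.random() < sample_rate]
    if not batch:
        return [0.0] * len(per_example_grads[0])
    dim = len(batch[0])
    summed = [sum(clip(g, C)[i] for g in batch) for i in range(dim)]
    # Gaussian noise with standard deviation sigma * C hides any one example.
    noisy = [s + rng.gauss(0.0, sigma * C) for s in summed]
    return [x / len(batch) for x in noisy]

grads = [[3.0, 4.0], [0.3, 0.4], [1.0, 0.0]]
# sigma=0 and sample_rate=1.0 isolate the clipping behavior for inspection.
update = dp_sgd_step(grads, C=1.0, sigma=0.0, sample_rate=1.0)
```

The paper's point is that this individual clipping plus subsampling structure is what ties classical DP-SGD to mini-batch SGD; shuffling and batch clipping generalize it.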

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

1 code implementation26 Nov 2022 Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk

First, how can the low transferability between defenses be utilized in a game theoretic framework to improve the robustness?

Adversarial Defense

Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution

no code implementations7 Feb 2022 Lam M. Nguyen, Trang H. Tran, Marten van Dijk

How and under what assumptions is guaranteed convergence to a \textit{global} minimum possible?

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks

no code implementations29 Sep 2021 Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk

In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.


New Perspective on the Global Convergence of Finite-Sum Optimization

no code implementations29 Sep 2021 Lam M. Nguyen, Trang H. Tran, Marten van Dijk

How and under what assumptions is guaranteed convergence to a \textit{global} minimum possible?

Proactive DP: A Multiple Target Optimization Framework for DP-SGD

no code implementations17 Feb 2021 Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen

Generally, DP-SGD is $(\epsilon\leq 1/2,\delta=1/N)$-DP if $\sigma=\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$ with $T$ at least $\approx 2k^2/\epsilon$ and $(2/e)^2k^2-1/2\geq \ln(N)$, where $T$ is the total number of rounds, and $K=kN$ is the total number of gradient computations where $k$ measures $K$ in number of epochs of size $N$ of the local data set.
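The noise formula quoted above can be evaluated directly. The sketch below simply computes $\sigma=\sqrt{2(\epsilon+\ln(1/\delta))/\epsilon}$ with $\delta=1/N$; the example values of $\epsilon$ and $N$ are illustrative.

```python
import math

def dp_sigma(epsilon, N):
    # sigma = sqrt(2 * (epsilon + ln(1/delta)) / epsilon) with delta = 1/N,
    # as stated for (epsilon <= 1/2, delta = 1/N)-DP.
    delta = 1.0 / N
    return math.sqrt(2.0 * (epsilon + math.log(1.0 / delta)) / epsilon)

# For a data set of N = 10,000 records at epsilon = 0.5:
sigma = dp_sigma(0.5, 10_000)
```

Note how $\sigma$ grows only logarithmically in $N$ (through $\ln(1/\delta)$) but sharply as $\epsilon$ shrinks.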


Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

no code implementations27 Oct 2020 Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen

We consider big data analysis where training data is distributed among local data sets in a heterogeneous way -- and we wish to move SGD computations to local compute nodes where local data resides.
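One way to see the appeal of linearly increasing mini-batch sizes is that, for a fixed budget of gradient computations, growing batches mean far fewer rounds (and hence far less communication between local compute nodes and the server). The schedule below is a hypothetical illustration, not the paper's exact recipe.

```python
def linear_batch_schedule(b0, growth, K):
    # Mini-batch size at round t is b0 + growth * t, capped so the total
    # number of gradient computations is exactly K.
    sizes, used, t = [], 0, 0
    while used < K:
        b = min(b0 + growth * t, K - used)
        sizes.append(b)
        used += b
        t += 1
    return sizes

# Budget of K = 1000 gradient computations, batches growing by 4 per round:
sched = linear_batch_schedule(b0=4, growth=4, K=1000)
rounds = len(sched)   # far fewer than the 250 rounds a constant batch of 4 needs
```

With a constant batch of 4, the same budget would take 250 rounds; the linear schedule finishes in a few dozen, which is the communication saving the abstract alludes to.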

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples

1 code implementation18 Jun 2020 Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen

We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.

A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

1 code implementation1 Mar 2020 Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh

We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator for policy optimization.

Reinforcement Learning (RL)
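The hybrid estimator combines the two ingredients named in the abstract: an unbiased REINFORCE gradient and a biased SARAH-style recursive estimate. A minimal numeric sketch of that convex combination on plain vectors, with an assumed mixing weight beta (the actual estimator in the paper operates on policy gradients, not raw vectors):

```python
def hybrid_estimator(reinforce_grad, prev_estimate, sarah_correction, beta=0.5):
    # v_t = beta * u_t + (1 - beta) * (v_{t-1} + delta_t), where u_t is the
    # unbiased REINFORCE gradient and delta_t a SARAH-style difference term.
    return [beta * u + (1.0 - beta) * (v + d)
            for u, v, d in zip(reinforce_grad, prev_estimate, sarah_correction)]

v = hybrid_estimator([1.0, 2.0], [0.8, 1.6], [0.1, 0.2], beta=0.5)
```

Setting beta = 1 recovers pure REINFORCE (unbiased, high variance); beta = 0 recovers a pure recursive SARAH-style update (biased, lower variance); intermediate beta trades the two off.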

Finite-Sum Smooth Optimization with SARAH

no code implementations22 Jan 2019 Lam M. Nguyen, Marten van Dijk, Dzung T. Phan, Phuong Ha Nguyen, Tsui-Wei Weng, Jayant R. Kalagnanam

The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function $F(w)=\frac{1}{n} \sum_{i=1}^n f_i(w)$ has been proven to be at least $\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$ where $\epsilon$ denotes the attained accuracy $\mathbb{E}[ \|\nabla F(\tilde{w})\|^2] \leq \epsilon$ for the outputted approximation $\tilde{w}$ (Fang et al., 2018).

DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD

no code implementations22 Jan 2019 Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Marten van Dijk

This paper contains some inconsistent results, i.e., some of our claims fail because of mistakes in applying the convergence test criterion for a series.


New Convergence Aspects of Stochastic Gradient Algorithms

no code implementations10 Nov 2018 Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk

We show the convergence of SGD for strongly convex objective function without using bounded gradient assumption when $\{\eta_t\}$ is a diminishing sequence and $\sum_{t=0}^\infty \eta_t \rightarrow \infty$.
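The step-size condition in this result ($\eta_t$ diminishing with divergent sum) can be illustrated on the strongly convex toy objective f(w) = 0.5 * w^2, using exact gradients for brevity; the specific sequence $\eta_t = 1/(t+2)$ is an assumption chosen to satisfy the condition.

```python
def sgd_diminishing(w0, steps):
    # eta_t = 1/(t+2): eta_t -> 0, yet sum_t eta_t diverges, so the
    # iterates can still travel arbitrarily far and reach the minimizer.
    w = w0
    for t in range(steps):
        eta = 1.0 / (t + 2)
        w -= eta * w          # gradient of 0.5 * w^2 is w
    return w

w_final = sgd_diminishing(10.0, 1000)
```

Here the update telescopes to w0 * (1/2)(2/3)...(1000/1001) = w0 / 1001, so the iterate shrinks toward 0 without any bounded-gradient assumption, matching the spirit of the result above.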

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD

no code implementations9 Oct 2018 Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan

We study Stochastic Gradient Descent (SGD) with diminishing step sizes for convex objective functions.

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption

no code implementations ICML 2018 Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč

In (Bottou et al., 2016), a new analysis of convergence of SGD is performed under the assumption that stochastic gradients are bounded with respect to the true gradient norm.


Intrinsically Reliable and Lightweight Physical Obfuscated Keys

no code implementations21 Mar 2017 Raihan Sayeed Khan, Nadim Kanan, Chenglu Jin, Jake Scoggin, Nafisa Noor, Sadid Muneer, Faruk Dirisaglik, Phuong Ha Nguyen, Helena Silva, Marten van Dijk, Ali Gokirmak

Physical Obfuscated Keys (POKs) allow tamper-resistant storage of random keys based on physical disorder.

Cryptography and Security
