Search Results for author: Tung Pham

Found 16 papers, 6 papers with code

Diversity-Aware Agnostic Ensemble of Sharpness Minimizers

no code implementations 19 Mar 2024 Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le

There has long been plenty of theoretical and empirical evidence supporting the success of ensemble learning.

Ensemble Learning

Robust Diffusion GAN using Semi-Unbalanced Optimal Transport

no code implementations 28 Nov 2023 Quan Dao, Binh Ta, Tung Pham, Anh Tran

Diffusion models, a type of generative model, have demonstrated great potential for synthesizing highly detailed images.

Image Generation

Robust Contrastive Learning With Theory Guarantee

no code implementations 16 Nov 2023 Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le

Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.

Contrastive Learning

Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

no code implementations 1 Oct 2023 Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan

Recent works have shown that deep neural networks are vulnerable to adversarial examples: samples that stay close to the original image yet cause the model to misclassify.

Sharpness & Shift-Aware Self-Supervised Learning

no code implementations 17 May 2023 Ngoc N. Tran, Son Duong, Hoang Phan, Tung Pham, Dinh Phung, Trung Le

Self-supervised learning aims to extract meaningful features from unlabeled data for further downstream tasks.

Classification · Contrastive Learning +2

Entropic Gromov-Wasserstein between Gaussian Distributions

no code implementations 24 Aug 2021 Khang Le, Dung Le, Huy Nguyen, Dat Do, Tung Pham, Nhat Ho

When the metric is the inner product, a case we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions.

Improving Mini-batch Optimal Transport via Partial Transportation

2 code implementations 22 Aug 2021 Khai Nguyen, Dang Nguyen, The-Anh Vu-Le, Tung Pham, Nhat Ho

Mini-batch optimal transport (m-OT) has been widely used recently to deal with the memory issue of OT in large-scale applications.

Partial Domain Adaptation
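
To make the mini-batch idea above concrete, here is a minimal, hedged sketch of plain m-OT (not the paper's partial-transport variant): the full OT cost is approximated by averaging exact OT costs over randomly paired mini-batches. For uniform, equal-size batches, exact OT reduces to an assignment problem; all function names and parameters below are illustrative, not the released code.

```python
# Minimal m-OT sketch: approximate OT(X, Y) by averaging exact OT costs
# over randomly paired mini-batches. For two uniform, equal-size batches,
# exact OT reduces to an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def batch_ot_cost(xb, yb):
    """Exact OT cost between two equal-size, uniformly weighted batches."""
    C = cdist(xb, yb, metric="sqeuclidean")   # pairwise ground cost
    rows, cols = linear_sum_assignment(C)     # optimal one-to-one matching
    return C[rows, cols].mean()

def minibatch_ot(X, Y, batch_size=64, n_pairs=50, seed=0):
    """m-OT estimate: average OT cost over randomly sampled mini-batch pairs."""
    rng = np.random.default_rng(seed)
    costs = []
    for _ in range(n_pairs):
        xb = X[rng.choice(len(X), batch_size, replace=False)]
        yb = Y[rng.choice(len(Y), batch_size, replace=False)]
        costs.append(batch_ot_cost(xb, yb))
    return float(np.mean(costs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(2000, 2))
    Y = rng.normal(2.0, 1.0, size=(2000, 2))
    print("m-OT estimate:", minibatch_ot(X, Y))
```

The partial-transport variant proposed in the paper relaxes each per-batch problem so that only part of the mass must be transported, rather than matching every sample in the batch.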

On Multimarginal Partial Optimal Transport: Equivalent Forms and Computational Complexity

no code implementations 18 Aug 2021 Khang Le, Huy Nguyen, Tung Pham, Nhat Ho

We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multimarginal POT problem with a computational complexity upper bound of the order $\tilde{\mathcal{O}}(m^3(n+1)^{m}/ \varepsilon^2)$, where $\varepsilon > 0$ stands for the desired tolerance.

On Robust Optimal Transport: Computational Complexity and Barycenter Computation

no code implementations NeurIPS 2021 Khang Le, Huy Nguyen, Quang Nguyen, Tung Pham, Hung Bui, Nhat Ho

We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence.

On Transportation of Mini-batches: A Hierarchical Approach

2 code implementations 11 Feb 2021 Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho

To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), which finds the optimal coupling between mini-batches and can be seen as an approximation to a well-defined distance on the space of probability measures.

Domain Adaptation
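
The hierarchical scheme above can be sketched at a similarly coarse level: an outer OT problem is solved over the mini-batches themselves, using the inner per-batch OT cost as the outer ground cost. The snippet below assumes equally many uniform, equal-size batches on both sides so that both levels reduce to assignment problems; names and constants are illustrative and this is not the released implementation.

```python
# Hedged sketch of a two-level ("batch of mini-batches") OT estimate:
# inner level: exact OT cost between two equal-size uniform mini-batches,
# outer level: exact OT over the k x k matrix of inner costs.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def inner_ot(xb, yb):
    C = cdist(xb, yb, metric="sqeuclidean")
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

def bomb_ot_sketch(X, Y, k=10, batch_size=64, seed=0):
    rng = np.random.default_rng(seed)
    xbatches = [X[rng.choice(len(X), batch_size, replace=False)] for _ in range(k)]
    ybatches = [Y[rng.choice(len(Y), batch_size, replace=False)] for _ in range(k)]
    # Outer ground cost: inner OT cost between every pair of mini-batches.
    D = np.array([[inner_ot(xb, yb) for yb in ybatches] for xb in xbatches])
    r, c = linear_sum_assignment(D)   # optimal coupling of mini-batches
    return D[r, c].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(0.0, 1.0, size=(2000, 2))
    Y = rng.normal(2.0, 1.0, size=(2000, 2))
    print("BoMb-OT-style estimate:", bomb_ot_sketch(X, Y))
```

The contrast with plain m-OT is that mini-batches are coupled to each other before their costs are aggregated, rather than being paired at random.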

Point-set Distances for Learning Representations of 3D Point Clouds

1 code implementation ICCV 2021 Trung Nguyen, Quang-Hieu Pham, Tam Le, Tung Pham, Nhat Ho, Binh-Son Hua

From this study, we propose to use sliced Wasserstein distance and its variants for learning representations of 3D point clouds.

Point Cloud Registration · Transfer Learning
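
Sliced Wasserstein distance, used above as a point-set distance, is straightforward to sketch: project both clouds onto random directions and average the closed-form 1D Wasserstein distances. The snippet below is a generic implementation for equal-size, uniformly weighted point clouds, not the repository's code.

```python
# Minimal sliced Wasserstein (SW2) distance between two point clouds of the
# same size with uniform weights: average the 1D Wasserstein-2 distances
# over random projection directions (1D OT has a sorting-based closed form).
import numpy as np

def sliced_wasserstein(X, Y, n_projections=200, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(n_projections, d))            # random directions
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # on the unit sphere
    xp = np.sort(X @ theta.T, axis=0)                      # project and sort
    yp = np.sort(Y @ theta.T, axis=0)
    return float(np.sqrt(np.mean((xp - yp) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud_a = rng.normal(size=(1024, 3))          # toy 3D point cloud
    cloud_b = rng.normal(size=(1024, 3)) + 0.5    # shifted copy
    print("SW2 distance:", sliced_wasserstein(cloud_a, cloud_b))
```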

Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein

2 code implementations ICLR 2021 Khai Nguyen, Son Nguyen, Nhat Ho, Tung Pham, Hung Bui

To improve the discrepancy and consequently the relational regularization, we propose a new relational discrepancy, named spherical sliced fused Gromov Wasserstein (SSFG), that can find an important area of projections characterized by a von Mises-Fisher distribution.

Image Generation

Distributional Sliced-Wasserstein and Applications to Generative Modeling

1 code implementation ICLR 2021 Khai Nguyen, Nhat Ho, Tung Pham, Hung Bui

Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been widely used in recent years due to their fast computation and scalability even when the probability measures lie in a very high dimensional space.

Informativeness
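
For context on the two baselines named above: SW averages 1D Wasserstein distances over random directions, while Max-SW keeps only the most discriminative direction (the paper's distributional variant instead learns a distribution over directions). The sketch below approximates the Max-SW maximization by a search over sampled directions rather than the gradient-based optimization normally used; everything in it is illustrative.

```python
# Crude contrast between SW (average over directions) and Max-SW
# (worst-case direction), with the max approximated by random search.
import numpy as np

def one_d_w2_sq(x_proj, y_proj):
    """Squared 1D Wasserstein-2 between equal-size uniform samples, per column."""
    return np.mean((np.sort(x_proj, axis=0) - np.sort(y_proj, axis=0)) ** 2, axis=0)

def sw_and_max_sw(X, Y, n_directions=500, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_directions, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    w2_sq = one_d_w2_sq(X @ theta.T, Y @ theta.T)   # one value per direction
    return float(np.sqrt(w2_sq.mean())), float(np.sqrt(w2_sq.max()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    Y = np.copy(X)
    Y[:, 0] += 2.0                                  # differ along one axis only
    sw, max_sw = sw_and_max_sw(X, Y)
    print(f"SW ~ {sw:.3f}, Max-SW ~ {max_sw:.3f}")  # Max-SW reflects the shift more
```

On data that differ along a single axis, most random directions are uninformative, which is the kind of drawback that Max-SW and the distributional variant are meant to address.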

On Unbalanced Optimal Transport: An Analysis of Sinkhorn Algorithm

1 code implementation ICML 2020 Khiem Pham, Khang Le, Nhat Ho, Tung Pham, Hung Bui

We provide a computational complexity analysis for the Sinkhorn algorithm that solves the entropic regularized Unbalanced Optimal Transport (UOT) problem between two measures of possibly different masses with at most $n$ components.
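
A minimal, hedged sketch of the entropic UOT Sinkhorn iteration that the analysis above concerns: with entropic regularization eps and KL penalties of strength tau on both marginals, the usual Sinkhorn scaling updates are simply raised to the power tau/(tau + eps). Iteration count, parameters, and variable names here are illustrative; see the released code for the authors' implementation.

```python
# Generic Sinkhorn iteration for entropic Unbalanced OT (KL-relaxed marginals).
# With regularization eps and marginal penalty tau, each scaling update is the
# balanced Sinkhorn update raised to the power tau / (tau + eps).
import numpy as np

def sinkhorn_uot(a, b, C, eps=0.1, tau=1.0, n_iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    power = tau / (tau + eps)
    for _ in range(n_iters):
        u = (a / (K @ v)) ** power
        v = (b / (K.T @ u)) ** power
    return u[:, None] * K * v[None, :]   # relaxed transport plan

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=(50, 1))
    y = rng.normal(2.0, 1.0, size=(80, 1))
    a = np.full(50, 1.0 / 50)            # total mass 1
    b = np.full(80, 2.0 / 80)            # total mass 2 (unbalanced)
    C = (x - y.T) ** 2                   # squared-distance ground cost
    P = sinkhorn_uot(a, b, C)
    print("total transported mass:", P.sum())   # marginals need not be matched exactly
```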

Scalable Support Vector Clustering Using Budget

no code implementations 19 Sep 2017 Tung Pham, Trung Le, Hang Dang

In this paper, we propose applying the Stochastic Gradient Descent (SGD) framework to the first phase of support-based clustering, which finds the domain of novelty, together with a new strategy for the clustering assignment step.

Clustering · Outlier Detection
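
As a rough illustration of the first phase described above, the sketch below runs mini-batch SGD on a plain SVDD-style objective (a small sphere enclosing most of the data, i.e. the domain of novelty). It is not the paper's budgeted kernel algorithm; the objective, step-size schedule, and names are assumptions made for the example.

```python
# Mini-batch SGD on an SVDD-style objective: find a center c and squared
# radius r2 minimizing  r2 + (1/(nu*m)) * sum_i max(0, ||x_i - c||^2 - r2).
import numpy as np

def svdd_sgd(X, nu=0.1, lr=0.01, batch_size=32, n_iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    c = X.mean(axis=0).astype(float)                      # start at the data mean
    r2 = float(np.mean(np.sum((X - c) ** 2, axis=1)))     # start at mean squared distance
    for t in range(n_iters):
        B = X[rng.choice(len(X), batch_size, replace=False)]
        viol = np.sum((B - c) ** 2, axis=1) > r2          # points outside the sphere
        # Mini-batch subgradients of the objective above.
        grad_c = (2.0 / (nu * batch_size)) * np.sum(c - B[viol], axis=0)
        grad_r2 = 1.0 - viol.mean() / nu
        step = lr / (1.0 + 0.01 * t)                      # decaying step size
        c = c - step * grad_c
        r2 = max(r2 - step * grad_r2, 0.0)
    return c, r2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(500, 2))
    c, r2 = svdd_sgd(X)
    outside = int(np.sum(np.sum((X - c) ** 2, axis=1) > r2))
    # Roughly a nu-fraction of the points should end up outside the sphere.
    print(f"center ~ {c.round(2)}, radius ~ {np.sqrt(r2):.2f}, flagged {outside}/{len(X)}")
```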
