2 code implementations • 19 Dec 2023 • Viet Nguyen, Giang Vu, Tung Nguyen Thanh, Khoat Than, Toan Tran
To minimize that gap, we propose a novel sequence-aware loss that reduces the estimation gap and thereby improves sampling quality.
no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le
Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.
no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
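As background, the standard InfoNCE objective widely used in contrastive learning can be sketched as follows; this is a generic formulation for illustration, not the specific loss studied in the paper:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE loss; z1, z2 are (N, D) embeddings of two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # matching rows/columns are positives; all other pairs act as negatives
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```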
1 code implementation • 20 Oct 2023 • Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi
To mitigate such difficulties, we introduce SigFormer, a novel deep learning model that combines the power of path signatures and transformers to handle sequential data, particularly in cases with irregularities.
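For intuition, the first two levels of a path signature can be computed directly from a path's increments; a minimal numpy sketch of that background computation (not the SigFormer implementation):

```python
import numpy as np

def depth2_signature(path):
    """path: (T, d) array of points; returns signature levels 1 and 2."""
    dx = np.diff(path, axis=0)                    # (T-1, d) segment increments
    level1 = dx.sum(axis=0)                       # S^(i): total increment
    prev = np.vstack([np.zeros(path.shape[1]), np.cumsum(dx, axis=0)[:-1]])
    # iterated integrals of a piecewise-linear path (Chen's formula)
    level2 = prev.T @ dx + 0.5 * dx.T @ dx        # S^(i,j)
    return level1, level2
```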
no code implementations • 29 May 2023 • Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran
In this paper, we propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains, yielding representations better suited to the classification task.
1 code implementation • ICCV 2023 • Tuong Do, Binh X. Nguyen, Vuong Pham, Toan Tran, Erman Tjiputra, Quang D. Tran, Anh Nguyen
In this paper, we present a new multigraph topology for cross-silo federated learning.
1 code implementation • 4 Jun 2022 • Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung
Furthermore, an analysis of its asymptotic properties shows that SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of that problem.
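For reference, a single SVGD update with an RBF kernel follows the standard form from Liu and Wang (2016); a minimal numpy sketch, with the step size and bandwidth as illustrative defaults:

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, eps=0.1):
    """One SVGD update; x: (n, d) particles, grad_logp(x) -> (n, d)."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]          # diff[i, j] = x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h))    # RBF kernel matrix
    drive = k @ grad_logp(x)                      # kernel-smoothed gradients
    repulse = (k[..., None] * diff).sum(axis=1) / h  # grad of kernel: repulsion
    return x + eps * (drive + repulse) / n

# usage: 100 particles drifting toward a standard 2-D Gaussian
x = np.random.randn(100, 2) * 3 + 5
for _ in range(500):
    x = svgd_step(x, lambda x: -x)                # grad log p for N(0, I)
```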
no code implementations • ICLR 2022 • Hieu Vu, Toan Tran, Man-Chung Yue, Viet Anh Nguyen
Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines.
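As background, standard PCA reduces to a singular value decomposition of the centered data matrix; a minimal numpy sketch of this baseline:

```python
import numpy as np

def pca(X, k):
    """X: (n, d) data matrix; returns top-k principal directions and scores."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # principal directions
    scores = Xc @ components.T                    # projections onto them
    return components, scores
```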
no code implementations • NeurIPS 2021 • Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung
Domain adaptation (DA) benefits from rigorous theoretical work studying its characteristics and various aspects, e.g., learning domain-invariant representations and the associated trade-offs.
1 code implementation • NeurIPS 2021 • Manh-Ha Bui, Toan Tran, Anh Tuan Tran, Dinh Phung
We empirically show that mDSDI achieves results competitive with state-of-the-art DG techniques.
no code implementations • 29 Sep 2021 • Long Tung Vuong, Trung Quoc Phung, Toan Tran, Anh Tuan Tran, Dinh Phung, Trung Le
To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains.
1 code implementation • ICLR 2022 • A. Tuan Nguyen, Toan Tran, Yarin Gal, Philip H. S. Torr, Atılım Güneş Baydin
A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain.
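The canonical instance of this marginal-alignment approach is adversarial training with a gradient-reversal layer (DANN); a minimal PyTorch sketch in which the module names (encoder, domain_head) and dimensions are illustrative assumptions, not the listed paper's method:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign going backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
domain_head = nn.Linear(32, 2)                    # predicts source vs. target

def domain_alignment_loss(x, domain_labels, lam=1.0):
    z = encoder(x)
    # the encoder is trained to fool this classifier, pushing the marginal
    # feature distributions of source and target toward each other
    logits = domain_head(GradReverse.apply(z, lam))
    return nn.functional.cross_entropy(logits, domain_labels)
```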
1 code implementation • NeurIPS 2021 • A. Tuan Nguyen, Toan Tran, Yarin Gal, Atılım Güneş Baydin
Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains.
no code implementations • 1 Jan 2021 • Toan Tran, Hieu Vu, Gustavo Carneiro, Hung Bui
Label noise arises naturally during data collection and annotation, and has been shown to significantly harm the performance of deep learning models, reducing accuracy and increasing sample complexity.
no code implementations • 21 Dec 2020 • Anh Tong, Toan Tran, Hung Bui, Jaesik Choi
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models since each kernel structure has different model complexity and data fitness.
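As a minimal illustration of the underlying model-selection criterion, candidate kernel structures can be compared by their fitted log marginal likelihood in scikit-learn; this is generic background, not the paper's search procedure:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

X = np.random.rand(50, 1) * 10                    # toy 1-D regression data
y = np.sin(X).ravel() + 0.1 * np.random.randn(50)

# score each candidate kernel structure by its log marginal likelihood
for kernel in [RBF(), Matern(nu=1.5), RationalQuadratic()]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(type(kernel).__name__, gp.log_marginal_likelihood_value_)
```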
no code implementations • 26 Apr 2019 • Toan Tran, Thanh-Toan Do, Ian Reid, Gustavo Carneiro
Deep learning models have demonstrated outstanding performance on several problems, but training them tends to require immense amounts of computational and human resources for optimization and labeling, constraining the types of problems that can be tackled.
no code implementations • CVPR 2019 • Thanh-Toan Do, Toan Tran, Ian Reid, Vijay Kumar, Tuan Hoang, Gustavo Carneiro
Another approach explored in the field relies on an ad-hoc linearization (in terms of N) of the triplet loss that introduces class centroids, which must be optimized using the whole training set for each mini-batch; this means that a naive implementation of this approach has run-time complexity O(N^2).
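A hedged PyTorch sketch of a centroid-based loss in this spirit; unlike the approach described above, the centroids here are computed per mini-batch rather than optimized over the whole training set, and the formulation is illustrative only:

```python
import torch
import torch.nn.functional as F

def centroid_triplet_loss(embeddings, labels, margin=0.2):
    """Pull each sample toward its class centroid, push it from the nearest
    other-class centroid; assumes at least two classes per batch."""
    classes = labels.unique()
    centroids = torch.stack([embeddings[labels == c].mean(dim=0)
                             for c in classes])
    d = torch.cdist(embeddings, centroids)        # (N, C) distances
    same = labels[:, None] == classes[None, :]    # (N, C) own-class mask
    pos = d[same]                                 # distance to own centroid
    neg = d.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(pos - neg + margin).mean()
```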
1 code implementation • NeurIPS 2017 • Toan Tran, Trung Pham, Gustavo Carneiro, Lyle Palmer, Ian Reid
Data augmentation is an essential part of the training process applied to deep learning models.
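For context, a conventional augmentation pipeline of the kind such training relies on, written with torchvision; the specific transforms are illustrative defaults, not the paper's approach:

```python
from torchvision import transforms

# standard label-preserving image augmentations applied during training
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```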