no code implementations • 19 Mar 2024 • Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le
Ample theoretical and empirical evidence has long supported the success of ensemble learning.
no code implementations • 18 Mar 2024 • Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions.
no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
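The contrastive idea can be sketched with an InfoNCE-style objective: treat the positive as the "correct class" in a softmax over cosine similarities, so the loss is low when the anchor aligns with its positive and high otherwise. This is a minimal illustration of the paradigm, not the papers' actual method; the vectors and temperature are hypothetical.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Pull the anchor toward its positive; push it away from negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Treat the positive as index 0 in a softmax over all candidates.
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / temperature
    return -logits[0] + np.log(np.exp(logits).sum())  # cross-entropy

anchor = np.array([1.0, 0.0])
good_pos = np.array([1.0, 0.0])      # aligned with the anchor
bad_pos = np.array([0.0, 1.0])      # orthogonal to the anchor
negatives = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]

low = info_nce_loss(anchor, good_pos, negatives)
high = info_nce_loss(anchor, bad_pos, negatives)
print(low < high)  # → True: an aligned positive yields a lower loss
```

No labels are needed: the positive is typically another augmented view of the same input, which is what makes the training self-supervised.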
1 code implementation • 26 Apr 2023 • Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung
The key factor in the success of adversarial training is the ability to generate qualified and divergent adversarial examples that satisfy certain objectives (e.g., adversarial examples that maximize the model losses so as to attack multiple models simultaneously).
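The "attack multiple models at once" objective can be illustrated with a one-step FGSM-style perturbation that follows the sign of the summed loss gradient across models. This is a hedged toy sketch on hypothetical linear (logistic-regression) models, not the paper's actual generation procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_fgsm(x, y, models, eps):
    """One-step perturbation along the sign of the summed loss gradient,
    so a single adversarial example targets every model at once."""
    grad = sum((sigmoid(w @ x) - y) * w for w in models)  # d(CE)/dx, summed
    return x + eps * np.sign(grad)

models = [np.array([2.0, -1.0]), np.array([1.0, 0.5])]  # toy linear models
x, y = np.array([1.0, 0.5]), 1.0                        # clean input, label 1

def ce(w, x):
    return -np.log(sigmoid(w @ x))  # cross-entropy for true label 1

x_adv = joint_fgsm(x, y, models, eps=0.5)
print(all(ce(w, x_adv) > ce(w, x) for w in models))  # → True
```

Summing (or averaging) the per-model gradients is the simplest way to encode "maximize the losses of several models simultaneously" as a single objective; iterative variants such as PGD repeat this step under a norm constraint.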
1 code implementation • 25 Jan 2021 • Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung
Central to this approach is the selection of positive (similar) and negative (dissimilar) sets, which gives the model the opportunity to 'contrast' between data and class representation in the latent space.
1 code implementation • 21 Sep 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung
An important technique of this approach is to control the transferability of adversarial examples among ensemble members.
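Transferability can be made concrete by crafting an adversarial example against one ensemble member and checking whether it also flips a sibling member's prediction. The toy logistic-regression "members" below are illustrative assumptions, not the paper's models or its control mechanism.

```python
import numpy as np

def predict(w, x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))  # sigmoid score for class 1

def fgsm(w, x, y, eps):
    grad_x = (predict(w, x) - y) * w       # cross-entropy gradient w.r.t. x
    return x + eps * np.sign(grad_x)

w_a = np.array([2.0, -1.0])                # ensemble member A
w_b = np.array([1.5, -0.5])                # ensemble member B (similar weights)
x, y = np.array([1.0, 0.5]), 1.0           # clean input, both predict class 1

x_adv = fgsm(w_a, x, y, eps=1.0)           # attack crafted on A only
fools_a = predict(w_a, x_adv) < 0.5        # does it flip A's prediction?
fools_b = predict(w_b, x_adv) < 0.5        # does it transfer to B?
print(fools_a, fools_b)  # → True True
```

Because the two members' weights are highly correlated here, the example transfers; reducing that correlation among members (so `fools_b` becomes False) is the intuition behind controlling transferability within an ensemble.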
1 code implementation • ECCV 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung
The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.