1 code implementation • 1 Dec 2023 • Cuong N. Nguyen, Phong Tran, Lam Si Tung Ho, Vu Dinh, Anh T. Tran, Tal Hassner, Cuong V. Nguyen
We consider transferability estimation, the problem of estimating how well deep learning models transfer from a source to a target task.
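For context, a minimal sketch of one well-known family of transferability scores: a LEEP-style estimate that measures the log-likelihood of target labels under an "empirical predictor" built from the source model's outputs. This is an illustrative instance of the problem setting, not necessarily the estimator studied in the paper; `source_probs` is assumed to be the source model's softmax output on the target data.

```python
import numpy as np

def leep_style_score(source_probs, target_labels, num_target_classes):
    """LEEP-style transferability score (illustrative, not the paper's estimator).

    source_probs: (n, K_s) softmax outputs of the source model on target data.
    target_labels: (n,) integer labels for the target task.
    """
    n, k_s = source_probs.shape
    # Empirical joint distribution P(y, z) of target labels y and source classes z.
    joint = np.zeros((num_target_classes, k_s))
    for i in range(n):
        joint[target_labels[i]] += source_probs[i]
    joint /= n
    # Conditional P(y | z) = P(y, z) / P(z).
    p_z = np.clip(joint.sum(axis=0, keepdims=True), 1e-12, None)
    cond = joint / p_z
    # Score: average log-likelihood of the empirical predictor on the target data.
    eep = source_probs @ cond.T                      # (n, num_target_classes)
    return np.mean(np.log(np.clip(eep[np.arange(n), target_labels], 1e-12, None)))
```

Higher scores suggest easier transfer; scores of this kind are cheap to compute because they need only a single forward pass of the source model over the target data.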
no code implementations • 13 Sep 2022 • Cuong N. Nguyen, Lam Si Tung Ho, Vu Dinh, Tal Hassner, Cuong V. Nguyen
We derive and analyze new generalization bounds for deep learning models trained by transfer learning from a source to a target task.
1 code implementation • 26 Jul 2022 • Nhat L. Vu, Thanh P. Nguyen, Binh T. Nguyen, Vu Dinh, Lam Si Tung Ho
Reconstructing the ancestral state of a group of species helps answer many important questions in evolutionary biology.
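As one concrete instance of the reconstruction problem, here is a minimal Fitch-parsimony sketch for a single discrete character on a rooted binary tree. This classical algorithm is given purely for illustration; the paper treats ancestral state reconstruction in a more general statistical setting.

```python
def fitch_candidates(tree, leaf_states, node):
    """Fitch-parsimony bottom-up pass: candidate ancestral states at `node`.

    tree: dict mapping each internal node to its (left, right) children.
    leaf_states: dict mapping each leaf to its observed discrete state.
    """
    if node not in tree:                               # leaf
        return {leaf_states[node]}
    left, right = tree[node]
    a = fitch_candidates(tree, leaf_states, left)
    b = fitch_candidates(tree, leaf_states, right)
    return a & b if a & b else a | b                   # the Fitch rule

# Example: tree ((A,B),C) with observed states A=0, B=1, C=1.
tree = {"root": ("n1", "C"), "n1": ("A", "B")}
states = {"A": 0, "B": 1, "C": 1}
print(fitch_candidates(tree, states, "root"))          # {1}
```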
no code implementations • 19 Nov 2021 • Lam Si Tung Ho, Binh T. Nguyen, Vu Dinh, Duy Nguyen
We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator can achieve a fast learning rate.
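For context, the generalized (Gibbs) posterior referred to here is commonly written in the following form; the notation below is generic and may differ from the paper's.

```latex
% Generalized (Gibbs) posterior: the prior reweighted by exponentiated empirical risk.
\[
  \pi_n(h \mid Z_{1:n}) \;\propto\; \pi(h)\,\exp\!\bigl(-\lambda\, n\, R_n(h)\bigr),
  \qquad
  R_n(h) = \frac{1}{n}\sum_{i=1}^{n} \ell(h, Z_i),
\]
% where \pi is the prior, \lambda > 0 is a learning-rate parameter, and R_n is
% the empirical risk; taking \ell to be the negative log-likelihood and
% \lambda = 1 recovers the usual Bayesian posterior.
```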
no code implementations • 14 Nov 2021 • Lam Si Tung Ho, Vu Dinh
Notably, we show that for a sequence of nested trees with bounded heights, the necessary and sufficient conditions for the existence of a consistent ancestral state reconstruction method under discrete models, the Brownian motion model, and the threshold model are equivalent.
no code implementations • 27 Sep 2021 • Lam Si Tung Ho, Vu Dinh
Large neural network models have high predictive power but may suffer from overfitting if the training set is not large enough.
no code implementations • 31 May 2021 • Binh T. Nguyen, Duy M. Nguyen, Lam Si Tung Ho, Vu Dinh
In this work, we introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.
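A minimal sketch of that formulation, under simplifying assumptions (the map `f` and the target set Y = [0, 1] below are hypothetical, and the classifier choice is illustrative rather than the paper's exact algorithm): sample the search domain, label each point by whether f(x) lands in Y, and fit a classifier whose positive region approximates the inverse set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def f(x):                          # hypothetical map; any vector-valued f works
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(5000, 2))      # sample the search domain
y_train = (f(X_train) <= 1.0).astype(int)         # label: is f(x) in Y = [0, 1]?

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# The learned positive decision region approximates the inverse set f^{-1}(Y).
print(clf.predict([[0.5, 0.5], [1.5, 1.5]]))      # expect [1 0]
```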
no code implementations • 4 May 2021 • Lam Si Tung Ho, Vu Dinh
Supertree methods are tree reconstruction techniques that combine several smaller gene trees (possibly on different sets of species) to build a larger species tree.
1 code implementation • NeurIPS 2020 • Vu Dinh, Lam Si Tung Ho
One of the most important steps toward interpretability and explainability of neural network models is feature selection, which aims to identify the subset of relevant features.
no code implementations • 30 May 2020 • Vu Dinh, Lam Si Tung Ho
In this work, we propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks.
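A minimal sketch of the idea, assuming a feedforward network in which each input feature's outgoing first-layer weights form one group; the adaptive weights below (typically inverse group norms from a preliminary unpenalized fit) are a simplified stand-in for the paper's procedure, not a verbatim transcription of it.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

X, y = torch.randn(256, 20), torch.randn(256, 1)
adaptive_w = torch.ones(20)        # e.g. 1 / (initial group norm) in practice
lam = 1e-2

for _ in range(200):
    opt.zero_grad()
    mse = nn.functional.mse_loss(net(X), y)
    W1 = net[0].weight             # shape (64, 20); column j <-> input feature j
    group_norms = W1.norm(dim=0)   # one norm per input feature
    loss = mse + lam * (adaptive_w * group_norms).sum()
    loss.backward()
    opt.step()

# Features whose first-layer column norm is ~0 are deemed irrelevant.
selected = (net[0].weight.norm(dim=0) > 1e-3).nonzero().squeeze()
```

Plain (sub)gradient descent as above only shrinks irrelevant groups toward zero; a proximal update on the group norms would yield exact zeros.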
no code implementations • 4 Jun 2019 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
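A minimal sketch of this pool-based setting with an abstaining labeler, using plain uncertainty sampling; the query strategy and the abstention model here are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(2000, 5))
true_w = rng.normal(size=5)
y_pool = (X_pool @ true_w > 0).astype(int)        # synthetic ground truth

def labeler(i, abstain_rate=0.3):
    """Returns the label, or None if the labeler abstains."""
    return None if rng.random() < abstain_rate else y_pool[i]

labeled_X, labeled_y, unqueried = [], [], set(range(len(X_pool)))
clf = LogisticRegression()

for _ in range(100):                              # query budget
    if len(set(labeled_y)) >= 2:
        clf.fit(np.array(labeled_X), np.array(labeled_y))
        idxs = list(unqueried)
        probs = clf.predict_proba(X_pool[idxs])[:, 1]
        i = idxs[int(np.argmin(np.abs(probs - 0.5)))]   # uncertainty sampling
    else:
        i = int(rng.choice(list(unqueried)))      # bootstrap with random queries
    unqueried.discard(i)
    y = labeler(i)
    if y is not None:                             # abstentions yield no label
        labeled_X.append(X_pool[i])
        labeled_y.append(y)
```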
1 code implementation • 28 May 2018 • Cheng Zhang, Vu Dinh, Frederick A. Matsen IV
Phylogenetic tree inference using deep DNA sequencing is reshaping our understanding of rapidly evolving systems, such as the within-host battle between viruses and the immune system.
no code implementations • 23 May 2017 • Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Vu Dinh, Binh Nguyen
We study pool-based active learning with abstention feedbacks, where a labeler can abstain from labeling a queried example with some unknown abstention rate.
3 code implementations • ICML 2017 • Vu Dinh, Arman Bilge, Cheng Zhang, Frederick A. Matsen IV
Hamiltonian Monte Carlo (HMC) is an efficient and effective means of sampling posterior distributions on Euclidean space, and it has been extended to manifolds with boundary.
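For context, a minimal Euclidean HMC step with leapfrog integration, in its standard textbook form; the paper's contribution concerns extending such samplers across the boundaries between tree topologies in phylogenetic space, which this sketch does not attempt.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog: half momentum step, alternating full steps, half momentum step.
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    # Metropolis correction for the integrator's discretization error.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) \
               - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < log_accept else x

# Example: sample a standard Gaussian.
x = np.zeros(3)
for _ in range(1000):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z)
```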
no code implementations • NeurIPS 2016 • Vu Dinh, Lam Si Tung Ho, Duy Nguyen, Binh T. Nguyen
We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails.
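For context, the standard Bernstein condition, of which the paper's multi-scale variant is a generalization (notation here is generic):

```latex
% Standard Bernstein condition: the variance of the excess loss is controlled
% by a power of its mean, uniformly over the hypothesis class.
\[
  \mathbb{E}\bigl[(\ell(h, Z) - \ell(h^*, Z))^2\bigr]
  \;\le\; B\,\bigl(\mathbb{E}[\ell(h, Z) - \ell(h^*, Z)]\bigr)^{\beta},
  \qquad \beta \in (0, 1],
\]
% for all hypotheses h, where h^* is the risk minimizer; larger \beta
% yields faster learning rates.
```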
no code implementations • 12 Aug 2014 • Vu Dinh, Lam Si Tung Ho, Nguyen Viet Cuong, Duy Nguyen, Binh T. Nguyen
We prove new fast learning rates for the one-vs-all multiclass plug-in classifiers trained either from exponentially strongly mixing data or from data generated by a converging drifting distribution.
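The one-vs-all plug-in construction itself is simple: estimate each class-conditional probability with any probability estimator, then predict the argmax. The sketch below uses a random forest as the base estimator purely for illustration; the choice is an assumption, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_ova_plugin(X, y, n_classes):
    """One-vs-all plug-in classifier: estimate eta_k(x) = P(Y=k | X=x) per class."""
    models = []
    for k in range(n_classes):
        m = RandomForestClassifier(n_estimators=100, random_state=k)
        m.fit(X, (y == k).astype(int))            # binary problem: class k vs rest
        models.append(m)
    def predict(X_new):
        # Column 1 of predict_proba is the estimated P(class k | x) per model.
        scores = np.column_stack([m.predict_proba(X_new)[:, 1] for m in models])
        return scores.argmax(axis=1)              # plug-in rule: argmax_k eta_k(x)
    return predict

# Example on synthetic three-class data.
X = np.random.default_rng(0).normal(size=(600, 4))
y = np.argmax(X[:, :3], axis=1)
predict = fit_ova_plugin(X, y, n_classes=3)
print((predict(X) == y).mean())                   # training accuracy
```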
no code implementations • 12 Jun 2014 • Nguyen Viet Cuong, Lam Si Tung Ho, Vu Dinh
For the generalization of the algorithm, we prove a PAC-style bound on the training sample size required for the expected $L_1$-loss to converge to the optimal loss when the training data are V-geometrically ergodic Markov chains.