no code implementations • 25 Feb 2024 • Tam Nguyen, César A. Uribe, Tan M. Nguyen, Richard G. Baraniuk
Motivated by this control framework, we derive a novel class of transformers, PID-controlled Transformer (PIDformer), aimed at improving robustness and mitigating the rank-collapse issue inherent in softmax transformers.
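For background, the control framework referenced here is the classical proportional-integral-derivative (PID) feedback law; a standard statement (context only, not the paper's own derivation) is:

```latex
% Classical PID feedback law (standard control-theory form):
% u(t): control signal, e(t): tracking error,
% K_P, K_I, K_D: proportional, integral, and derivative gains.
u(t) = K_P \, e(t) + K_I \int_0^t e(\tau) \, \mathrm{d}\tau + K_D \, \frac{\mathrm{d}e(t)}{\mathrm{d}t}
```

Roughly, the proportional term reacts to the current error, the integral term removes steady-state error, and the derivative term damps overshoot; the snippet indicates the paper carries this feedback structure over to transformer dynamics.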
no code implementations • 1 Dec 2023 • Tam Nguyen, Tan M. Nguyen, Richard G. Baraniuk
Transformers have achieved remarkable success in a wide range of natural language processing and computer vision applications.
no code implementations • 6 Nov 2023 • Tuan Nguyen, Tam Nguyen, Vinh Nguyen, Tan M. Nguyen
$p$-Laplacian regularization, rooted in graph and image signal processing, introduces a parameter $p$ to control the regularization effect on these data.
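As background (a standard graph-signal-processing definition, not necessarily the paper's exact formulation), the $p$-Laplacian regularizer of a signal $f$ on a weighted graph is commonly written as:

```latex
% Graph p-Laplacian regularization of a signal f,
% where w_{ij} >= 0 is the weight of edge (i, j):
S_p(f) = \frac{1}{2} \sum_{i,j} w_{ij} \, \lVert f_i - f_j \rVert^p
```

Setting $p = 2$ recovers ordinary Laplacian (Tikhonov) smoothing, while $p \to 1$ favors piecewise-constant, edge-preserving solutions; this is the sense in which $p$ controls the strength and character of the regularization.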
no code implementations • 1 Jun 2022 • Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho
Multi-head attention underpins the recent success of transformers, the state-of-the-art models for sequence modeling and beyond.
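For reference, a minimal NumPy sketch of standard multi-head self-attention, the textbook mechanism the snippet refers to (shapes and names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Standard multi-head self-attention (textbook form, for context).

    X:          (seq_len, d_model) input sequence
    Wq, Wk, Wv: (d_model, d_model) query/key/value projections
    Wo:         (d_model, d_model) output projection
    """
    seq_len, d_model = X.shape
    d_head = d_model // n_heads

    # Project the input and split it into heads: (n_heads, seq_len, d_head)
    def split(W):
        return (X @ W).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(Wq), split(Wk), split(Wv)

    # Scaled dot-product attention, computed per head
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (n_heads, seq, seq)
    A = softmax(scores, axis=-1)
    heads = A @ V                                        # (n_heads, seq, d_head)

    # Concatenate the heads and apply the output projection
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo
```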
1 code implementation • 16 Oct 2021 • Tam Nguyen, Tan M. Nguyen, Dung D. Le, Duy Khuong Nguyen, Viet-Anh Tran, Richard G. Baraniuk, Nhat Ho, Stanley J. Osher
Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head.
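A hedged sketch of the idea as described in the snippet: each head keeps several Gaussian-distributed keys per position and scores a query against their mixture. The function name, shapes, and the Gaussian-kernel score below are illustrative assumptions, not Transformer-MGK's published formulation:

```python
import numpy as np

def mgk_attention_scores(Q, K_mix, pi, sigma2=1.0):
    """Illustrative attention weights from a mixture of Gaussian keys.

    Q:     (seq, d)     queries
    K_mix: (M, seq, d)  M candidate keys per position
    pi:    (M,)         mixture weights (assumed to sum to 1)
    """
    seq, d = Q.shape
    scores = np.zeros((seq, seq))
    for pim, Km in zip(pi, K_mix):
        # Squared distance between every query and this mixture
        # component's key at each position
        diff = Q[:, None, :] - Km[None, :, :]             # (seq, seq, d)
        sqdist = (diff ** 2).sum(-1)
        scores += pim * np.exp(-sqdist / (2.0 * sigma2))  # unnormalized Gaussian kernel
    # Normalize rows so each query's attention weights sum to 1
    return scores / scores.sum(axis=1, keepdims=True)
```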
no code implementations • 22 Jul 2021 • Tam Nguyen, Raviv Raich
Due to the partial availability of bag-level labels, we focus on the incomplete-label multi-instance multi-label (MIML) setting for the proposed active learning approach.
no code implementations • NeurIPS 2019 • Tam Nguyen, Maximilian Dax, Chaithanya Kumar Mummadi, Nhung Ngo, Thi Hoai Phuong Nguyen, Zhongyu Lou, Thomas Brox
Alternative unsupervised approaches rely on careful selection of multiple handcrafted saliency methods to generate noisy pseudo-ground-truth labels.