Search Results for author: Tian Yu Liu

Found 15 papers, 6 papers with code

Meaning Representations from Trajectories in Autoregressive Models

1 code implementation23 Oct 2023 Tian Yu Liu, Matthew Trager, Alessandro Achille, Pramuditha Perera, Luca Zancato, Stefano Soatto

We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.

Semantic Similarity · Semantic Textual Similarity
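
A rough sketch of the trajectory idea (my paraphrase, not the authors' code): represent a prompt by continuations sampled from a small causal LM, then score a second prompt by how well it explains those same continuations. The model choice (`gpt2`), sample sizes, and scoring rule are illustrative assumptions.

```python
# Hedged sketch: compare texts via the distribution of their continuations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_logprob(context: str, continuation_ids: torch.Tensor) -> float:
    """Average log-probability of a continuation given a context."""
    ctx = tok(context, return_tensors="pt").input_ids
    ids = torch.cat([ctx, continuation_ids], dim=1)
    with torch.no_grad():
        logp = lm(ids).logits[0, :-1].log_softmax(-1)
    tgt = ids[0, 1:]
    tok_lp = logp[torch.arange(len(tgt)), tgt]
    return tok_lp[ctx.shape[1] - 1:].mean().item()  # continuation tokens only

def trajectory_score(a: str, b: str, n=8, length=20) -> float:
    """Score how well prompt b explains trajectories sampled from prompt a."""
    ctx = tok(a, return_tensors="pt").input_ids
    samples = lm.generate(ctx, do_sample=True, max_new_tokens=length,
                          num_return_sequences=n, pad_token_id=tok.eos_token_id)
    conts = samples[:, ctx.shape[1]:]
    return sum(avg_logprob(b, c.unsqueeze(0)) for c in conts) / n
```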

AugUndo: Scaling Up Augmentations for Unsupervised Depth Completion

no code implementations15 Oct 2023 Yangchao Wu, Tian Yu Liu, Hyoungseob Park, Stefano Soatto, Dong Lao, Alex Wong

The sparse depth modality has seen even less use, as intensity transformations alter the scale of the 3D scene, and geometric transformations may decimate the sparse points during resampling.

Data Augmentation · Depth Completion +1
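
A minimal sketch of the "undo" idea the title suggests, with a horizontal flip standing in for the full family of augmentations: predict on the augmented image, invert the transform on the prediction, and compute the loss against the untouched sparse depth so no points are decimated.

```python
# Hedged sketch: augment the image, then undo the transform before the loss.
import torch

def augment_then_undo(model, image, sparse_depth, loss_fn):
    pred = model(torch.flip(image, dims=[-1]))  # predict on the flipped view
    pred = torch.flip(pred, dims=[-1])          # undo the flip on the output
    mask = sparse_depth > 0                     # valid sparse points only
    return loss_fn(pred[mask], sparse_depth[mask])
```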

Sub-token ViT Embedding via Stochastic Resonance Transformers

no code implementations6 Oct 2023 Dong Lao, Yangchao Wu, Tian Yu Liu, Alex Wong, Stefano Soatto

We term our method "Stochastic Resonance Transformer" (SRT), which we show can effectively super-resolve features of pre-trained ViTs, capturing more of the local fine-grained structures that might otherwise be neglected as a result of tokenization.

Depth Estimation · Depth Prediction +6
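
A minimal sketch of the stochastic-resonance idea as I read it (not the authors' code): jitter the input by pixel shifts smaller than one token, extract features for each shift, undo the shift, and average; `vit` is assumed to return a spatial feature map.

```python
# Hedged sketch: sub-token feature super-resolution by shift-and-average.
import torch
import torch.nn.functional as F

def srt_features(vit, image, shifts=((0, 0), (4, 0), (0, 4), (4, 4))):
    """image: (1, 3, H, W); vit returns (1, C, H/p, W/p) token features."""
    acc = 0
    for dy, dx in shifts:  # offsets smaller than the patch size p
        shifted = torch.roll(image, (dy, dx), dims=(-2, -1))
        feats = vit(shifted)  # coarse, token-resolution features
        up = F.interpolate(feats, size=image.shape[-2:], mode="bilinear")
        acc = acc + torch.roll(up, (-dy, -dx), dims=(-2, -1))  # undo shift
    return acc / len(shifts)  # averaged sub-token-resolution features
```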

Tangent Transformers for Composition, Privacy and Removal

no code implementations16 Jul 2023 Tian Yu Liu, Aditya Golatkar, Stefano Soatto

We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning linearized transformers obtained by computing a first-order Taylor expansion around a pre-trained initialization.

Machine Unlearning
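
A minimal sketch of fine-tuning a linearized model, with a toy linear layer standing in for a pre-trained transformer: train only the tangent direction dw of the first-order expansion f(x; w0) + J_w f(x; w0) dw. Names and the plain-SGD loop are illustrative, not the paper's code.

```python
# Hedged sketch: tangent (linearized) fine-tuning via a single jvp.
import torch
from torch.func import functional_call, grad, jvp

model = torch.nn.Linear(16, 4)  # stand-in for a pre-trained transformer
w0 = {k: v.detach() for k, v in model.named_parameters()}
dw = {k: torch.zeros_like(v) for k, v in w0.items()}  # trainable tangent

def loss_fn(dw, x, y):
    # f_lin(x) = f(x; w0) + J_w f(x; w0) dw, linear in dw by construction
    out, jvp_out = jvp(lambda p: functional_call(model, p, (x,)), (w0,), (dw,))
    return torch.nn.functional.cross_entropy(out + jvp_out, y)

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
for _ in range(100):  # plain SGD on the tangent parameters only
    grads = grad(loss_fn)(dw, x, y)
    dw = {k: dw[k] - 0.1 * grads[k] for k in dw}
```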

Tangent Model Composition for Ensembling and Continual Fine-tuning

no code implementations ICCV 2023 Tian Yu Liu, Stefano Soatto

Component models are composed at inference time via scalar combination, reducing the cost of ensembling to that of a single model.

Incremental Learning
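
Since composition is by scalar combination, it reduces to arithmetic on state dicts; the increment form below (deltas from a shared pre-trained w0) is an illustrative reading, not necessarily the paper's exact rule.

```python
# Hedged sketch: compose fine-tuned components into a single set of weights.
def compose(w0, task_weights, coeffs):
    """w0: shared pre-trained state_dict; task_weights: fine-tuned state_dicts."""
    composed = {}
    for k in w0:
        delta = sum(c * (tw[k] - w0[k]) for c, tw in zip(coeffs, task_weights))
        composed[k] = w0[k] + delta  # one model, ensemble-free inference
    return composed  # use via model.load_state_dict(composed)
```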

Taming AI Bots: Controllability of Neural States in Large Language Models

no code implementations29 May 2023 Stefano Soatto, Paulo Tabuada, Pratik Chaudhari, Tian Yu Liu

We then characterize the subset of meanings that can be reached by the state of the LLM for some input prompt, and show that a well-trained bot can reach any meaning, albeit with small probability.
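
One way to formalize the reachability claim, in notation of my own choosing (the paper's definitions may differ):

```latex
% Hedged sketch: the bot as a discrete-time system on token contexts x_t,
% driven by input prompts u_t; a "meaning" m is a set of equivalent states.
\[
x_{t+1} = f(x_t, u_t), \qquad
\mathcal{R}(x_0) = \bigl\{\, m \in \mathcal{M} : \exists\, u_0, \dots, u_T
\ \text{such that}\ x_{T+1} \in m \,\bigr\}.
\]
% The claim then reads: for a well-trained bot, $\mathcal{R}(x_0) = \mathcal{M}$,
% although some meanings are reached only with small probability under sampling.
```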

Train/Test-Time Adaptation with Retrieval

no code implementations CVPR 2023 Luca Zancato, Alessandro Achille, Tian Yu Liu, Matthew Trager, Pramuditha Perera, Stefano Soatto

Second, we apply ${\rm T^3AR}$ for test-time adaptation and show that exploiting a pool of external images at test time leads to more robust representations than existing methods on DomainNet-126 and VISDA-C, especially when little adaptation data is available (up to 8% improvement).

Retrieval · Test-time Adaptation
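
A minimal sketch of the retrieval step as I understand it (not the T^3AR code): pull the external images nearest to the test batch in feature space, then adapt the encoder on them with a self-supervised objective supplied by the caller.

```python
# Hedged sketch: retrieve neighbors from an external pool, adapt on them.
import torch
import torch.nn.functional as F

def retrieve(encoder, test_images, pool_images, k=16):
    with torch.no_grad():
        q = F.normalize(encoder(test_images), dim=-1)  # (B, D) queries
        p = F.normalize(encoder(pool_images), dim=-1)  # (N, D) external pool
    idx = (q @ p.T).topk(k, dim=-1).indices.flatten().unique()
    return pool_images[idx]  # nearest neighbors for adaptation

def adapt(encoder, retrieved, ssl_loss, steps=10, lr=1e-4):
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(steps):
        loss = ssl_loss(encoder, retrieved)  # e.g. a contrastive objective
        opt.zero_grad(); loss.backward(); opt.step()
```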

Integral Continual Learning Along the Tangent Vector Field of Tasks

no code implementations23 Nov 2022 Tian Yu Liu, Aditya Golatkar, Stefano Soatto, Alessandro Achille

We propose a lightweight continual learning method which incorporates information from specialized datasets incrementally, by integrating it along the vector field of "generalist" models.

Continual Learning
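
A loose sketch of my reading of that one-line summary (not the paper's algorithm): each specialist dataset contributes a small weight increment that is accumulated on top of the generalist model.

```python
# Hedged sketch: accumulate per-task increments on a generalist model.
def integrate_tasks(w, datasets, fit_increment):
    """w: generalist state_dict; fit_increment returns a small delta per task."""
    for data in datasets:                    # tasks arrive incrementally
        delta = fit_increment(w, data)       # e.g. tangent-space fine-tuning
        w = {k: w[k] + delta[k] for k in w}  # integrate along the field
    return w
```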

Not All Poisons are Created Equal: Robust Training against Data Poisoning

2 code implementations18 Oct 2022 Yu Yang, Tian Yu Liu, Baharan Mirzasoleiman

Data poisoning causes misclassification of test-time target examples by injecting maliciously crafted samples into the training data.

Data Poisoning

Data-Efficient Augmentation for Training Neural Networks

1 code implementation15 Oct 2022 Tian Yu Liu, Baharan Mirzasoleiman

To address this, we propose a rigorous technique to select subsets of data points that when augmented, closely capture the training dynamics of full data augmentation.

Data Augmentation
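
A hedged sketch of the selection idea: choose points whose augmented-gradient sum best matches the full-data gradient. The greedy matching-pursuit criterion below is an illustrative stand-in, not the paper's exact algorithm.

```python
# Hedged sketch: greedy subset selection by gradient matching.
import torch

def select_subset(per_point_grads, budget):
    """per_point_grads: (N, D) flattened gradients of augmented points."""
    target = per_point_grads.mean(0)  # full-augmentation gradient direction
    chosen, residual = [], target.clone()
    for _ in range(budget):
        scores = per_point_grads @ residual
        if chosen:
            scores[chosen] = float("-inf")  # don't pick a point twice
        chosen.append(int(scores.argmax()))
        residual = target - per_point_grads[chosen].mean(0)
    return chosen
```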

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks

1 code implementation14 Aug 2022 Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman

We make the key observation that attacks introduce local sharp regions of high training loss which, when minimized, result in learning the adversarial perturbations and make the attack successful.

Data Poisoning
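
A hedged sketch of "friendly" noise consistent with that observation: per-image perturbations optimized to be as large as possible while leaving the model's predictions nearly unchanged, which smooths the sharp high-loss regions poisons rely on. Hyperparameters are illustrative.

```python
# Hedged sketch: maximize noise magnitude subject to unchanged predictions.
import torch
import torch.nn.functional as F

def friendly_noise(model, images, eps=16 / 255, steps=30, lr=0.01):
    noise = torch.zeros_like(images, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    with torch.no_grad():
        clean = model(images).softmax(-1)  # reference predictions
    for _ in range(steps):
        out = model(images + noise)
        # keep predictions close (KL term) while pushing the noise to grow
        loss = F.kl_div(out.log_softmax(-1), clean, reduction="batchmean") \
               - 0.1 * noise.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            noise.clamp_(-eps, eps)  # keep the perturbation bounded
    return noise.detach()
```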

Monitored Distillation for Positive Congruent Depth Completion

1 code implementation30 Mar 2022 Tian Yu Liu, Parth Agrawal, Allison Chen, Byung-Woo Hong, Alex Wong

In the absence of ground truth for model selection and training, our method, termed Monitored Distillation, allows a student to exploit a blind ensemble of teachers by selectively learning from predictions that best minimize the reconstruction error for a given image.

Depth Completion · Image Reconstruction +2
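
The selection rule lends itself to a per-pixel sketch (the reconstruction criterion is the paper's; the shapes and per-pixel granularity here are my assumptions):

```python
# Hedged sketch: distill from whichever teacher best reconstructs the image.
import torch

def monitored_target(teacher_depths, recon_error):
    """teacher_depths, recon_error: (T, B, 1, H, W); recon_error is the
    image-reconstruction error induced by each teacher's prediction."""
    best = recon_error.argmin(dim=0, keepdim=True)  # best teacher per pixel
    return torch.gather(teacher_depths, 0, best).squeeze(0)  # student target
```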

Triplet Contrastive Learning for Brain Tumor Classification

no code implementations8 Aug 2021 Tian Yu Liu, Jiashi Feng

Brain tumors are a common and fatal form of cancer that affects both adults and children.

Classification · Contrastive Learning +2
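
A minimal sketch of a triplet contrastive objective of the kind the title names (the paper's sampling scheme and backbone are omitted):

```python
# Hedged sketch: pull same-class embeddings together, push others apart.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """(B, D) embeddings: an image, a same-class image, a different-class image."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```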
