Search Results for author: Tiejin Chen

Found 7 papers, 2 papers with code

Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape

no code implementations · 9 Mar 2024 · Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei

This paper delves into the critical area of deep learning robustness, challenging the conventional belief that classification robustness and explanation robustness in image classification systems are inherently correlated.

Classification · Image Classification

Privacy-preserving Fine-tuning of Large Language Models through Flatness

no code implementations · 7 Mar 2024 · Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei

Privacy concerns associated with Large Language Models (LLMs) have grown recently with the development of LLMs such as ChatGPT.

Knowledge Distillation · Privacy Preserving · +3

When eBPF Meets Machine Learning: On-the-fly OS Kernel Compartmentalization

no code implementations · 11 Jan 2024 · Zicheng Wang, Tiejin Chen, Qinrun Dai, Yueqi Chen, Hua Wei, Qingkai Zeng

Compartmentalization effectively prevents initial corruption from turning into a successful attack.

Uncertainty Regularized Evidential Regression

1 code implementation · 3 Jan 2024 · Kai Ye, Tiejin Chen, Hua Wei, Liang Zhan

The Evidential Regression Network (ERN) represents a novel approach that integrates deep learning with Dempster-Shafer theory to predict a target and quantify the associated uncertainty.

Regression
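As background for how an evidential regression head quantifies uncertainty, the sketch below shows the standard deep-evidential-regression recipe that ERN builds on: the network predicts the four parameters of a Normal-Inverse-Gamma distribution, from which a point prediction plus aleatoric and epistemic uncertainties fall out in closed form. This is a minimal illustration of the general technique, not the paper's specific regularized formulation.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Given Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)
    predicted by an evidential regression head, return the point
    prediction and the two standard uncertainty estimates."""
    prediction = gamma                        # E[mu]: the target estimate
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic

pred, alea, epis = evidential_uncertainty(gamma=2.0, nu=4.0, alpha=3.0, beta=1.0)
# pred = 2.0, alea = 0.5, epis = 0.125
```

Note that epistemic uncertainty shrinks as the "virtual evidence count" nu grows, which is what lets a single forward pass separate model uncertainty from data noise.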

Open-TI: Open Traffic Intelligence with Augmented Language Model

1 code implementation · 30 Dec 2023 · Longchao Da, Kuanru Liou, Tiejin Chen, Xuesong Zhou, Xiangyong Luo, Yezhou Yang, Hua Wei

Transportation has greatly benefited the development of cities throughout the process of modern civilization.

Language Modelling

Federated Learning with Projected Trajectory Regularization

no code implementations · 22 Dec 2023 · Tiejin Chen, Yuanpu Cao, Yujia Wang, Cho-Jui Hsieh, Jinghui Chen

Specifically, FedPTR allows local clients or the server to optimize an auxiliary (synthetic) dataset that mimics the learning dynamics of the recent model update and utilizes it to project the next-step model trajectory for local training regularization.

Federated Learning
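The abstract describes projecting a next-step model trajectory from an auxiliary dataset's gradient and using it to regularize local training. The sketch below is a heavily simplified, hypothetical rendering of that idea (the names `fedptr_local_step`, `grad_aux`, and the quadratic penalty form are illustrative assumptions, not the paper's actual algorithm): the client takes one step along the auxiliary gradient to get a projected model, then penalizes local updates that drift from it.

```python
import numpy as np

def fedptr_local_step(w, grad_task, grad_aux, eta=0.1, lam=0.5):
    """One illustrative local step: project the next-step trajectory
    implied by the auxiliary (synthetic) dataset's gradient, then pull
    the local task update toward that projection."""
    w_proj = w - eta * grad_aux              # projected next-step model
    # gradient of (lam/2) * ||w - w_proj||^2 is lam * (w - w_proj)
    grad_total = grad_task + lam * (w - w_proj)
    return w - eta * grad_total

w_new = fedptr_local_step(np.array([0.0]), grad_task=np.array([1.0]),
                          grad_aux=np.array([1.0]))
# w_new = [-0.105]: the local step is nudged toward the projected trajectory
```

The point of the projection term is to keep heterogeneous local updates aligned with the recent global learning dynamics rather than letting each client drift toward its own optimum.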

Learning Sparsity and Randomness for Data-driven Low Rank Approximation

no code implementations · 15 Dec 2022 · Tiejin Chen, Yicheng Tao

With values learned by learning-based algorithms and fixed non-zero positions, these sketch matrices can significantly reduce the test error of low-rank approximation.
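For context on the pipeline such learned sketch matrices plug into, the snippet below shows the generic sketch-based low-rank approximation scheme: compress `A` with a sketch matrix `S`, take the top-`k` right singular subspace of `S @ A`, and project `A` onto it. A dense random `S` is used here purely for illustration; a learned approach would fix the sparse non-zero pattern and train the values.

```python
import numpy as np

def sketched_low_rank(A, S, k):
    """Generic sketch-and-solve low-rank approximation: the row space of
    S @ A approximates the dominant right singular subspace of A."""
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)
    Vk = Vt[:k]                   # top-k right singular vectors of S @ A
    return A @ Vk.T @ Vk          # rank-(at most k) approximation of A

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
S = rng.standard_normal((10, 100))   # illustrative dense sketch; a learned
                                     # one would be sparse with trained values
Ak = sketched_low_rank(A, S, k=5)    # same shape as A, rank <= 5
```

The appeal of the sketch is speed: the SVD runs on the small matrix `S @ A` instead of `A`, and learning `S` from data drives down the approximation error that a purely random sketch would incur.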
