Search Results for author: Xuechen Zhang

Found 11 papers, 5 papers with code

TREACLE: Thrifty Reasoning via Context-Aware LLM and Prompt Selection

no code implementations • 17 Apr 2024 • Xuechen Zhang, Zijian Huang, Ege Onur Taga, Carlee Joe-Wong, Samet Oymak, Jiasi Chen

Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers.

Tasks: GSM8K, Navigate

Class-attribute Priors: Adapting Optimization to Heterogeneity and Fairness Objective

no code implementations • 25 Jan 2024 • Xuechen Zhang, Mingchen Li, Jiasi Chen, Christos Thrampoulidis, Samet Oymak

Confirming this, under a Gaussian mixture setting, we show that the optimal SVM classifier for balanced accuracy needs to be adaptive to the class attributes.

Tasks: Attribute, Fairness
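As a rough illustration of the claim above (a minimal sketch, not the paper's method): an SVM whose weighting adapts to one class attribute, the class frequency, improves balanced accuracy over a plain SVM on an imbalanced Gaussian mixture.

```python
# Minimal sketch: class-frequency-adaptive vs. plain linear SVM on an
# imbalanced 2-class Gaussian mixture, compared by balanced accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
n_major, n_minor = 1000, 50
X = np.vstack([rng.normal(+1.0, 1.0, size=(n_major, 2)),
               rng.normal(-1.0, 1.0, size=(n_minor, 2))])
y = np.array([0] * n_major + [1] * n_minor)

plain = LinearSVC().fit(X, y)                            # ignores class attributes
adaptive = LinearSVC(class_weight="balanced").fit(X, y)  # adapts to class frequency

for name, clf in [("plain", plain), ("adaptive", adaptive)]:
    print(name, round(balanced_accuracy_score(y, clf.predict(X)), 3))
```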

FedYolo: Augmenting Federated Learning with Pretrained Transformers

no code implementations • 10 Jul 2023 • Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit K. Roy-Chowdhury, Ananda Theertha Suresh, Samet Oymak

These insights on scale and modularity motivate a new federated learning approach we call "You Only Load Once" (FedYolo): clients load a full PTF model once, and all future updates are accomplished through communication-efficient modules with limited catastrophic forgetting, where each task is assigned to its own module.

Tasks: Federated Learning
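A hedged sketch of the modular pattern described in the snippet above (module design and names are illustrative, not the authors' implementation): the pretrained transformer is loaded once and frozen, each task gets its own small module, and only module parameters are communicated.

```python
import torch
import torch.nn as nn

class PerTaskModule(nn.Module):
    """Small residual bottleneck trained per task (illustrative design)."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down, self.up = nn.Linear(dim, rank), nn.Linear(rank, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class Client(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int, tasks: list):
        super().__init__()
        self.backbone = backbone                      # full PTF, loaded once
        for p in self.backbone.parameters():          # never updated -> limits forgetting
            p.requires_grad_(False)
        self.task_modules = nn.ModuleDict({t: PerTaskModule(dim) for t in tasks})

    def forward(self, x, task: str):
        return self.task_modules[task](self.backbone(x))

    def message_to_server(self, task: str):
        # Only the lightweight per-task module travels over the network.
        return self.task_modules[task].state_dict()
```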

Max-Margin Token Selection in Attention Mechanism

1 code implementation • NeurIPS 2023 • Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak

Interestingly, the SVM formulation of $\boldsymbol{p}$ is influenced by the support vector geometry of $\boldsymbol{v}$.
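To make the snippet concrete, a schematic version of the token-selection problem (simplified notation; a sketch rather than the paper's exact statement): with tokens $\boldsymbol{x}_1,\dots,\boldsymbol{x}_T$ and head $\boldsymbol{v}$, let $\mathrm{opt}=\arg\max_t \boldsymbol{v}^\top\boldsymbol{x}_t$; the attention weights $\boldsymbol{p}$ then solve a hard-margin problem of the form

```latex
\min_{\boldsymbol{p}} \ \|\boldsymbol{p}\|_2
\quad \text{s.t.} \quad
\boldsymbol{p}^\top\!\left(\boldsymbol{x}_{\mathrm{opt}} - \boldsymbol{x}_t\right) \ge 1
\quad \text{for all } t \neq \mathrm{opt},
```

which makes the dependence on $\boldsymbol{v}$ visible: changing $\boldsymbol{v}$ changes which token plays the role of $\mathrm{opt}$ and hence the geometry of the margin constraints on $\boldsymbol{p}$.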

Learning on Manifolds: Universal Approximations Properties using Geometric Controllability Conditions for Neural ODEs

no code implementations • 15 May 2023 • Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti

To address this shortcoming, in this paper we study a class of neural ordinary differential equations that, by design, leave a given manifold invariant, and characterize their properties by leveraging the controllability properties of control affine systems.
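An illustrative toy example of manifold invariance (not the paper's construction): projecting an ambient vector field onto the tangent space of the unit sphere yields an ODE whose flow, started on the sphere, stays on it up to discretization error.

```python
import numpy as np

A = np.array([[ 0.0, -1.0,  0.3],
              [ 1.0,  0.0, -0.2],
              [-0.3,  0.2,  0.0]])  # arbitrary smooth ambient field x -> A @ x

def tangent_field(x):
    v = A @ x
    return v - (v @ x) * x          # remove the radial component

x = np.array([1.0, 0.0, 0.0])       # start on the unit sphere
for _ in range(1000):               # forward-Euler integration
    x = x + 1e-3 * tangent_field(x)
print(np.linalg.norm(x))            # ~1.0: the sphere stays (approximately) invariant
```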

AutoBalance: Optimized Loss Functions for Imbalanced Data

1 code implementation • NeurIPS 2021 • Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, Samet Oymak

Our experimental findings are complemented with theoretical insights on loss function design and the benefits of train-validation split.

Tasks: Data Augmentation, Fairness
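A hedged sketch of what a tunable per-class loss paired with a train-validation split can look like (the parameter names and exact form are illustrative, not necessarily the paper's design):

```python
import torch
import torch.nn.functional as F

def parametric_ce(logits, targets, delta, iota):
    """Cross-entropy with per-class multiplicative (delta) and additive (iota)
    logit adjustments; delta and iota (shape [num_classes]) are the tunable
    loss parameters."""
    return F.cross_entropy(delta * logits + iota, targets)

# Schematic outer loop: fit the model with parametric_ce on the training split,
# measure (e.g.) balanced error on the validation split, and update
# (delta, iota) to reduce it -- this is where the train-validation split enters.
num_classes = 10
delta = torch.ones(num_classes, requires_grad=True)
iota = torch.zeros(num_classes, requires_grad=True)
```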

Post-hoc Models for Performance Estimation of Machine Learning Inference

no code implementations • 6 Oct 2021 • Xuechen Zhang, Samet Oymak, Jiasi Chen

Estimating how well a machine learning model performs during inference is critical in a variety of scenarios (for example, to quantify uncertainty, or to choose from a library of available models).

Tasks: BIG-bench Machine Learning, Feature Engineering (+3 more)
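A minimal sketch of the general post-hoc idea (feature choices and estimator are illustrative, not the paper's exact models): fit a lightweight model on features of the base model's outputs to predict per-example correctness, then average its predictions to estimate accuracy on unlabeled data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def output_features(probs):
    """Per-example features from softmax outputs (an illustrative choice)."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.column_stack([top2[:, 1], top2[:, 1] - top2[:, 0], entropy])

def fit_posthoc(probs_val, correct_val):
    # probs_val: [n, C] softmax outputs on labeled validation data; correct_val: 0/1.
    return LogisticRegression().fit(output_features(probs_val), correct_val)

def estimate_accuracy(posthoc, probs_test):
    # Mean predicted correctness approximates accuracy on unlabeled test data.
    return posthoc.predict_proba(output_features(probs_test))[:, 1].mean()
```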

LCSCNet: Linear Compressing Based Skip-Connecting Network for Image Super-Resolution

1 code implementation • 9 Sep 2019 • Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, Qingmin Liao

In this paper, we develop a concise but efficient network architecture called linear compressing based skip-connecting network (LCSCNet) for image super-resolution.

Tasks: Image Super-Resolution
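A hedged sketch of what "linear compressing based skip-connecting" can mean in code (channel widths and layers are illustrative, not the paper's exact block): skip features are concatenated and compressed back to a fixed width by a 1x1 convolution, i.e. a learned linear compression.

```python
import torch
import torch.nn as nn

class LinearCompressSkip(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.compress = nn.Conv2d(2 * channels, channels, 1)  # 1x1 = linear compression

    def forward(self, x):
        return self.compress(torch.cat([x, self.body(x)], dim=1))
```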

Lightweight Feature Fusion Network for Single Image Super-Resolution

2 code implementations • 15 Feb 2019 • Wenming Yang, Wei Wang, Xuechen Zhang, Shuifa Sun, Qingmin Liao

Specifically, a spindle block is composed of a dimension extension unit, a feature exploration unit and a feature refinement unit.

Tasks: Image Super-Resolution
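A hedged sketch of the three named units from the snippet above (widths and layer choices are illustrative, not the authors' exact block): extend the channel dimension, explore features at the wider width, then refine back down, like the two ends of a spindle.

```python
import torch.nn as nn

class SpindleBlock(nn.Module):
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        wide = channels * expansion
        self.extend = nn.Sequential(nn.Conv2d(channels, wide, 1),
                                    nn.ReLU(inplace=True))               # dimension extension
        self.explore = nn.Sequential(nn.Conv2d(wide, wide, 3, padding=1),
                                     nn.ReLU(inplace=True))              # feature exploration
        self.refine = nn.Conv2d(wide, channels, 1)                       # feature refinement

    def forward(self, x):
        return x + self.refine(self.explore(self.extend(x)))
```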

Deep Learning for Single Image Super-Resolution: A Brief Review

1 code implementation • 9 Aug 2018 • Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue

Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions.

Tasks: Efficient Neural Network, Image Super-Resolution
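The ill-posedness mentioned in the snippet is usually formalized with the standard degradation model (a common convention rather than anything specific to this review):

```latex
% HR image x is blurred by kernel k, downsampled by factor s, and corrupted by noise n.
\mathbf{y} \;=\; \left(\mathbf{x} \ast \mathbf{k}\right)\!\downarrow_{s} \;+\; \mathbf{n},
```

so many distinct HR images $\mathbf{x}$ are consistent with a single LR observation $\mathbf{y}$, which is why SISR is ill-posed.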
