no code implementations • 17 Apr 2024 • Xuechen Zhang, Zijian Huang, Ege Onur Taga, Carlee Joe-Wong, Samet Oymak, Jiasi Chen
Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers.
no code implementations • 25 Jan 2024 • Xuechen Zhang, Mingchen Li, Jiasi Chen, Christos Thrampoulidis, Samet Oymak
Confirming this, under a Gaussian mixture setting, we show that the optimal SVM classifier for balanced accuracy needs to be adaptive to the class attributes.
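Balanced accuracy, the metric this entry optimizes for, averages per-class recall so that minority classes weigh as much as majority ones. A minimal stdlib sketch of the metric (the toy labels are illustrative, not from the paper):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    no matter how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy data: class 0 dominates.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]  # one minority-class error
# Plain accuracy is 9/10 = 0.9, but balanced accuracy is
# (8/8 + 1/2) / 2 = 0.75 -- the minority class pulls it down.
print(balanced_accuracy(y_true, y_pred))  # 0.75
```

This asymmetry is why a classifier tuned for plain accuracy can differ from the balanced-accuracy-optimal one.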
no code implementations • 10 Jul 2023 • Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit K. Roy-Chowdhury, Ananda Theertha Suresh, Samet Oymak
These insights on scale and modularity motivate a new federated learning approach we call "You Only Load Once" (FedYolo): clients load a full PTF model once, and all future updates are carried out through communication-efficient modules with limited catastrophic forgetting, with each task assigned to its own module.
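A rough sketch of the client-side bookkeeping this scheme implies, assuming the details not stated here (names, the dict-based stand-ins for PTF weights and module parameters, and the plain SGD step are all hypothetical):

```python
class FedYoloClient:
    """Sketch of the modular update scheme described above: frozen
    pretrained (PTF) weights are downloaded once, and each task gets its
    own small module whose parameters are all that is ever communicated."""

    def __init__(self, ptf_weights):
        self.ptf = ptf_weights          # loaded once, never updated
        self.modules = {}               # task id -> module parameters

    def local_update(self, task, grads, lr=0.1):
        # Train only this task's module; the PTF stays frozen, and other
        # tasks' modules are untouched (hence limited forgetting).
        params = self.modules.setdefault(task, {k: 0.0 for k in grads})
        for k, g in grads.items():
            params[k] -= lr * g

    def upload(self, task):
        # Communication cost is the module size, not the PTF size.
        return dict(self.modules[task])

client = FedYoloClient(ptf_weights={"layer1": [0.5] * 1000})
client.local_update("taskA", grads={"adapter_w": 1.0})
client.local_update("taskB", grads={"adapter_w": -2.0})
print(client.upload("taskA"))
```

The design choice being illustrated: because only `self.modules[task]` ever changes, uploading a module is cheap and tasks cannot overwrite each other's parameters.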
1 code implementation • NeurIPS 2023 • Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak
Interestingly, the SVM formulation of $\boldsymbol{p}$ is influenced by the support vector geometry of $\boldsymbol{v}$.
no code implementations • 15 May 2023 • Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti
To address this shortcoming, in this paper we study a class of neural ordinary differential equations that, by design, leave a given manifold invariant, and characterize their properties by leveraging the controllability properties of control affine systems.
1 code implementation • NeurIPS 2021 • Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, Samet Oymak
Our experimental findings are complemented with theoretical insights on loss function design and the benefits of train-validation split.
no code implementations • 6 Oct 2021 • Xuechen Zhang, Samet Oymak, Jiasi Chen
Estimating how well a machine learning model performs during inference is critical in a variety of scenarios (for example, to quantify uncertainty, or to choose from a library of available models).
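One common label-free proxy for inference-time performance is average maximum softmax confidence; the sketch below uses it to pick from a model library. This is a generic stand-in, not the method of the paper, and the model names and logits are hypothetical:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def estimated_accuracy(logits_batch):
    """Crude label-free estimate: average max softmax confidence,
    a common proxy for how well a model is doing at inference time."""
    confs = [max(softmax(l)) for l in logits_batch]
    return sum(confs) / len(confs)

def pick_model(predictions):
    """From a library of models, choose the one whose confidence on
    the same unlabeled batch is highest."""
    return max(predictions, key=lambda m: estimated_accuracy(predictions[m]))

# Hypothetical logits from two models on one unlabeled batch.
preds = {
    "model_a": [[2.0, 0.1], [1.5, 0.2]],   # fairly confident
    "model_b": [[0.3, 0.2], [0.1, 0.4]],   # near-uniform, unsure
}
print(pick_model(preds))  # model_a
```

Note the caveat built into the proxy: an overconfident but wrong model would also score highly, which is exactly why principled performance estimation is a research problem.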
no code implementations • 18 May 2020 • Yang Chen, Zongqing Lu, Xuechen Zhang, Lei Chen, Qingmin Liao
Recent end-to-end deep neural networks for disparity regression have achieved state-of-the-art performance.
1 code implementation • 9 Sep 2019 • Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, Qingmin Liao
In this paper, we develop a concise but efficient network architecture called linear compressing based skip-connecting network (LCSCNet) for image super-resolution.
Ranked #14 on Image Super-Resolution (Set14, 3× upscaling)
2 code implementations • 15 Feb 2019 • Wenming Yang, Wei Wang, Xuechen Zhang, Shuifa Sun, Qingmin Liao
Specifically, a spindle block is composed of a dimension extension unit, a feature exploration unit and a feature refinement unit.
Ranked #11 on Image Super-Resolution (Manga109, 3× upscaling)
1 code implementation • 9 Aug 2018 • Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue
Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions.
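The ill-posedness can be made concrete: distinct HR signals can produce exactly the same LR observation under downsampling, so the inverse map is one-to-many. A toy 1-D sketch (average pooling stands in for the true degradation, which in practice also involves blur and noise):

```python
def downsample(hr, factor=2):
    """Average-pool a 1-D 'image' by the given factor."""
    return [sum(hr[i:i + factor]) / factor for i in range(0, len(hr), factor)]

hr_a = [10, 20, 30, 40]
hr_b = [15, 15, 35, 35]   # different HR content...
print(downsample(hr_a))   # [15.0, 35.0]
print(downsample(hr_b))   # [15.0, 35.0] ...identical LR observation
```

Since many HR candidates explain the same LR input, SISR methods must inject priors (learned from data) to pick a plausible reconstruction.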