no code implementations • 15 Mar 2024 • Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
In this work, we provide a fine-tuning approach to enhance the robustness of vision transformers, inspired by the concept of the nullspace from linear algebra.
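As background for the nullspace idea: a vector $v$ in the nullspace of a linear map $W$ satisfies $Wv = 0$, so perturbing an input by $v$ leaves that layer's output unchanged. A minimal NumPy sketch of this linear-algebra fact (not the paper's fine-tuning method) for a wide layer such as a patch embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

# A wide linear layer (e.g. a patch embedding): 4 outputs, 8 inputs,
# so it has a nontrivial nullspace of dimension 8 - rank(W).
W = rng.standard_normal((4, 8))

# Nullspace basis from the SVD: right singular vectors beyond the rank of W.
_, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]              # shape (8 - rank, 8)

x = rng.standard_normal(8)          # an input
v = null_basis.T @ rng.standard_normal(len(null_basis))  # nullspace perturbation

# Perturbing the input inside the nullspace leaves the output unchanged.
assert np.allclose(W @ x, W @ (x + v))
```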
no code implementations • 15 Mar 2024 • Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang
Extensive empirical experiments suggest that our method not only outperforms standard adversarial training in both accuracy and robustness with lower computational overhead, but is also capable of generating robust distilled datasets that can withstand various adversarial attacks.
no code implementations • 15 Feb 2024 • Haoyang Liu, Yijiang Li, Jinglin Jian, Yuxuan Cheng, Jianrong Lu, Shuyi Guo, Jinglei Zhu, Mianchen Zhang, Miantong Zhang, Haohan Wang
For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare.
no code implementations • 4 Feb 2024 • Huanshuo Dong, Hong Wang, Haoyang Liu, Jian Luo, Jie Wang
It applies differential operators to these solutions, a process we call 'operator action', to efficiently generate precise PDE data points.
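To illustrate the 'operator action' idea in its simplest form (a toy sketch, not the paper's pipeline): start from a known candidate solution $u$, apply the differential operator to it in closed form, and the resulting pair $(u, f)$ is an exact data point for $\mathcal{L}u = f$ with no PDE solve required. Here $\mathcal{L} = -d^2/dx^2$ and $u(x) = \sin(kx)$:

```python
import numpy as np

# Choose a candidate solution u(x) = sin(k x). Applying the operator
# L = -d^2/dx^2 to it gives f(x) = k^2 sin(k x) analytically, so (u, f)
# is an exact data point for the 1-D Poisson equation L u = f.
k = 3.0
x = np.linspace(0.0, np.pi, 101)

u = np.sin(k * x)            # candidate solution
f = k**2 * np.sin(k * x)     # exact operator action: -u'' = k^2 sin(k x)

# Sanity check against a finite-difference approximation of -u'':
h = x[1] - x[0]
f_fd = -(u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
assert np.max(np.abs(f_fd - f[1:-1])) < 1e-2
```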
no code implementations • 11 Jan 2024 • Xijun Li, Fangzhou Zhu, Hui-Ling Zhen, Weilin Luo, Meng Lu, Yimin Huang, Zhenan Fan, Zirui Zhou, Yufei Kuang, Zhihai Wang, Zijie Geng, Yang Li, Haoyang Liu, Zhiwu An, Muming Yang, Jianshu Li, Jie Wang, Junchi Yan, Defeng Sun, Tao Zhong, Yong Zhang, Jia Zeng, Mingxuan Yuan, Jianye Hao, Jun Yao, Kun Mao
To this end, we present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI Solver, which aims to mitigate the scarcity of real-world mathematical programming instances, and to surpass the capabilities of traditional optimization techniques.
no code implementations • 30 Nov 2023 • Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang
Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead.
no code implementations • 27 Nov 2023 • Tong Zhang, Haoyang Liu, Peiyan Zhang, Yuxuan Cheng, Haohan Wang
Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.
no code implementations • 22 Oct 2023 • Haoyang Liu, Yufei Kuang, Jie Wang, Xijun Li, Yongdong Zhang, Feng Wu
To tackle this problem, we propose Adversarial Instance Augmentation (AdaSolver), a novel approach that promotes data diversity for learning-based branching modules in branch-and-bound (B&B) solvers without requiring knowledge of the problem type for new instance generation.
1 code implementation • 30 Sep 2023 • Lin Liu, Xinxin Fan, Haoyang Liu, Chulong Zhang, Weibin Kong, Jingjing Dai, Yuming Jiang, Yaoqin Xie, Xiaokun Liang
Rigid pre-registration is crucial in scenarios involving local-global matching or other large deformations.
no code implementations • 21 Aug 2023 • Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang
Machine learning has demonstrated remarkable performance on finite datasets, yet whether scores on fixed benchmarks can sufficiently indicate a model's performance in the real world is still under discussion.
no code implementations • 31 Jul 2023 • Haoyang Liu, Maheep Chaudhary, Haohan Wang
Accordingly, this survey presents the background of trustworthy machine learning development using a unified set of concepts, connects this language to Pearl's causal hierarchy, and finally discusses methods explicitly inspired by causality literature.
no code implementations • 19 May 2022 • Haoyang Liu, Xiantao Xiao, Liwei Zhang
Furthermore, we extend MALM to handle time-varying functionally constrained OCO with delayed feedback, in which feedback on the loss and constraint functions is revealed to the decision maker with delays.
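A minimal sketch of the delayed-feedback setting (plain online gradient descent on an unconstrained quadratic, not the paper's MALM algorithm): the gradient of the round-$t$ loss only becomes available $d$ rounds later, yet the iterate still converges for a suitably small step size.

```python
import numpy as np

# Online gradient descent with feedback delayed by d rounds on the
# fixed loss f_t(x) = 0.5 * (x - 1)^2 (an illustrative toy problem).
d = 3          # feedback delay
eta = 0.1      # step size
T = 200
target = 1.0

x = 0.0
pending = []   # gradients in flight, indexed by the round they were generated
for t in range(T):
    pending.append(x - target)       # grad of f_t at the point played now
    if t >= d:
        x -= eta * pending[t - d]    # update with the d-rounds-old gradient

# Despite the delay, the iterate approaches the minimizer x* = 1.
assert abs(x - target) < 1e-3
```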
1 code implementation • SEMEVAL 2021 • Haoyang Liu, M. Janina Sarol, Halil Kilicoglu
We propose a cascade of neural models that performs sentence classification, phrase recognition, and triple extraction to automatically structure the scholarly contributions of NLP publications.
1 code implementation • 12 May 2021 • Haoyang Liu, M. Janina Sarol, Halil Kilicoglu
We propose a cascade of neural models that performs sentence classification, phrase recognition, and triple extraction to automatically structure the scholarly contributions of NLP publications.
no code implementations • 13 May 2019 • Haoyang Liu
In this paper, we consider the soft-margin SVM applied to data points with independent features, where the sample size $n$ and the feature dimension $p$ grow to $\infty$ at a fixed ratio $p/n\rightarrow \delta$.
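A toy simulation of this proportional regime (an illustrative sketch under assumed Gaussian data, not the paper's analysis): features are independent, $p/n = \delta$ is fixed, and a soft-margin SVM is fit by subgradient descent on the L2-regularized hinge loss.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5
n = 400
p = int(delta * n)                  # dimension scales with sample size: p/n = delta

mu = np.ones(p) / np.sqrt(p)        # unit-norm class-mean direction (assumed signal)
y = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((n, p)) + 1.5 * y[:, None] * mu  # independent features + shift

# Subgradient descent on (lam/2)*||w||^2 + mean(hinge(y_i * x_i . w)).
w = np.zeros(p)
lam = 0.1
for it in range(500):
    margins = y * (X @ w)
    active = margins < 1                               # samples in the hinge's support
    grad = lam * w - (y[:, None] * X)[active].sum(axis=0) / n
    w -= (0.5 / (it + 1)) * grad                       # decaying step size

train_acc = float(np.mean(y * (X @ w) > 0))
```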
no code implementations • 2 Dec 2018 • Wooseok Ha, Haoyang Liu, Rina Foygel Barber
Two common approaches in low-rank optimization problems are either working directly with a rank constraint on the matrix variable, or optimizing over a low-rank factorization so that the rank constraint is implicitly ensured.
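The two approaches contrasted above can be shown side by side in a small NumPy sketch: an explicit rank constraint enforced by projection (truncated SVD), versus a factorized parametrization in which the rank bound holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2
A = rng.standard_normal((6, 5))

# Approach 1: enforce rank(X) <= r directly, projecting onto the rank-r set
# via the truncated SVD (the best rank-r approximation of A).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
X_proj = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# Approach 2: parametrize X = L @ R^T with thin factors, so any matrix
# expressible this way has rank <= r implicitly, with no explicit constraint.
L = U[:, :r] * s[:r]       # shape (6, r)
R = Vt[:r].T               # shape (5, r)
X_fact = L @ R.T

# Both representations yield the same rank-r matrix here.
assert np.allclose(X_proj, X_fact)
```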
Optimization and Control
no code implementations • 24 Apr 2018 • Haoyang Liu, Rina Foygel Barber
Instead, a general class of thresholding operators, lying between hard thresholding and soft thresholding, is shown to be optimal with the strongest possible convergence guarantee among all thresholding operators.
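For concreteness, hard thresholding zeroes out small entries and keeps the rest untouched, while soft thresholding also shrinks the survivors; a one-parameter family interpolating between them is sketched below (an illustrative family only, not the paper's characterization of the optimal class).

```python
import numpy as np

def hard_threshold(x, t):
    # Keep entries with |x_i| > t unchanged, zero out the rest.
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    # Shrink every entry toward zero by t, clipping at zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def interpolated_threshold(x, t, gamma):
    # A simple operator lying between hard (gamma = 0) and soft (gamma = 1)
    # thresholding: zero below t, partial shrinkage by gamma * t above it.
    return np.where(np.abs(x) > t, x - gamma * t * np.sign(x), 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.0, 2.5])
t = 0.8
```

At the endpoints `gamma = 0` and `gamma = 1` this family recovers hard and soft thresholding exactly, and intermediate `gamma` trades off the bias of soft thresholding against the instability of hard thresholding.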