Search Results for author: Zeke Xie

Found 15 papers, 6 papers with code

SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior

no code implementations • 29 Mar 2024 • Zhongrui Yu, Haoran Wang, Jinze Yang, Hanzhang Wang, Zeke Xie, Yunfeng Cai, Jiale Cao, Zhong Ji, Mingming Sun

To tackle this problem, we propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model along with complementary multi-modal data.

Tasks: Autonomous Driving, Neural Rendering, +1

Neural Field Classifiers via Target Encoding and Classification Loss

no code implementations • 2 Mar 2024 • Xindi Yang, Zeke Xie, Xiong Zhou, Boyu Liu, Buhua Liu, Yi Liu, Haoran Wang, Yunfeng Cai, Mingming Sun

We propose a novel Neural Field Classifier (NFC) framework that formulates existing neural field methods as classification tasks rather than regression tasks.
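
To make the classification reformulation concrete, here is a minimal unofficial sketch assuming a simple per-channel discretization of RGB targets; the bin count and helper names are illustrative assumptions, not the paper's actual Target Encoding module.

```python
import torch
import torch.nn.functional as F

def encode_targets(rgb, num_bins=256):
    """Discretize continuous RGB targets in [0, 1] into per-channel class indices.
    (Illustrative stand-in for a target-encoding scheme, not the paper's exact design.)"""
    return (rgb.clamp(0, 1) * (num_bins - 1)).round().long()      # (N, 3) integer classes

def classification_loss(logits, rgb, num_bins=256):
    """Cross-entropy over discretized color bins instead of an MSE regression loss.
    `logits` is assumed to have shape (N, 3 * num_bins)."""
    target = encode_targets(rgb, num_bins)                         # (N, 3)
    logits = logits.reshape(-1, 3, num_bins).permute(0, 2, 1)      # (N, num_bins, 3)
    return F.cross_entropy(logits, target)                         # averaged over rays and channels
```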

Tasks: Classification, Multi-Label Classification, +2

HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models

no code implementations • 11 Jan 2024 • Hanzhang Wang, Haoran Wang, Jinze Yang, Zhongrui Yu, Zeke Xie, Lei Tian, Xinyan Xiao, Junjun Jiang, Xianming Liu, Mingming Sun

Specifically, our model is built on the Latent Diffusion Model (LDM) and carefully designed to take content and style instances as conditions of the LDM.

Tasks: Style Transfer

S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields

1 code implementation • ICCV 2023 • Zeke Xie, Xindi Yang, Yujie Yang, Qi Sun, Yixiang Jiang, Haoran Wang, Yunfeng Cai, Mingming Sun

Recently, Neural Radiance Field (NeRF) has shown great success in rendering novel-view images of a given scene by learning an implicit representation with only posed RGB images.
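
As a rough sketch of the idea named in the title (not the official implementation), one can stochastically group a batch of rays into a pseudo-patch and score rendered versus ground-truth colors with a structural-similarity term; the version below uses a simplified global SSIM rather than the windowed kernel a real implementation would use.

```python
import torch

def s3im_like_loss(pred, target, patch_size=4096, repeats=4, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified stochastic structural similarity over randomly grouped rays.

    pred, target: (N, 3) rendered and ground-truth ray colors.
    Returns 1 - mean SSIM over `repeats` random pseudo-patches.
    """
    n = pred.shape[0]
    sims = []
    for _ in range(repeats):
        idx = torch.randperm(n)[: min(patch_size, n)]    # random ray grouping
        x, y = pred[idx], target[idx]
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov_xy = ((x - mu_x) * (y - mu_y)).mean()
        sims.append(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
                    / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
    return 1.0 - torch.stack(sims).mean()
```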

Tasks: Novel View Synthesis, Surface Reconstruction

Sparse Double Descent: Where Network Pruning Aggravates Overfitting

1 code implementation • 17 Jun 2022 • Zheng He, Zeke Xie, Quanzhi Zhu, Zengchang Qin

People usually believe that network pruning not only reduces the computational cost of deep networks, but also prevents overfitting by decreasing model capacity.

Tasks: Network Pruning

Dataset Pruning: Reducing Training Data by Examining Generalization Influence

no code implementations • 19 May 2022 • Shuo Yang, Zeke Xie, Hanyu Peng, Min Xu, Mingming Sun, Ping Li

To answer these questions, we propose dataset pruning, an optimization-based sample selection method that can (1) examine the influence of removing a particular set of training samples on the model's generalization ability with theoretical guarantees, and (2) construct the smallest subset of training data that yields a strictly constrained generalization gap.
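
A crude, hypothetical illustration of optimization-based sample selection: score each training example by a cheap proxy for its influence, here the per-sample loss-gradient norm, and keep the highest-scoring examples within a budget. This proxy and the helper below are assumptions for illustration, not the paper's influence formulation.

```python
import torch

def prune_dataset(model, loss_fn, dataset, keep_ratio=0.8):
    """Keep the `keep_ratio` fraction of samples with the largest loss-gradient norm.
    The gradient norm is only a stand-in proxy for the generalization influence
    studied in the paper. `dataset` is an iterable of (input, label) tensors."""
    params = [p for p in model.parameters() if p.requires_grad]
    scores = []
    for x, y in dataset:
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        scores.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))
    keep = int(keep_ratio * len(scores))
    return torch.stack(scores).argsort(descending=True)[:keep]   # indices of retained samples
```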

On the Power-Law Hessian Spectrums in Deep Learning

no code implementations • 31 Jan 2022 • Zeke Xie, Qian-Yuan Tang, Yunfeng Cai, Mingming Sun, Ping Li

It is well known that the Hessian of the deep loss landscape matters to the optimization, generalization, and even robustness of deep learning.

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum

no code implementations • 29 Sep 2021 • Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama

Specifically, we disentangle the effects of Adaptive Learning Rate and Momentum of the Adam dynamics on saddle-point escaping and flat minima selection.

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization

1 code implementation • 31 Mar 2021 • Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama

It is well known that stochastic gradient noise (SGN) acts as implicit regularization for deep learning and is essential for both the optimization and generalization of deep networks.
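
A hedged sketch of one way to amplify stochastic gradient noise with two momentum buffers that are updated on alternating mini-batches and combined with a positive and a negative weight; the exact coefficients and form below are illustrative assumptions, not necessarily the paper's update rule.

```python
import torch

def pnm_step(param, grad, state, lr=0.1, beta=0.9, beta0=1.0):
    """Positive-negative-momentum-style step (schematic).

    Two momentum buffers are updated on alternating steps; the applied update
    mixes them with a positive and a negative weight, which enlarges the
    stochastic-gradient-noise component relative to plain heavy-ball momentum.
    `state` is a plain dict carried across calls.
    """
    state.setdefault("m_even", torch.zeros_like(param))
    state.setdefault("m_odd", torch.zeros_like(param))
    state["t"] = state.get("t", 0) + 1

    cur, other = ("m_even", "m_odd") if state["t"] % 2 == 0 else ("m_odd", "m_even")
    state[cur] = beta * state[cur] + (1 - beta) * grad           # update current buffer only
    combined = (1 + beta0) * state[cur] - beta0 * state[other]   # positive-negative mix
    param.data.add_(combined, alpha=-lr)
    return param
```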

On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective

1 code implementation • NeurIPS 2023 • Zeke Xie, Zhiqiang Xu, Jingzhao Zhang, Issei Sato, Masashi Sugiyama

Weight decay is a simple yet powerful regularization technique that has been widely used in training deep neural networks (DNNs).

Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting

1 code implementation • 12 Nov 2020 • Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, DaCheng Tao, Masashi Sugiyama

This motivates us to design a similar mechanism, named artificial neural variability (ANV), which helps artificial neural networks learn some advantages from "natural" neural networks.

Tasks: Memorization

Stable Weight Decay Regularization

no code implementations • 28 Sep 2020 • Zeke Xie, Issei Sato, Masashi Sugiyama

Loshchilov & Hutter (2018) demonstrated that L2 regularization is not identical to weight decay for adaptive gradient methods, such as Adaptive Momentum Estimation (Adam), and proposed Adam with Decoupled Weight Decay (AdamW).
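
The distinction can be made concrete with a schematic Adam-style step (bias correction omitted): L2 regularization folds the penalty into the gradient before the adaptive rescaling, while decoupled weight decay shrinks the weights directly, as in AdamW.

```python
import torch

def adam_like_step(p, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, wd=1e-2, decoupled=True):
    """Schematic single step contrasting L2 regularization with decoupled
    weight decay (bias correction omitted for brevity)."""
    if not decoupled:
        grad = grad + wd * p.data       # L2 penalty: rescaled by the adaptive step below
    m.mul_(beta1).add_(grad, alpha=1 - beta1)              # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)    # second moment
    if decoupled:
        p.data.mul_(1 - lr * wd)                           # AdamW-style decay applied to weights
    p.data.add_(m / (v.sqrt() + eps), alpha=-lr)           # adaptive gradient step
    return p, m, v
```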

Adai: Separating the Effects of Adaptive Learning Rate and Momentum Inertia

1 code implementation • 29 Jun 2020 • Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama

Specifically, we disentangle the effects of Adaptive Learning Rate and Momentum of the Adam dynamics on saddle-point escaping and minima selection.
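
A loose, unofficial sketch of the idea of adapting momentum (inertia) per parameter while keeping a non-adaptive learning rate; the specific adaptation rule below is an assumption for illustration, not the exact Adai update.

```python
import torch

def adaptive_inertia_step(p, grad, m, v, lr=0.1, beta2=0.99, beta0=0.1, eps=1e-3):
    """Schematic step: parameters with larger gradient second moments get a
    smaller momentum coefficient (less inertia), and vice versa."""
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)     # track gradient second moment
    rel = v / (v.mean() + eps)                              # relative noise scale per parameter
    beta1 = (1.0 - beta0 * rel).clamp(0.0, 0.99)            # per-parameter momentum coefficient
    m.mul_(beta1).add_((1 - beta1) * grad)
    p.data.add_(m, alpha=-lr)                               # fixed learning rate, adaptive inertia
    return p, m, v
```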

A Quantum-Inspired Ensemble Method and Quantum-Inspired Forest Regressors

no code implementations • 22 Nov 2017 • Zeke Xie, Issei Sato

The contribution of this work is two-fold: a novel ensemble regression algorithm inspired by quantum mechanics, and a theoretical connection between quantum interpretations and machine learning algorithms.

Tasks: Regression
