Search Results for author: Shandian Zhe

Found 46 papers, 17 papers with code

Invertible Fourier Neural Operators for Tackling Both Forward and Inverse Problems

no code implementations • 18 Feb 2024 • Da Long, Shandian Zhe

In this paper, we propose an invertible Fourier Neural Operator (iFNO) that tackles both the forward and inverse problems.

Operator learning

Standard Gaussian Process is All You Need for High-Dimensional Bayesian Optimization

1 code implementation • 5 Feb 2024 • Zhitong Xu, Shandian Zhe

There has been a long-standing and widespread belief that Bayesian Optimization (BO) with standard Gaussian process (GP), referred to as standard BO, is ineffective in high-dimensional optimization problems.

Bayesian Optimization
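
For a concrete picture of what "standard BO" means here, below is a minimal generic loop: a plain GP surrogate with a Matern kernel and an expected-improvement acquisition maximized over random candidates. This is an illustrative sketch under assumed settings (toy objective, candidate pool size), not the authors' code.

    # Minimal "standard BO" sketch: plain GP + expected improvement.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):                       # hypothetical high-dimensional target
        return -np.sum((x - 0.5) ** 2, axis=-1)

    rng = np.random.default_rng(0)
    d = 50                                  # "high-dimensional" input
    X = rng.uniform(size=(5, d))            # initial design
    y = objective(X)

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):
        gp.fit(X, y)
        cand = rng.uniform(size=(2000, d))  # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        best = y.max()
        z = (mu - best) / np.maximum(sd, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    print("best value found:", y.max())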

Diffusion-Generative Multi-Fidelity Learning for Physical Simulation

no code implementations • 9 Nov 2023 • Zheng Wang, Shibo Li, Shikai Fang, Shandian Zhe

We propose a conditional score model to control the solution generation by the input parameters and the fidelity.

Denoising
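
The conditional score model can be pictured as a denoising network that receives the noisy solution together with the PDE input parameters and a fidelity embedding. The sketch below is a hypothetical minimal version; the layer sizes, noise schedule, and all names are assumptions for illustration, not the paper's architecture.

    # Hypothetical conditional score network with a standard denoising
    # score-matching (DSM) training step.
    import torch
    import torch.nn as nn

    class ConditionalScoreNet(nn.Module):
        def __init__(self, sol_dim, param_dim, n_fidelities, hidden=128):
            super().__init__()
            self.fid_embed = nn.Embedding(n_fidelities, 16)   # fidelity conditioning
            self.net = nn.Sequential(
                nn.Linear(sol_dim + param_dim + 16 + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, sol_dim),                   # estimated score
            )

        def forward(self, noisy_sol, params, fidelity, t):
            # t: diffusion time in [0, 1], shape (batch, 1)
            cond = torch.cat([noisy_sol, params, self.fid_embed(fidelity), t], dim=-1)
            return self.net(cond)

    model = ConditionalScoreNet(sol_dim=64, param_dim=8, n_fidelities=3)
    sol = torch.randn(32, 64); params = torch.randn(32, 8)
    fid = torch.randint(0, 3, (32,)); t = torch.rand(32, 1)
    sigma = 0.1 + 0.9 * t                                     # simple noise schedule
    noise = torch.randn_like(sol)
    noisy = sol + sigma * noise
    score = model(noisy, params, fid, t)
    loss = ((score * sigma + noise) ** 2).mean()              # DSM objective
    loss.backward()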

Solving High Frequency and Multi-Scale PDEs with Gaussian Processes

1 code implementation • 8 Nov 2023 • Shikai Fang, Madison Cooley, Da Long, Shibo Li, Robert Kirby, Shandian Zhe

Machine learning based solvers have garnered much attention in physical simulation and scientific computing, a prominent example being physics-informed neural networks (PINNs).

Computational Efficiency, Gaussian Processes

Functional Bayesian Tucker Decomposition for Continuous-indexed Tensor Data

1 code implementation • 8 Nov 2023 • Shikai Fang, Xin Yu, Zheng Wang, Shibo Li, Mike Kirby, Shandian Zhe

To generalize Tucker decomposition to such scenarios, we propose Functional Bayesian Tucker Decomposition (FunBaT).

Gaussian Processes

Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels

1 code implementation • 9 Oct 2023 • Da Long, Wei W. Xing, Aditi S. Krishnapriyan, Robert M. Kirby, Shandian Zhe, Michael W. Mahoney

To overcome the computational challenge of kernel regression, we place the function values on a mesh and induce a Kronecker product construction, and we use tensor algebra to enable efficient computation and optimization.

regression, Uncertainty Quantification
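
The Kronecker trick in the abstract can be made concrete with a small sketch: on a space-time mesh the full kernel matrix factorizes as Kx ⊗ Kt, so the regularized solve only needs eigendecompositions of the small per-dimension kernel matrices. This is a generic illustration with assumed RBF kernels and grid sizes, not the paper's implementation.

    # Solve (Kx ⊗ Kt + s I) a = y on a mesh without forming the full matrix.
    import numpy as np

    def rbf(u, v, ls):
        d = u[:, None] - v[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)

    x = np.linspace(0, 1, 40); t = np.linspace(0, 1, 30)
    Kx, Kt = rbf(x, x, 0.1), rbf(t, t, 0.1)
    Y = np.sin(4 * x)[:, None] * np.cos(3 * t)[None, :]   # function values on the mesh

    # Eigendecompose each small factor (40x40 and 30x30, never 1200x1200).
    lx, Ux = np.linalg.eigh(Kx)
    lt, Ut = np.linalg.eigh(Kt)
    s = 1e-4                                              # jitter / noise variance
    # In matricized form, solve Kx @ Alpha @ Kt + s * Alpha = Y:
    Ytil = Ux.T @ Y @ Ut
    Alpha = Ux @ (Ytil / (np.outer(lx, lt) + s)) @ Ut.T   # solution coefficients
    Yfit = Kx @ Alpha @ Kt                                # fitted values on the mesh
    print("max fit error:", np.abs(Yfit - Y).max())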

Multi-Resolution Active Learning of Fourier Neural Operators

1 code implementation • 29 Sep 2023 • Shibo Li, Xin Yu, Wei Xing, Mike Kirby, Akil Narayan, Shandian Zhe

To overcome this problem, we propose Multi-Resolution Active learning of FNO (MRA-FNO), which can dynamically select the input functions and resolutions to lower the data cost as much as possible while optimizing the learning efficiency.

Active Learning, LEMMA, +2

BayOTIDE: Bayesian Online Multivariate Time Series Imputation with Functional Decomposition

no code implementations • 28 Aug 2023 • Shikai Fang, Qingsong Wen, Yingtao Luo, Shandian Zhe, Liang Sun

More importantly, almost all methods assume the observations are sampled at regular timestamps and fail to handle the complex, irregularly sampled time series arising in different applications.

Computational Efficiency, Gaussian Processes, +3

Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation

1 code implementation • 12 May 2023 • Yu Chen, Wei Deng, Shikai Fang, Fengpei Li, Nicole Tianjiao Yang, Yikai Zhang, Kashif Rasul, Shandian Zhe, Anderson Schneider, Yuriy Nevmyvaka

We show that optimizing the transport cost improves performance, and the proposed algorithm achieves state-of-the-art results on healthcare and environmental data while exhibiting the advantage of exploring both temporal and feature patterns in probabilistic time series imputation.

Imputation, Time Series

Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions

no code implementations • 19 Jan 2023 • Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra

One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy.

Network Pruning

Batch Multi-Fidelity Active Learning with Budget Constraints

no code implementations • 23 Oct 2022 • Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, Shandian Zhe

However, this method only queries one pair of fidelity and input at a time, and hence risks bringing in strongly correlated examples that reduce learning efficiency.

Active Learning

A Kernel Approach for PDE Discovery and Operator Learning

no code implementations • 14 Oct 2022 • Da Long, Nicole Mrvaljevic, Shandian Zhe, Bamdad Hosseini

This article presents a three-step framework for learning and solving partial differential equations (PDEs) using kernel methods.

Operator learning

Nonparametric Factor Trajectory Learning for Dynamic Tensor Decomposition

1 code implementation • 6 Jul 2022 • Zheng Wang, Shandian Zhe

In practice, tensor data is often accompanied by temporal information, namely the time points when the entry values were generated.

Tensor Decomposition

Infinite-Fidelity Coregionalization for Physical Simulation

no code implementations • 1 Jul 2022 • Shibo Li, Zheng Wang, Robert M. Kirby, Shandian Zhe

Our model can interpolate and/or extrapolate the predictions to novel fidelities, which can even be higher than the fidelities of the training data.

Gaussian Processes

Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm

no code implementations • 7 Jun 2022 • Aidan Good, Jiaqi Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, Jerzy Wieczorek, Thiago Serra

In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model.

Network Pruning

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks

1 code implementation • 9 Mar 2022 • Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe

We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, as well as a systematic update of the remaining weights.
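
For background, the classic single-weight Optimal Brain Surgeon (OBS) step that this paper extends looks as follows; CBS generalizes the selection to sets of weights removed jointly. The Hessian below is a random proxy, purely for illustration, and the combinatorial search itself is only hinted at in the closing comment.

    # Classic OBS: saliency-based selection plus optimal compensation update.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    w = rng.normal(size=n)                                # trained weights
    H = rng.normal(size=(n, n)); H = H @ H.T + np.eye(n)  # (proxy) loss Hessian
    Hinv = np.linalg.inv(H)

    # OBS saliency: loss increase from zeroing weight q with optimal compensation
    saliency = w ** 2 / (2 * np.diag(Hinv))
    q = int(np.argmin(saliency))                          # cheapest single weight to remove

    # Optimal update of the remaining weights after deleting w_q:
    delta = -(w[q] / Hinv[q, q]) * Hinv[:, q]
    w_pruned = w + delta                                  # w_pruned[q] is (numerically) zero
    print("removed weight:", q, "residual:", w_pruned[q])

    # CBS instead searches over *sets* of weights to remove simultaneously,
    # since the interaction terms Hinv[i, j] make greedy one-at-a-time
    # choices suboptimal.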

Self-Adaptable Point Processes with Nonparametric Time Decays

no code implementations • NeurIPS 2021 • Zhimeng Pan, Zheng Wang, Jeff M. Phillips, Shandian Zhe

Specifically, we use an embedding to represent each event type and model the event influence as an unknown function of the embeddings and time span.

Point Processes
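
A minimal way to picture the modeling idea: type embeddings feed, together with the elapsed time, into a small network that outputs each past event's influence on the current intensity. Everything below (layer sizes, the softplus link, all names) is an illustrative assumption, not the paper's exact model.

    # Intensity with a learned influence function of embeddings and time span.
    import torch
    import torch.nn as nn

    class InfluencePointProcess(nn.Module):
        def __init__(self, n_types, emb_dim=16):
            super().__init__()
            self.embed = nn.Embedding(n_types, emb_dim)
            self.base = nn.Parameter(torch.zeros(n_types))    # background rates
            self.influence = nn.Sequential(                   # learned decay/influence
                nn.Linear(2 * emb_dim + 1, 32), nn.Tanh(), nn.Linear(32, 1),
            )

        def intensity(self, target_type, t, event_types, event_times):
            # influence of each past event on `target_type` at time t
            dt = (t - event_times).unsqueeze(-1)              # elapsed time spans
            e_tgt = self.embed(target_type).expand(len(event_times), -1)
            e_src = self.embed(event_types)
            f = self.influence(torch.cat([e_tgt, e_src, dt], dim=-1)).squeeze(-1)
            return torch.nn.functional.softplus(self.base[target_type] + f.sum())

    model = InfluencePointProcess(n_types=5)
    lam = model.intensity(torch.tensor(2), torch.tensor(3.0),
                          torch.tensor([0, 1, 4]), torch.tensor([0.5, 1.2, 2.7]))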

A Metalearning Approach for Physics-Informed Neural Networks (PINNs): Application to Parameterized PDEs

no code implementations • 26 Oct 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby

Physics-informed neural networks (PINNs) as a means of discretizing partial differential equations (PDEs) are garnering much attention in the Computational Science and Engineering (CS&E) world.

BIG-bench Machine Learning, Physics-informed machine learning, +1

Nonparametric Sparse Tensor Factorization with Hierarchical Gamma Processes

no code implementations • 19 Oct 2021 • Conor Tillinghast, Zheng Wang, Shandian Zhe

Compared with existing works, our model not only leverages the structural information underlying the observed entry indices, but also provides extra interpretability and flexibility: it can simultaneously estimate a set of location factors describing the intrinsic properties of the tensor nodes and another set of sociability factors reflecting their extroverted activity in interacting with others; users are free to choose a trade-off between the two types of factors.

Meta-Learning with Adjoint Methods

no code implementations • 16 Oct 2021 • Shibo Li, Zheng Wang, Akil Narayan, Robert Kirby, Shandian Zhe

To compute the meta-gradient with respect to the initialization, we only need to run the standard ODE solver twice: one run is forward in time and evolves a long trajectory of gradient flow for the sampled task; the other is backward and solves the adjoint ODE.

Meta-Learning
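
A toy sketch of the forward/backward structure described above, using Euler steps on a quadratic inner task so the adjoint gradient can be checked against a closed form. This is a didactic approximation under stated assumptions, not the authors' method (which handles general tasks and long trajectories).

    # Adjoint gradient of a validation loss w.r.t. the meta initialization.
    import numpy as np

    A = np.diag([1.0, 3.0])            # inner task: L_task(theta) = 0.5 theta^T A theta
    w = np.array([1.0, -2.0])          # validation loss: L_val(theta) = w^T theta
    T, steps = 1.0, 1000
    dt = T / steps
    theta = np.array([0.7, -1.3])      # meta initialization theta_0

    # Forward: evolve the gradient flow d(theta)/dt = -grad L_task(theta).
    for _ in range(steps):
        theta = theta - dt * (A @ theta)        # theta(T): adapted parameters

    # Backward: adjoint ODE da/dt = H(theta(t)) a, from a(T) = grad L_val(theta(T)).
    # Here the Hessian is the constant A, so the trajectory need not be stored.
    a = w.copy()
    for _ in range(steps):
        a = a - dt * (A @ a)                    # integrate backward to t = 0

    print("dL_val/dtheta_0 (adjoint):    ", a)
    print("dL_val/dtheta_0 (closed form):", np.exp(-np.diag(A) * T) * w)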

Characterizing possible failure modes in physics-informed neural networks

2 code implementations • NeurIPS 2021 • Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby, Michael W. Mahoney

We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned.
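
For context, the "soft regularization" under study is the PDE-residual penalty in the PINN loss, where the differential operator is applied to the network via autograd. The sketch below uses an assumed 1-D convection problem and illustrative weights; it is a generic PINN loss, not the paper's experimental setup.

    # Minimal PINN objective: data/initial-condition fit + soft PDE penalty.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64),
                        nn.Tanh(), nn.Linear(64, 1))
    c, lam = 30.0, 1.0                             # convection speed, penalty weight

    def pde_residual(x, t):
        x.requires_grad_(True); t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=-1))
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        return u_t + c * u_x                       # residual of u_t + c u_x = 0

    x = torch.rand(256, 1); t = torch.rand(256, 1)
    x0 = torch.rand(256, 1); t0 = torch.zeros(256, 1)
    u0 = torch.sin(2 * torch.pi * x0)              # initial condition u(x, 0)

    loss_pde = pde_residual(x, t).pow(2).mean()    # soft PDE regularization
    loss_ic = (net(torch.cat([x0, t0], dim=-1)) - u0).pow(2).mean()
    loss = loss_ic + lam * loss_pde                # the regularized objective
    loss.backward()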

Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)

no code implementations • 25 Jun 2021 • Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby

Candidates for this approach are simulation methodologies for which fidelity differences are connected with significant differences in computational cost.

Block-term Tensor Neural Networks

no code implementations • 10 Oct 2020 • Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu

Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification and natural language processing.

Image Classification

Streaming Probabilistic Deep Tensor Factorization

1 code implementation • 14 Jul 2020 • Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, Shandian Zhe

Our algorithm provides responsive incremental updates for the posterior of the latent factors and NN weights upon receiving new tensor entries, and meanwhile selects and inhibits redundant or useless weights.
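
To illustrate the streaming flavor, here is the textbook incremental Bayesian update for a conjugate linear model: each new observation is folded into the Gaussian posterior without revisiting old data. This is a generic illustration only; the paper's actual inference handles deep-NN tensor factorization and weight selection, which are not shown here.

    # Streaming Bayesian update for a conjugate Gaussian linear model.
    import numpy as np

    d = 4
    mu = np.zeros(d)                    # prior mean over weights
    P = np.eye(d)                       # prior precision
    noise_prec = 10.0

    def streaming_update(mu, P, phi, y):
        """Fold one new observation y ~ phi @ w into the Gaussian posterior."""
        P_new = P + noise_prec * np.outer(phi, phi)
        mu_new = np.linalg.solve(P_new, P @ mu + noise_prec * phi * y)
        return mu_new, P_new

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=d)
    for _ in range(200):                # observations arrive one at a time
        phi = rng.normal(size=d)
        y = phi @ w_true + rng.normal(scale=0.1)
        mu, P = streaming_update(mu, P, phi, y)
    print("posterior mean vs truth:", np.round(mu - w_true, 3))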

Multi-Fidelity Bayesian Optimization via Deep Neural Networks

no code implementations • NeurIPS 2020 • Shibo Li, Wei Xing, Mike Kirby, Shandian Zhe

In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between the cost and accuracy.

Bayesian Optimization

Multi-Fidelity High-Order Gaussian Processes for Physical Simulation

1 code implementation • 8 Jun 2020 • Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe

To address these issues, we propose Multi-Fidelity High-Order Gaussian Process (MFHoGP) that can capture complex correlations both between the outputs and between the fidelities to enhance solution estimation, and scale to large numbers of outputs.

Gaussian Processes, Vocal Bursts Intensity Prediction

Physics Informed Deep Kernel Learning

no code implementations • 8 Jun 2020 • Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe

Deep kernel learning is a promising combination of deep neural networks and nonparametric function learning.

Gaussian Processes, Uncertainty Quantification
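
As background for the abstract's first sentence, a generic deep kernel (not the paper's physics-informed variant) warps inputs with a neural network and applies a standard RBF kernel in the learned feature space; the network weights and kernel hyperparameters are trained jointly through the GP marginal likelihood. A minimal sketch, with assumed layer sizes:

    # Deep kernel: k(x, x') = RBF(g(x), g(x')) with a learned warping g.
    import torch
    import torch.nn as nn

    class DeepKernel(nn.Module):
        def __init__(self, in_dim, feat_dim=8):
            super().__init__()
            self.g = nn.Sequential(nn.Linear(in_dim, 32), nn.Tanh(),
                                   nn.Linear(32, feat_dim))   # learned warping
            self.log_ls = nn.Parameter(torch.zeros(()))       # RBF length-scale

        def forward(self, X1, X2):
            Z1, Z2 = self.g(X1), self.g(X2)
            d2 = torch.cdist(Z1, Z2).pow(2)
            return torch.exp(-0.5 * d2 / torch.exp(self.log_ls) ** 2)

    k = DeepKernel(in_dim=3)
    X = torch.randn(20, 3); y = torch.randn(20, 1)
    K = k(X, X) + 1e-4 * torch.eye(20)          # kernel matrix with jitter
    # Negative log marginal likelihood (up to a constant); backprop trains
    # both the net weights and the length-scale:
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y, L)
    nll = 0.5 * (y * alpha).sum() + torch.log(torch.diag(L)).sum()
    nll.backward()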

Scalable Variational Gaussian Process Regression Networks

2 code implementations • 25 Mar 2020 • Shibo Li, Wei Xing, Mike Kirby, Shandian Zhe

Gaussian process regression networks (GPRN) are powerful Bayesian models for multi-output regression, but their inference is intractable.

regression, Variational Inference

Macroscopic Traffic Flow Modeling with Physics Regularized Gaussian Process: A New Insight into Machine Learning Applications

no code implementations • 6 Feb 2020 • Yun Yuan, Xianfeng Terry Yang, Zhao Zhang, Shandian Zhe

To address this issue, this study presents a new modeling framework, named physics regularized machine learning (PRML), to encode classical traffic flow models (referred to as physical models) into the ML architecture and to regularize the ML training process.

Bayesian Inference, BIG-bench Machine Learning, +1

Probabilistic Streaming Tensor Decomposition with Side Information

1 code implementation • 27 Nov 2019 • Yimin Zheng, Shandian Zhe

Tensor decomposition is an essential tool to analyze high-order interactions in multiway data.

Tensor Decomposition

Conditional Expectation Propagation

no code implementations • 27 Oct 2019 • Zheng Wang, Shandian Zhe

Expectation propagation (EP) is a powerful approximate inference algorithm.

Computational Efficiency

Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

no code implementations • CVPR 2018 • Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu

On three challenging tasks, including Action Recognition in Videos, Image Captioning and Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate.

Action Recognition In Videos, Image Captioning, +3

DinTucker: Scaling up Gaussian process models on multidimensional arrays with billions of elements

no code implementations • 12 Nov 2013 • Shandian Zhe, Yuan Qi, Youngja Park, Ian Molloy, Suresh Chari

To overcome this limitation, we present Distributed Infinite Tucker (DinTucker), a large-scale nonlinear tensor decomposition algorithm on MapReduce.

Tensor Decomposition, Variational Inference

Supervised Heterogeneous Multiview Learning for Joint Association Study and Disease Diagnosis

no code implementations • 26 Apr 2013 • Shandian Zhe, Zenglin Xu, Yuan Qi

To unify these two tasks, we present a new sparse Bayesian approach for joint association study and disease diagnosis.

Multiview Learning
