Search Results for author: Sayan Mukherjee

Found 27 papers, 9 papers with code

Scalable Bayesian inference for the generalized linear mixed model

no code implementations · 5 Mar 2024 · Samuel I. Berchuck, Felipe A. Medeiros, Sayan Mukherjee, Andrea Agazzi

The generalized linear mixed model (GLMM) is a popular statistical approach for handling correlated data, and is used extensively in application areas where big data is common, including biomedical settings.

Bayesian Inference · Uncertainty Quantification
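
For orientation, the GLMM in its standard hierarchical form (notation ours, not necessarily the paper's): conditional on group-level random effects $b_i \sim \mathcal{N}(0, \Sigma)$, responses follow an exponential-family distribution with

$$g\big(\mathbb{E}[y_{ij} \mid b_i]\big) = x_{ij}^\top \beta + z_{ij}^\top b_i,$$

where $g$ is the link function, $\beta$ the fixed effects, and the random effects $b_i$ induce the within-group correlation the abstract refers to.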

Global Optimality of Elman-type RNN in the Mean-Field Regime

no code implementations · 12 Mar 2023 · Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee

We analyze Elman-type Recurrent Neural Networks (RNNs) and their training in the mean-field regime.
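
As a point of reference, the Elman recurrence under analysis can be sketched in a few lines of NumPy (a minimal sketch with names of our choosing; the mean-field width scaling and the training dynamics studied in the paper are not reproduced here):

    import numpy as np

    def elman_forward(xs, W_x, W_h, b, h0=None):
        # Plain Elman RNN: the hidden state is updated from the previous
        # state and the current input through a tanh nonlinearity.
        h = np.zeros(W_h.shape[0]) if h0 is None else h0
        states = []
        for x in xs:
            h = np.tanh(W_x @ x + W_h @ h + b)
            states.append(h)
        return states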

Concentration inequalities and optimal number of layers for stochastic deep neural networks

no code implementations · 22 Jun 2022 · Michele Caprio, Sayan Mukherjee

We state concentration inequalities for the output of the hidden layers of a stochastic deep neural network (SDNN), as well as for the output of the whole SDNN.

Tight query complexity bounds for learning graph partitions

no code implementations · 15 Dec 2021 · Xizhi Liu, Sayan Mukherjee

Given a partition of a graph into connected components, the membership oracle asserts whether any two vertices of the graph lie in the same component or not.
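
To make the oracle model concrete, the sketch below recovers a partition by keeping one representative vertex per discovered component, spending O(nk) membership queries for n vertices and k components. This is only a naive baseline under an assumed oracle interface, not the query-optimal strategy whose tight bounds the paper establishes:

    def learn_partition(vertices, same_component):
        # same_component(u, v) is the assumed membership oracle: it returns
        # True iff u and v lie in the same connected component.
        reps = []        # one representative per component found so far
        components = []  # vertices grouped by component
        for v in vertices:
            for i, r in enumerate(reps):
                if same_component(r, v):
                    components[i].append(v)
                    break
            else:
                reps.append(v)
                components.append([v])
        return components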

Accelerating Markov Random Field Inference with Uncertainty Quantification

no code implementations · 2 Aug 2021 · Ramin Bashizade, Xiangyu Zhang, Sayan Mukherjee, Alvin R. Lebeck

In this paper, we propose a high-throughput accelerator, based on MCMC with Gibbs sampling, for inference in Markov Random Fields (MRFs), a powerful model for representing a wide range of applications.

Motion Estimation · Playing the Game of 2048 +1
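
For context, the computational kernel that such an accelerator parallelizes is single-site Gibbs sampling on an MRF. A schematic scalar version for an Ising-style grid model (illustrative only, not the accelerator's pipeline):

    import numpy as np

    def gibbs_ising(h, w, beta=0.5, sweeps=100, seed=None):
        # Resample each spin from its conditional given its 4 grid neighbors.
        rng = np.random.default_rng(seed)
        s = rng.choice([-1, 1], size=(h, w))
        for _ in range(sweeps):
            for i in range(h):
                for j in range(w):
                    nb = sum(s[a, b]
                             for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                             if 0 <= a < h and 0 <= b < w)
                    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                    s[i, j] = 1 if rng.random() < p_up else -1
        return s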

A Methodology for Exploring Deep Convolutional Features in Relation to Hand-Crafted Features with an Application to Music Audio Modeling

1 code implementation · 31 May 2021 · Anna K. Yanchenko, Mohammadreza Soltani, Robert J. Ravier, Sayan Mukherjee, Vahid Tarokh

In this work, we instead take the perspective of relating deep features to well-studied, hand-crafted features that are meaningful for the application of interest.

Feature Importance

At the Intersection of Deep Sequential Model Framework and State-space Model Framework: Study on Option Pricing

no code implementations · 14 Dec 2020 · Ziyang Ding, Sayan Mukherjee

Reservoir computing and deep sequential models have demonstrated efficient, robust, and superior performance in modeling simple and chaotic dynamical systems.

Stanza: A Nonlinear State Space Model for Probabilistic Inference in Non-Stationary Time Series

no code implementations · 11 Jun 2020 · Anna K. Yanchenko, Sayan Mukherjee

Stanza strikes a balance between competitive forecasting accuracy and probabilistic, interpretable inference for highly structured time series.

Time Series · Time Series Analysis

Beyond Application End-Point Results: Quantifying Statistical Robustness of MCMC Accelerators

no code implementations · 5 Mar 2020 · Xiangyu Zhang, Ramin Bashizade, Yicheng Wang, Cheng Lyu, Sayan Mukherjee, Alvin R. Lebeck

Applying the framework to guide design space exploration shows that statistical robustness comparable to floating-point software can be achieved by slightly increasing the bit representation, without floating-point hardware requirements.

A Case for Quantifying Statistical Robustness of Specialized Probabilistic AI Accelerators

no code implementations · 27 Oct 2019 · Xiangyu Zhang, Sayan Mukherjee, Alvin R. Lebeck

Although a common approach is to compare the end-point result quality using community-standard benchmarks and metrics, we claim a probabilistic architecture should provide some measure (or guarantee) of statistical robustness.

Scalable Modeling of Spatiotemporal Data using the Variational Autoencoder: an Application in Glaucoma

1 code implementation · 24 Aug 2019 · Samuel I. Berchuck, Felipe A. Medeiros, Sayan Mukherjee

As big spatial data becomes increasingly prevalent, classical spatiotemporal (ST) methods often do not scale well.

Bayesian Inference

Adaptive particle-based approximations of the Gibbs posterior for inverse problems

no code implementations · 2 Jul 2019 · Zilong Zou, Sayan Mukherjee, Harbir Antil, Wilkins Aquino

To manage the computational cost of propagating increasing numbers of particles through the loss function, we employ a recently developed local reduced basis method to build an efficient surrogate loss function that is used in the Gibbs update formula in place of the true loss.

Bayesian Inference
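
The Gibbs posterior being targeted has the standard form (our notation)

$$\pi_\beta(\theta) \propto \exp\{-\beta\, \ell(\theta)\}\, \pi_0(\theta),$$

with loss $\ell$, prior $\pi_0$, and tempering parameter $\beta$; per the abstract, the reduced basis surrogate $\hat{\ell}$ stands in for $\ell$ inside this update.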

Subspace Clustering through Sub-Clusters

1 code implementation · 15 Nov 2018 · Weiwei Li, Jan Hannig, Sayan Mukherjee

The problem of dimension reduction is of increasing importance in modern data analysis.

Clustering · Dimensionality Reduction

Scalable Algorithms for Learning High-Dimensional Linear Mixed Models

1 code implementation · 12 Mar 2018 · Zilong Tan, Kimberly Roche, Xiang Zhou, Sayan Mukherjee

We provide theoretical guarantees for our learning algorithms, demonstrating the robustness of parameter estimation.

Learning Integral Representations of Gaussian Processes

1 code implementation · 21 Feb 2018 · Zilong Tan, Sayan Mukherjee

We propose a representation of Gaussian processes (GPs) based on powers of the integral operator defined by a kernel function; we call these stochastic processes integral Gaussian processes (IGPs).

Dimensionality Reduction · Gaussian Processes +1
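
The building block here is the integral operator induced by the kernel $k$ (standard notation, ours):

$$(T_k f)(x) = \int k(x, s)\, f(s)\, d\mu(s),$$

with the IGP construction based on powers $T_k^m$ of this operator, as the abstract states.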

Partitioned Tensor Factorizations for Learning Mixed Membership Models

no code implementations · ICML 2017 · Zilong Tan, Sayan Mukherjee

We present an efficient algorithm for learning mixed membership models when the number of variables $p$ is much larger than the number of hidden components $k$. This algorithm reduces the computational complexity of state-of-the-art tensor methods, which require decomposing an $O(p^3)$ tensor, to factorizing $O(p/k)$ sub-tensors each of size $O(k^3)$.
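
The back-of-envelope saving, assuming cost scales with the number of tensor entries processed: $O(p/k)$ sub-tensors of size $O(k^3)$ give

$$O(p/k) \cdot O(k^3) = O(p k^2) \ll O(p^3) \quad \text{when } k \ll p.$$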

Efficient Learning of Mixed Membership Models

1 code implementation · 25 Feb 2017 · Zilong Tan, Sayan Mukherjee

We present an efficient algorithm for learning mixed membership models when the number of variables $p$ is much larger than the number of hidden components $k$.

Functional Data Analysis using a Topological Summary Statistic: the Smooth Euler Characteristic Transform

2 code implementations · 21 Nov 2016 · Lorin Crawford, Anthea Monod, Andrew X. Chen, Sayan Mukherjee, Raúl Rabadán

We introduce a novel statistic, the smooth Euler characteristic transform (SECT), which is designed to integrate shape information into regression models by representing shapes and surfaces as a collection of curves.

Applications
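
The transform underneath, in our paraphrase: for each direction $\nu$ on the sphere, record the Euler characteristic curve of the sublevel sets of the height function,

$$\chi_\nu(t) = \chi\big(\{x \in M : \langle x, \nu \rangle \le t\}\big),$$

and the SECT smooths and centers these curves so that each shape maps to the collection of curves mentioned in the abstract.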

Fast moment estimation for generalized latent Dirichlet models

no code implementations · 17 Mar 2016 · Shiwen Zhao, Barbara E. Engelhardt, Sayan Mukherjee, David B. Dunson

We illustrate the utility of our approach on simulated data, comparing results from MELD to alternative methods, and we show the promise of our approach through the application of MELD to several data sets.

Variational Inference

Bayesian Approximate Kernel Regression with Variable Selection

1 code implementation · 5 Aug 2015 · Lorin Crawford, Kris C. Wood, Xiang Zhou, Sayan Mukherjee

State-of-the-art methods for genomic selection and association mapping are based on kernel regression and linear models, respectively.

Binary Classification · Regression +1

Adaptive Randomized Dimension Reduction on Massive Data

no code implementations · 13 Apr 2015 · Gregory Darnell, Stoyan Georgiev, Sayan Mukherjee, Barbara E. Engelhardt

In this paper we develop an approach for dimension reduction that exploits the assumption of low rank structure in high dimensional data to gain both computational and statistical advantages.

Dimensionality Reduction
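
The generic randomized low-rank recipe this line of work builds on is sketch-then-solve; a minimal version is below (the paper's adaptive variant, which tunes the subspace to the data, is not reproduced):

    import numpy as np

    def randomized_svd(A, rank, oversample=10, seed=None):
        # Project onto a random subspace, orthonormalize, then run a small
        # exact SVD on the reduced matrix (Halko-Martinsson-Tropp scheme).
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for range(A @ Omega)
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]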

Bayesian group latent factor analysis with structured sparsity

1 code implementation · 11 Nov 2014 · Shiwen Zhao, Chuan Gao, Sayan Mukherjee, Barbara E. Engelhardt

Latent factor models are the canonical statistical tool for exploratory analyses of low-dimensional linear structure for an observation matrix with $p$ features across $n$ samples.
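
The canonical model in question, in our notation: for samples $i = 1, \dots, n$,

$$x_i = \Lambda f_i + e_i, \qquad f_i \sim \mathcal{N}(0, I_K), \quad e_i \sim \mathcal{N}(0, \Psi),$$

with $x_i \in \mathbb{R}^p$ and loadings $\Lambda \in \mathbb{R}^{p \times K}$; the structured sparsity of the title is imposed on $\Lambda$ across known groups of features.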

The Information Geometry of Mirror Descent

no code implementations · 29 Oct 2013 · Garvesh Raskutti, Sayan Mukherjee

Using this equivalence, it follows that (1) mirror descent is the steepest descent direction along the Riemannian manifold of the exponential family; (2) mirror descent with log-likelihood loss applied to parameter estimation in exponential families asymptotically achieves the classical Cramér-Rao lower bound; and (3) natural gradient descent for manifolds corresponding to exponential families can be implemented as a first-order method through mirror descent.
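
For reference, the mirror descent update with strictly convex mirror map $\psi$ and convex conjugate $\psi^*$ (standard notation):

$$\theta_{t+1} = \nabla\psi^*\big(\nabla\psi(\theta_t) - \eta_t \nabla L(\theta_t)\big),$$

and taking $\psi$ to be the log-partition function of an exponential family yields the steepest-descent and natural-gradient correspondences listed above.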

Randomized Dimension Reduction on Massive Data

no code implementations · 7 Nov 2012 · Stoyan Georgiev, Sayan Mukherjee

Scalability of statistical estimators is of increasing importance in modern applications and dimension reduction is often used to extract relevant information from data.

Dimensionality Reduction · Regression

Geometric Representations of Random Hypergraphs

no code implementations · 18 Dec 2009 · Simón Lunagómez, Sayan Mukherjee, Robert L. Wolpert, Edoardo M. Airoldi

A parametrization of hypergraphs based on the geometry of points in $\mathbf{R}^d$ is developed.
