no code implementations • 20 Mar 2020 • Dingkang Wang, Lucas Magee, Bing-Xing Huo, Samik Banerjee, Xu Li, Jaikishan Jayakumar, Meng Kuan Lin, Keerthi Ram, Suyi Wang, Yusu Wang, Partha P. Mitra
Neuroscientific data analysis has traditionally relied on linear algebra and stochastic process theory.
no code implementations • 9 Jun 2019 • Partha P. Mitra
We introduce a generative and fitting model pair ("Misparametrized Sparse Regression", or MiSpaR) and show that the overfitting peak can be dissociated from the point at which the fitting function gains enough degrees of freedom to match the data-generative model and thus generalize well.
no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund
We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach that estimates the number of layers and the number of nodes while learning the weight matrices.
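The layer-by-layer size-estimation idea can be sketched as follows. This is a hedged illustration, not the paper's exact optimization: the fixed layer width, the ridge solve for the output matrix, and the improvement tolerance used as a stopping rule are all our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # 200 samples, 10 features
Y = np.sin(X @ rng.standard_normal((10, 3)))  # 3 regression targets

def solve_output(H, Y, lam=1e-2):
    """Ridge-regularized least squares for the output matrix."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

H, losses, width = X, [], 50
for layer in range(10):                        # cap on depth
    # new layer: fixed random weights followed by ReLU
    W = rng.standard_normal((H.shape[1], width)) / np.sqrt(H.shape[1])
    H = np.maximum(H @ W, 0.0)
    O = solve_output(H, Y)
    loss = np.mean((H @ O - Y) ** 2)
    if losses and losses[-1] - loss < 1e-4:    # growth no longer pays off
        break                                  # -> depth is "self-estimated"
    losses.append(loss)
print(f"estimated depth: {len(losses)} layers, final loss {losses[-1]:.3f}")
```

The point of the sketch is the control flow: the network keeps adding layers only while the training loss improves by more than a tolerance, so its size is an output of training rather than a fixed hyperparameter.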
no code implementations • 31 Mar 2018 • Ahmed Zaki, Saikat Chatterjee, Partha P. Mitra, Lars K. Rasmussen
We expect the local estimate at each node to improve quickly and converge, limiting the communication of estimates between nodes and reducing the processing time.
no code implementations • 8 Mar 2018 • Partha P. Mitra
This analysis is possible because SGD reduces to a stochastic linear system near an interpolating minimum of the loss function.
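For least squares this reduction can be made explicit in a small sketch (an assumed setting chosen for illustration, not the paper's full analysis): if `w*` interpolates the data, i.e. `y_i = a_i · w*` for every sample, then the SGD error `e_t = w_t - w*` evolves by the random linear map `e_{t+1} = (I - eta * a_i a_i^T) e_t`, so SGD is exactly a stochastic linear system at the interpolating minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 20, 100, 0.02
A = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = A @ w_star                    # interpolating minimum: zero loss at w*

w = np.zeros(d)
for t in range(5000):
    i = rng.integers(n)           # sample one data point
    # SGD step on the squared loss; since y[i] = A[i] @ w_star, this is
    # equivalent to e <- (I - eta * a_i a_i^T) e with e = w - w_star.
    w -= eta * (A[i] @ w - y[i]) * A[i]

err = np.linalg.norm(w - w_star)
print(f"||w - w*|| after 5000 SGD steps: {err:.2e}")
```

Because each update is linear in the error, the dynamics near the minimum are governed entirely by products of the random matrices `I - eta * a_i a_i^T`, which is what makes the closed-form analysis tractable.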
1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund
The developed network is expected to generalize well due to appropriate regularization and the use of random weights in its layers.
no code implementations • 22 Sep 2017 • Ahmed Zaki, Partha P. Mitra, Lars K. Rasmussen, Saikat Chatterjee
The algorithm is iterative and exchanges intermediate estimates of a sparse signal over a network.
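The exchange of intermediate sparse estimates over a network can be sketched with a distributed iterative-hard-thresholding variant. This is chosen purely for illustration: the paper's greedy algorithm has its own update rule, and the ring topology, step size, and sparsity level below are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, n_nodes = 50, 5, 25, 4   # 25 rows/node: no node can recover alone
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)

# each node observes the same k-sparse signal through its own matrix
A = [rng.standard_normal((m, d)) / np.sqrt(m) for _ in range(n_nodes)]
y = [Ai @ x_true for Ai in A]
ring = [((i - 1) % n_nodes, (i + 1) % n_nodes) for i in range(n_nodes)]

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    top = np.argsort(np.abs(v))[-k:]
    out[top] = v[top]
    return out

X = [np.zeros(d) for _ in range(n_nodes)]
for _ in range(300):
    # local gradient step on each node's own measurements
    stepped = [xi - 0.5 * Ai.T @ (Ai @ xi - yi)
               for Ai, yi, xi in zip(A, y, X)]
    # exchange with ring neighbours, average, re-impose sparsity
    X = [hard_threshold((stepped[i] + stepped[a] + stepped[b]) / 3, k)
         for i, (a, b) in enumerate(ring)]

err = max(np.linalg.norm(xi - x_true) for xi in X)
print(f"worst-node error: {err:.2e}")
```

The sketch shows the core mechanic described above: only intermediate estimates travel between neighbours, never raw measurements, yet the pooled information lets every node recover a signal that none of them could identify from its own data alone.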