Search Results for author: Chiranjib Bhattacharyya

Found 27 papers, 4 papers with code

Near-optimal sample complexity bounds for learning Latent $k$-polytopes and applications to Ad-Mixtures

no code implementations ICML 2020 Chiranjib Bhattacharyya, Ravindran Kannan

This is a corollary of the major contribution of the current paper: the first sample complexity upper bound for the problem (introduced in \cite{BK20}) of learning the vertices of a Latent $k$-Polytope in $\mathbb{R}^d$, given perturbed points from it.

Random Separating Hyperplane Theorem and Learning Polytopes

no code implementations21 Jul 2023 Chiranjib Bhattacharyya, Ravindran Kannan, Amit Kumar

Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of the classical separating hyperplane theorem for polytopes.

BNSynth: Bounded Boolean Functional Synthesis

1 code implementation15 Dec 2022 Ravi Raja, Stanly Samuel, Chiranjib Bhattacharyya, Deepak D'Souza, Aditya Kanade

In this paper, we introduce BNSynth, the first tool to solve the Boolean functional synthesis (BFS) problem under a given bound on the solution space.
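
As a rough illustration of the BFS problem itself (not of BNSynth's bounded-synthesis algorithm), the sketch below brute-forces a single-output Skolem function over a small number of Boolean inputs; the spec, variable names, and bound are all hypothetical.

```python
from itertools import product

# Toy illustration of Boolean functional synthesis (BFS): given a relational
# spec R(x, y) over Boolean inputs x and a single Boolean output y, find a
# Skolem function y = f(x) such that R(x, f(x)) holds for every x that admits
# some valid y. This brute-force enumeration only sketches the problem; it is
# not BNSynth's bounded-synthesis approach.

def synthesize(spec, n_inputs):
    xs = list(product([0, 1], repeat=n_inputs))
    # Enumerate all candidate truth tables for f: {0,1}^n -> {0,1}.
    for table in product([0, 1], repeat=len(xs)):
        f = dict(zip(xs, table))
        if all(spec(x, f[x]) for x in xs if any(spec(x, y) for y in (0, 1))):
            return f
    return None

# Hypothetical spec: the output must equal the XOR of the two inputs.
spec = lambda x, y: y == (x[0] ^ x[1])
print(synthesize(spec, n_inputs=2))
```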

Shot-frugal and Robust quantum kernel classifiers

no code implementations13 Oct 2022 Abhay Shastry, Abhijith Jayakumar, Apoorva Patel, Chiranjib Bhattacharyya

Quantum kernel methods are a candidate for quantum speed-ups in supervised machine learning.

Classification
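
Since fidelity-type quantum kernel entries are probabilities estimated from a finite number of measurement shots, a minimal classical simulation of that shot noise (assuming each entry is a Bernoulli mean; the matrix below is made up) shows why few shots perturb the kernel. This is only a toy, not the paper's robust classifier construction.

```python
import numpy as np

# Toy model of shot noise in quantum kernel estimation: for fidelity-type
# kernels the entry K[i, j] is a probability (e.g. of an all-zeros outcome),
# so a finite-shot estimate is a Bernoulli sample mean.
rng = np.random.default_rng(0)

def shot_estimate(K_true, n_shots):
    # Each entry is estimated from n_shots independent 0/1 outcomes.
    counts = rng.binomial(n_shots, K_true)
    return counts / n_shots

K_true = np.array([[1.0, 0.8], [0.8, 1.0]])   # hypothetical exact kernel
for shots in (10, 100, 10_000):
    K_hat = shot_estimate(K_true, shots)
    print(shots, np.abs(K_hat - K_true).max())
```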

Rawlsian Fair Adaptation of Deep Learning Classifiers

no code implementations31 May 2021 Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya

Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score or a linear threshold classifier on the given feature representation that achieves the Rawls error rate restricted to this hypothesis class.

Fairness
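
A minimal sketch of the minimax ("Rawlsian") flavour of this problem, assuming Gaussian score distributions per group and treating only per-group means and variances as known; the statistics and names below are hypothetical and this is not the paper's adaptation procedure.

```python
import numpy as np
from scipy.stats import norm

# Pick a single threshold on a score so as to minimize the worst error rate
# over sensitive groups, using only each group's label rate and the
# mean/variance of the score, under a Gaussian approximation.

def group_error(theta, pos_frac, mu_pos, sd_pos, mu_neg, sd_neg):
    fnr = norm.cdf(theta, loc=mu_pos, scale=sd_pos)        # positives below threshold
    fpr = 1.0 - norm.cdf(theta, loc=mu_neg, scale=sd_neg)  # negatives above threshold
    return pos_frac * fnr + (1 - pos_frac) * fpr

# Hypothetical second-order statistics for two sensitive groups.
groups = [
    dict(pos_frac=0.5, mu_pos=1.0, sd_pos=1.0, mu_neg=-1.0, sd_neg=1.0),
    dict(pos_frac=0.3, mu_pos=0.5, sd_pos=1.2, mu_neg=-0.5, sd_neg=0.8),
]
thresholds = np.linspace(-3, 3, 601)
worst = [max(group_error(t, **g) for g in groups) for t in thresholds]
print("minimax threshold:", thresholds[int(np.argmin(worst))])
```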

DSLR: Dynamic to Static LiDAR Scan Reconstruction Using Adversarially Trained Autoencoder

1 code implementation26 May 2021 Prashant Kumar, Sabyasachi Sahoo, Vanshil Shah, Vineetha Kondameedi, Abhinav Jain, Akshaj Verma, Chiranjib Bhattacharyya, Vinay Viswanathan

We show that DSLR, unlike the existing baselines, is a practically viable model with its reconstruction quality within the tolerable limits for tasks pertaining to autonomous navigation like SLAM in dynamic environments.

Autonomous Navigation, Unsupervised Domain Adaptation

Learning a Latent Simplex in Input-Sparsity Time

no code implementations17 May 2021 Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David P. Woodruff, Samson Zhou

We consider the problem of learning a latent $k$-vertex simplex $K\subset\mathbb{R}^d$, given access to $A\in\mathbb{R}^{d\times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$).

Topic Models
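
A quick way to generate synthetic data consistent with this model is to perturb convex combinations of $k$ latent vertices; the sketch below (dimensions and noise level are arbitrary) produces such a matrix $A$.

```python
import numpy as np

# Synthetic instance of the latent k-vertex simplex model: each data point is
# a convex combination of k latent vertices plus noise, so A = V @ W + E with
# the columns of W lying in the probability simplex.
rng = np.random.default_rng(1)
d, k, n, sigma = 50, 5, 1000, 0.05

V = rng.normal(size=(d, k))                  # latent vertices (columns)
W = rng.dirichlet(np.ones(k), size=n).T      # k x n, columns on the simplex
A = V @ W + sigma * rng.normal(size=(d, n))  # observed, perturbed points
print(A.shape)
```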

Learning a Latent Simplex in Input Sparsity Time

no code implementations ICLR 2021 Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David Woodruff, Samson Zhou

Bhattacharyya and Kannan (SODA 2020) give an algorithm for learning such a $k$-vertex latent simplex in time roughly $O(k\cdot\text{nnz}(\mathbf{A}))$, where $\text{nnz}(\mathbf{A})$ is the number of non-zeros in $\mathbf{A}$.

Clustering, Topic Models

Algorithms for finding $k$ in $k$-means

no code implementations8 Dec 2020 Chiranjib Bhattacharyya, Ravindran Kannan, Amit Kumar

Two challenges remain open: (i) is there a data-determined definition of $k$ that is provably correct, and (ii) is there a polynomial-time algorithm to find $k$ from the data?

Clustering
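
For contrast, a common but non-provable heuristic for choosing $k$ is the silhouette score; the sketch below uses scikit-learn on synthetic blobs and is shown only to make the question concrete, not as the paper's data-determined definition of $k$.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Pick k by maximizing the silhouette score over a range of candidates.
# This heuristic carries no correctness guarantee of the kind the paper asks for.
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
scores = {
    k: silhouette_score(X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
    for k in range(2, 9)
}
print(max(scores, key=scores.get), scores)
```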

Analysis of Knowledge Transfer in Kernel Regime

no code implementations30 Mar 2020 Arman Rahbar, Ashkan Panahi, Chiranjib Bhattacharyya, Devdatt Dubhashi, Morteza Haghir Chehreghani

Knowledge transfer has been shown to be a very successful technique for training neural classifiers: together with the ground-truth data, it uses the "privileged information" (PI) obtained from a "teacher" network to train a "student" network.

Knowledge Distillation, Transfer Learning
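
The generic knowledge-distillation recipe behind this setup combines a hard-label loss with a temperature-softened KL term against the teacher's outputs; a minimal PyTorch sketch is given below (hyperparameters are illustrative, and this is not the paper's kernel-regime analysis).

```python
import torch
import torch.nn.functional as F

# Standard distillation loss: cross-entropy on ground-truth labels plus a
# temperature-scaled KL divergence between student and teacher distributions.
def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

s = torch.randn(8, 10, requires_grad=True)   # student logits (toy)
t = torch.randn(8, 10)                       # teacher logits (toy)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```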

Word2Sense: Sparse Interpretable Word Embeddings

no code implementations ACL 2019 Abhishek Panigrahi, Harsha Vardhan Simhadri, Chiranjib Bhattacharyya

We present an unsupervised method to generate Word2Sense word embeddings that are interpretable: each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word.

Word Embeddings, Word Similarity
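
Interpretability here comes from reading off the largest coordinates of a sparse, non-negative vector; the toy below uses a made-up sense inventory and embedding for the word "bank" purely for illustration.

```python
import numpy as np

# Each coordinate is the relevance of one "sense", so the largest entries of a
# word's vector name its dominant senses. Word2Sense learns such vectors from
# corpus co-occurrences; the values here are invented.
senses = ["finance", "river", "seating", "music", "sports"]
bank = np.array([0.62, 0.31, 0.0, 0.0, 0.07])   # hypothetical embedding of "bank"

top = np.argsort(bank)[::-1][:2]
print([(senses[i], float(bank[i])) for i in top])
```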

Finding a latent k-simplex in O(k . nnz(data)) time via Subset Smoothing

no code implementations14 Apr 2019 Chiranjib Bhattacharyya, Ravindran Kannan

In this paper we show that a large class of latent variable models, such as Mixed Membership Stochastic Block (MMSB) Models, Topic Models, and Adversarial Clustering, can be unified through a geometric perspective, replacing model-specific assumptions and algorithms for individual models.

Clustering, Community Detection, +1

How Many Pairwise Preferences Do We Need to Rank A Graph Consistently?

no code implementations6 Nov 2018 Aadirupa Saha, Rakesh Shivanna, Chiranjib Bhattacharyya

Our proposed algorithm, {\it Pref-Rank}, predicts the underlying ranking using an SVM-based approach over the chosen embedding of the product graph, and is the first to provide \emph{statistical consistency} on two ranking losses: \emph{Kendall's tau} and \emph{Spearman's footrule}, with a required sample complexity of $O\big((n^2 \chi(\bar{G}))^{\frac{2}{3}}\big)$ pairs, $\chi(\bar{G})$ being the \emph{chromatic number} of the complement graph $\bar{G}$.
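
For reference, the two ranking losses can be computed directly; the sketch below evaluates Kendall's tau (via SciPy) and Spearman's footrule on a toy pair of rankings and is independent of the Pref-Rank algorithm itself.

```python
import numpy as np
from scipy.stats import kendalltau

# Spearman's footrule is the sum of absolute rank displacements; kendalltau
# here is the correlation form of Kendall's tau on the two rank vectors.
true_rank = np.array([0, 1, 2, 3, 4])
pred_rank = np.array([1, 0, 2, 4, 3])

footrule = np.abs(true_rank - pred_rank).sum()
tau, _ = kendalltau(true_rank, pred_rank)
print("footrule:", footrule, "kendall tau:", tau)
```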

Using Inherent Structures to design Lean 2-layer RBMs

no code implementations ICML 2018 Abhishek Bansal, Abhinav Anand, Chiranjib Bhattacharyya

The representational power of Restricted Boltzmann Machines (RBMs) with multiple layers is ill-understood and an area of active research.

Clustering by Sum of Norms: Stochastic Incremental Algorithm, Convergence and Cluster Recovery

no code implementations ICML 2017 Ashkan Panahi, Devdatt Dubhashi, Fredrik D. Johansson, Chiranjib Bhattacharyya

Standard clustering methods such as K-means, Gaussian mixture models, and hierarchical clustering are beset by local minima, which are sometimes drastically suboptimal.

Clustering
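
The sum-of-norms (convex) clustering objective this line of work studies is usually written as

$$\min_{u_1,\dots,u_n}\; \frac{1}{2}\sum_{i=1}^{n} \lVert x_i - u_i \rVert_2^2 \;+\; \lambda \sum_{i<j} \lVert u_i - u_j \rVert_2,$$

where each point $x_i$ has its own centroid $u_i$ and the fusion penalty forces centroids of nearby points to coincide; clusters are read off from equal $u_i$'s.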

Spectral Norm Regularization of Orthonormal Representations for Graph Transduction

no code implementations NeurIPS 2015 Rakesh Shivanna, Bibaswan K. Chatterjee, Raman Sankaran, Chiranjib Bhattacharyya, Francis Bach

We propose an alternative PAC-based bound, which does not depend on the VC dimension of the underlying function class but is related to the famous Lovász $\vartheta$ function.
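
For reference, the classical Lovász $\vartheta$ function is defined through orthonormal representations: unit vectors $u_i$ with $u_i^{\top} u_j = 0$ for every pair of non-adjacent vertices, together with a unit handle $c$, giving

$$\vartheta(G) \;=\; \min_{c,\,\{u_i\}} \;\max_{i \in V} \;\frac{1}{(c^{\top} u_i)^2}.$$

The precise way the proposed bound depends on this quantity is spelled out in the paper.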

Learning on graphs using Orthonormal Representation is Statistically Consistent

no code implementations NeurIPS 2014 Rakesh Shivanna, Chiranjib Bhattacharyya

This, for the first time, relates labelled sample complexity to graph connectivity properties, such as the density of graphs.

A provable SVD-based algorithm for learning topics in dominant admixture corpus

no code implementations NeurIPS 2014 Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan

Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded $l_1$ error.

Topic Models
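
A generic truncated-SVD step on a (sparse) term-document matrix, of the kind such algorithms build on, looks as follows; the paper's provable procedure adds model-specific thresholding and post-processing that are not reproduced here, and the data below are random.

```python
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Rank-k truncated SVD of a random sparse "term-document" matrix.
A = sparse_random(2000, 500, density=0.01, format="csr", random_state=0)  # terms x documents
k = 20
U, S, Vt = svds(A, k=k)
print(U.shape, S.shape, Vt.shape)
```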

Mining Block I/O Traces for Cache Preloading with Sparse Temporal Non-parametric Mixture of Multivariate Poisson

no code implementations13 Oct 2014 Lavanya Sita Tekumalla, Chiranjib Bhattacharyya

Our first contribution addresses this gap by proposing a DP-based mixture model of Multivariate Poisson (DP-MMVP) and its temporal extension (HMM-DP-MMVP) that captures the full covariance structure of multivariate count data.

Clustering
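
A minimal way to see what correlated multivariate counts look like is the classical common-shock construction of a multivariate Poisson, sketched below; the DP-MMVP and HMM-DP-MMVP models in the paper are considerably richer.

```python
import numpy as np

# Common-shock multivariate Poisson: each coordinate adds a shared Poisson
# component, which induces a positive covariance Cov(X1, X2) = lam_shared.
rng = np.random.default_rng(3)
lam_shared, lam1, lam2, n = 2.0, 1.0, 3.0, 100_000

shared = rng.poisson(lam_shared, n)
x1 = shared + rng.poisson(lam1, n)
x2 = shared + rng.poisson(lam2, n)
print(np.cov(x1, x2)[0, 1])   # close to lam_shared
```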

Controlled Sparsity Kernel Learning

no code implementations31 Dec 2013 Dinesh Govindaraj, Raman Sankaran, Sreedal Menon, Chiranjib Bhattacharyya

The CSKL formulation introduces a parameter t which directly corresponds to the number of kernels selected.

Object Categorization
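
A hedged sketch of what "selecting exactly t kernels" can look like: score base kernels by kernel-target alignment, keep the top t, and combine them uniformly. This is only an illustration with random data; CSKL controls sparsity through its parameter t inside an MKL formulation, which is not reproduced here.

```python
import numpy as np

# Select t of m base kernels by alignment with the ideal target kernel yy^T.
rng = np.random.default_rng(4)
n, m, t = 60, 6, 2
y = rng.choice([-1.0, 1.0], size=n)
Ks = [X @ X.T for X in (rng.normal(size=(n, 5)) for _ in range(m))]

def alignment(K, y):
    Y = np.outer(y, y)                      # ideal "target" kernel
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

scores = np.array([alignment(K, y) for K in Ks])
keep = np.argsort(scores)[-t:]              # indices of the t best-aligned kernels
K_combined = sum(Ks[i] for i in keep) / t
print(sorted(keep.tolist()), K_combined.shape)
```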

The Lovász ϑ function, SVMs and finding large dense subgraphs

no code implementations NeurIPS 2012 Vinay Jethava, Anders Martinsson, Chiranjib Bhattacharyya, Devdatt Dubhashi

We show that the random graph with a planted clique is an example of an $SVM-\theta$ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state-of-the-art.

Combinatorial Optimization
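
A planted-clique instance of the kind referred to here is easy to generate: start from $G(n, 1/2)$ and force a random $k$-subset to be a clique. The sketch below builds such an adjacency matrix (sizes are arbitrary); the SVM-$\theta$ machinery itself is not reproduced.

```python
import numpy as np

# G(n, 1/2) adjacency matrix with a planted clique on a random k-subset.
rng = np.random.default_rng(5)
n, k = 200, 30

A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T                      # symmetric, zero diagonal
clique = rng.choice(n, size=k, replace=False)
A[np.ix_(clique, clique)] = 1                       # plant the clique
np.fill_diagonal(A, 0)
print(sorted(clique)[:5], A[np.ix_(clique, clique)].sum() == k * (k - 1))
```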

Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions

no code implementations NeurIPS 2010 Achintya Kundu, Vikram Tankasali, Chiranjib Bhattacharyya, Aharon Ben-Tal

We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the $m > 1$ case.
