no code implementations • 18 Apr 2023 • Madhav Khirwar, Karthik S. Gurumoorthy, Ankit Ajit Jain, Shantala Manchenahally
In this work, we present a system with a custom GPU-parallelized environment that consists of one warehouse and multiple stores, a novel architecture for agent-environment dynamics incorporating enhanced state and action spaces, and a shared reward specification that seeks to optimize for a large retailer's supply chain needs.
no code implementations • 18 Apr 2023 • Pranava Singhal, Waqar Mirza, Ajit Rajwade, Karthik S. Gurumoorthy
In this paper, we describe a method for estimating the joint probability density from data samples by assuming that the underlying distribution can be decomposed as a mixture of product densities with few mixture components.
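Such a decomposition writes the joint PMF as $P(x_1,\dots,x_D) = \sum_r w_r \prod_d p_{r,d}(x_d)$. A minimal sketch of evaluating a PMF stored in this form (the function name and data layout are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mixture_of_products_pmf(x, weights, factors):
    """Evaluate a joint PMF decomposed as a mixture of product densities:
    P(x_1, ..., x_D) = sum_r w_r * prod_d p_{r,d}(x_d).

    weights: (R,) mixture weights summing to 1
    factors: list of R lists, each holding D one-dimensional PMFs
    x:       length-D tuple of integer outcomes
    """
    total = 0.0
    for w, comps in zip(weights, factors):
        prod = w
        for d, pmf in enumerate(comps):
            prod *= pmf[x[d]]  # product over dimensions within one component
        total += prod          # weighted sum over mixture components
    return total

# Toy example: D = 2 binary variables, R = 2 mixture components.
weights = np.array([0.6, 0.4])
factors = [
    [np.array([0.9, 0.1]), np.array([0.2, 0.8])],  # component 1 factors
    [np.array([0.3, 0.7]), np.array([0.5, 0.5])],  # component 2 factors
]

# The represented joint sums to 1 over all outcomes.
total = sum(mixture_of_products_pmf((i, j), weights, factors)
            for i in (0, 1) for j in (0, 1))
```

The appeal of the representation is storage: a full joint over $D$ variables needs exponentially many entries, while the mixture form needs only $R \times D$ one-dimensional factors.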
no code implementations • 13 Apr 2023 • Sheel Shah, Karthik S. Gurumoorthy, Ajit Rajwade
More recently, it has been proved that one can reconstruct a 1D band-limited signal even when the exact sample locations are unknown, given only the distribution of the sample locations and their ordering in 1D.
no code implementations • 3 Mar 2022 • Shaan ul Haque, Ajit Rajwade, Karthik S. Gurumoorthy
We create a dictionary of various families of distributions by inspecting the data, and use it to approximate each decomposed factor of the product in the mixture.
no code implementations • 9 Feb 2022 • Karthik S. Gurumoorthy, Abhiraj Hinge
In this paper, we propose a solution for the single-count shipment containing one product per box in two steps: (i) reduce it to a clustering problem in the $3$-dimensional space of length, width and height, where each cluster corresponds to the group of products that will be shipped in a particular size variant, and (ii) present an efficient forward-backward decision tree based clustering method with low computational complexity in $N$ and $K$ to obtain these $K$ clusters and corresponding box dimensions.
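The reduction in step (i) can be illustrated with plain k-means in place of the paper's forward-backward decision tree method (this is a simplified stand-in, not their algorithm); the key point is that each size variant must be the coordinate-wise maximum of its cluster so every product assigned to it fits:

```python
import numpy as np

def box_size_variants(dims, K, iters=50, seed=0):
    """Illustrative stand-in for the clustering step: group products by
    (length, width, height) with Lloyd's k-means (NOT the paper's
    forward-backward decision tree), then set each size variant to the
    coordinate-wise max of its cluster so all assigned products fit.

    dims: (N, 3) array of product dimensions.
    Returns a (K, 3) array of box dimensions and the (N,) assignment.
    """
    rng = np.random.default_rng(seed)
    centers = dims[rng.choice(len(dims), K, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each product to its nearest center.
        labels = np.argmin(((dims[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = dims[labels == k].mean(axis=0)
    # A size variant must accommodate the largest product assigned to it.
    boxes = np.stack([dims[labels == k].max(axis=0) if np.any(labels == k)
                      else centers[k] for k in range(K)])
    return boxes, labels
```

The coordinate-wise max guarantees feasibility of every assignment; minimizing wasted volume is what the choice of clustering drives.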
no code implementations • 21 Jan 2022 • Naveen Nair, Karthik S. Gurumoorthy, Dinesh Mandalapu
We develop a Causal-Deep Neural Network (CDNN) model trained in two stages to infer causal impact estimates at an individual unit level.
no code implementations • 22 Mar 2021 • Jian Vora, Karthik S. Gurumoorthy, Ajit Rajwade
Joint probability mass function (PMF) estimation is a fundamental machine learning problem.
no code implementations • 18 Mar 2021 • Karthik S. Gurumoorthy, Pratik Jawanpuria, Bamdev Mishra
In this work, we develop an optimal transport (OT) based framework to select informative prototypical examples that best represent a given target dataset.
no code implementations • 5 Jun 2020 • Karthik S. Gurumoorthy, Subhajit Sanyal, Vineet Chaoji
Multiple product attributes like dimensions, weight, fragility, liquid content, etc.
no code implementations • 21 Jul 2018 • Karthik S. Gurumoorthy, Amit Dhurandhar
In this paper, we show that if the optimization function is restricted-strongly-convex (RSC) and restricted-smooth (RSM) -- a rich subclass of weakly submodular functions -- then a streaming algorithm with constant factor approximation guarantee is possible.
1 code implementation • 5 Jul 2017 • Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, Charu Aggarwal
Prototypical examples that best summarize and compactly represent an underlying complex data distribution communicate meaningful insights to humans in domains where simple explanations are hard to extract.
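A minimal sketch of MMD-based greedy prototype selection, in the spirit of this line of work (the paper's ProtoDash additionally learns non-negative importance weights for the prototypes; the function below is an illustrative simplification):

```python
import numpy as np

def greedy_mmd_prototypes(X, m, gamma=1.0):
    """Greedily pick m prototypes that reduce the (biased) squared MMD
    between the dataset and the prototype set under an RBF kernel.
    Simplified sketch; NOT the weighted ProtoDash algorithm itself.
    """
    n = len(X)
    # Precompute the RBF kernel matrix.
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    mean_k = K.mean(axis=1)  # average similarity of each point to the data
    chosen = []
    for _ in range(m):
        best, best_gain = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            S = chosen + [j]
            # Terms of MMD^2 that depend on the candidate prototype set S.
            cost = K[np.ix_(S, S)].mean() - 2 * mean_k[S].mean()
            if -cost > best_gain:
                best, best_gain = j, -cost
        chosen.append(best)
    return chosen
```

On well-separated clusters, the greedy objective naturally spreads the prototypes so each mode of the data is represented.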
no code implementations • 28 Feb 2015 • Subhajit Sengupta, Karthik S. Gurumoorthy, Arunava Banerjee
Spike Timing Dependent Plasticity (STDP) is a Hebbian-like synaptic learning rule.
no code implementations • 8 Mar 2014 • Karthik S. Gurumoorthy, Adrian M. Peter, Birmingham Hang Guan, Anand Rangarajan
In our framework, a solution to the eikonal equation is obtained as the limit, as Planck's constant $\hbar$ (treated as a free parameter) tends to zero, of the solution to the corresponding linear Schr\"odinger equation.
no code implementations • 13 Nov 2012 • Karthik S. Gurumoorthy, Anand Rangarajan, John Corring
We prove that the density function of the gradient of a sufficiently smooth function $S : \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$, obtained via a random variable transformation of a uniformly distributed random variable, is increasingly closely approximated by the normalized power spectrum of $\phi=\exp\left(\frac{iS}{\tau}\right)$ as the free parameter $\tau \rightarrow 0$.
no code implementations • 13 Dec 2011 • Karthik S. Gurumoorthy, Anand Rangarajan
In other words, when $S$ and $\phi$ are related by $\phi = \exp \left(-\frac{S}{\tau} \right)$ and $\phi$ satisfies a specific linear differential equation corresponding to the extremum of a variational problem, we obtain the approximate Euclidean distance function $S = -\tau \log(\phi)$ which converges to the true solution in the limit as $\tau \rightarrow 0$.
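The limiting behavior of $S = -\tau \log(\phi)$ can be illustrated numerically by building $\phi$ as a sum of exponentials of distances to a source set, which makes $S$ a log-sum-exp soft-min that converges to the true (here 1-D) distance as $\tau \rightarrow 0$. This is a simplified stand-in for the paper's construction, which obtains $\phi$ by solving a linear differential equation:

```python
import numpy as np

def approx_distance(x, sources, tau):
    """Approximate distance transform via S = -tau * log(phi), where
    phi(x) = sum_i exp(-|x - y_i| / tau). Computed in log-space
    (log-sum-exp) for numerical stability at small tau.
    """
    d = np.abs(x[:, None] - sources[None, :])   # pairwise 1-D distances
    d_min = d.min(axis=1, keepdims=True)
    # log phi = -d_min/tau + log sum exp((-d + d_min)/tau)
    log_phi = (-d_min[:, 0] / tau
               + np.log(np.exp((-d + d_min) / tau).sum(axis=1)))
    return -tau * log_phi                       # S = -tau * log(phi)
```

As $\tau$ shrinks, the largest term of the sum dominates and $S(x)$ approaches $\min_i |x - y_i|$, the exact distance to the nearest source.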