no code implementations • 20 Mar 2024 • Tucker Balch, Vamsi K. Potluru, Deepak Paramanand, Manuela Veloso
In addition to the benefits it provides, such as improved financial modeling and better testing procedures, synthetic data poses privacy risks as well.
no code implementations • 1 Jan 2024 • Yinan Cheng, Chi-Hua Wang, Vamsi K. Potluru, Tucker Balch, Guang Cheng
Devising procedures for downstream task-oriented generative model selection is an unresolved problem of practical importance.
no code implementations • 29 Dec 2023 • Vamsi K. Potluru, Daniel Borrajo, Andrea Coletta, Niccolò Dalmasso, Yousef El-Laham, Elizabeth Fons, Mohsen Ghassemi, Sriram Gopalakrishnan, Vikesh Gosai, Eleonora Kreačić, Ganapathy Mani, Saheed Obitayo, Deepak Paramanand, Natraj Raman, Mikhail Solonin, Srijan Sood, Svitlana Vyetrenko, Haibei Zhu, Manuela Veloso, Tucker Balch
Synthetic data has made tremendous strides in various commercial settings including finance, healthcare, and virtual reality.
no code implementations • 9 Nov 2023 • Zikai Xiong, Niccolò Dalmasso, Shubham Sharma, Freddy Lecue, Daniele Magazzeni, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
In this work, we present fair Wasserstein coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.
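The core idea of pairing representative samples with sample-level weights can be illustrated with a minimal sketch. This is not the FWC method itself (which optimizes a Wasserstein objective under fairness constraints); it is a toy stand-in that picks representatives at random and weights each by the number of original points it stands for. All names (`weighted_coreset`, `reps`) are illustrative:

```python
import numpy as np

def weighted_coreset(X, k, seed=0):
    # Toy coreset: choose k representative rows of X and weight each
    # one by how many original points are closest to it. The weights
    # can then be passed to any downstream learner that accepts
    # per-sample weights.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)
    reps = X[idx]
    # distance from every point to every representative
    d = np.linalg.norm(X[:, None, :] - reps[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    weights = np.bincount(assign, minlength=k).astype(float)
    return reps, weights

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(5, 1, (100, 2))])
reps, w = weighted_coreset(X, 10)
```

The weights sum to the original dataset size, so weighted empirical averages over the coreset approximate averages over the full data.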
no code implementations • 31 Oct 2023 • Zikai Xiong, Niccolò Dalmasso, Alan Mishler, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
FairWASP can therefore be used to construct datasets which can be fed into any classification method, not just methods which accept sample weights.
no code implementations • 24 Oct 2023 • Rongzhe Wei, Eleonora Kreačić, Haoyu Wang, Haoteng Yin, Eli Chien, Vamsi K. Potluru, Pan Li
Focusing on per-instance differential privacy (pDP), our framework elucidates the potential privacy leakage for each data point in a given training dataset, offering insights into data preprocessing to reduce privacy risks of the synthetic dataset generation via DDMs.
1 code implementation • 20 Oct 2023 • Mufei Li, Eleonora Kreačić, Vamsi K. Potluru, Pan Li
However, these models face challenges in generating large attributed graphs due to the complex attribute-structure correlations and the large size of these graphs.
no code implementations • 10 Sep 2023 • Fadi Hamad, Shinpei Nakamura-Sakai, Saheed Obitayo, Vamsi K. Potluru
Synthetic data generation has emerged as a crucial topic for financial institutions, driven by multiple factors, such as privacy protection and data augmentation.
no code implementations • 19 Jun 2023 • Eleonora Kreačić, Navid Nouri, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
Creation of a synthetic dataset that faithfully represents the data distribution and simultaneously preserves privacy is a major research challenge.
no code implementations • 12 Dec 2022 • Renbo Zhao, Niccolò Dalmasso, Mohsen Ghassemi, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
Hawkes processes have recently risen to the forefront of tools for modeling and generating sequential event data.
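The self-exciting behavior that makes Hawkes processes useful for event-sequence generation can be sketched with Ogata's thinning algorithm, assuming a univariate process with an exponential kernel. The parameter names `mu` (baseline rate), `alpha` (jump size), and `beta` (decay) are illustrative, not the paper's notation:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    # Ogata thinning for a univariate Hawkes process with intensity
    #   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    # Stationarity requires alpha / beta < 1.
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    while True:
        past = np.asarray(events)
        # intensity just after t only decays until the next event,
        # so it upper-bounds lambda on the interval to the next candidate
        lam_bar = mu + (alpha * np.exp(-beta * (t - past))).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return events
        lam_t = mu + (alpha * np.exp(-beta * (t - past))).sum()
        if rng.uniform() < lam_t / lam_bar:
            events.append(t)

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=50.0)
```

Each accepted event raises the intensity by `alpha`, which then decays at rate `beta`, producing the bursty clusters characteristic of self-exciting data.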
no code implementations • 16 Aug 2022 • Mohsen Ghassemi, Niccolò Dalmasso, Simran Lamba, Vamsi K. Potluru, Sameena Shah, Tucker Balch, Manuela Veloso
Online learning of Hawkes processes has received increasing attention in the last couple of years, especially for modeling a network of actors.
no code implementations • 27 Jul 2022 • Mohsen Ghassemi, Eleonora Kreačić, Niccolò Dalmasso, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
Hawkes processes have recently gained increasing attention from the machine learning community for their versatility in modeling event sequence data.
no code implementations • 8 Feb 2022 • Cenk Baykal, Vamsi K. Potluru, Sameena Shah, Manuela M. Veloso
Most existing work focuses on the monoplex setting, where we have access to a network with only a single type of connection between entities.
1 code implementation • 6 Jun 2021 • Junteng Jia, Cenk Baykal, Vamsi K. Potluru, Austin R. Benson
With the widespread availability of complex relational data, semi-supervised node classification in graphs has become a central machine learning problem.
no code implementations • 3 Nov 2020 • Daniel Borrajo, Sriram Gopalakrishnan, Vamsi K. Potluru
In this paper, we adapt state-of-the-art learning techniques to goal recognition, and compare model-based and model-free approaches in different domains.
no code implementations • 9 Apr 2020 • Robert E. Tillman, Vamsi K. Potluru, Jiahao Chen, Prashant Reddy, Manuela Veloso
Through experiments with simulated and real-world scientific collaboration, transportation, and global trade networks, we demonstrate that the performance of the proposed heuristics increases with the richness of the connection-type correlation structure, and that they significantly outperform their counterparts for ordinary networks with a single connection type.
no code implementations • 9 Dec 2019 • Riyasat Ohib, Nicolas Gillis, Niccolò Dalmasso, Sameena Shah, Vamsi K. Potluru, Sergey Plis
Instead, in our approach we set the sparsity level for the whole set explicitly and simultaneously project a group of vectors, with the sparsity level of each vector tuned automatically.
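A minimal sketch of the idea of one global sparsity budget for a set of vectors: keep the largest-magnitude entries across the whole group, so each vector's individual sparsity falls out automatically. This is a simplified stand-in, not the paper's projection operator; the function name and threshold rule are assumptions:

```python
import numpy as np

def group_sparse_project(V, total_nonzeros):
    # Zero out all but the total_nonzeros largest-magnitude entries
    # across the entire matrix V (rows = vectors in the group). One
    # global budget is set; per-row sparsity is determined implicitly.
    flat = np.abs(V).ravel()
    if total_nonzeros >= flat.size:
        return V.copy()
    # magnitude of the total_nonzeros-th largest entry
    thresh = np.partition(flat, -total_nonzeros)[-total_nonzeros]
    return np.where(np.abs(V) >= thresh, V, 0.0)
```

Note that ties at the threshold magnitude can keep slightly more entries than the budget; with continuous-valued data this occurs with probability zero.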
no code implementations • 3 Jun 2018 • Sumeet Katariya, Branislav Kveton, Zheng Wen, Vamsi K. Potluru
In many practical problems, a learning agent may want to learn the best action in hindsight without ever taking a bad action, that is, an action significantly worse than the default production action.
1 code implementation • 15 Jan 2013 • Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A. Pearlmutter, Vince D. Calhoun, Thomas P. Hayes
However, present algorithms designed for optimizing the mixed L1/L2 norm are slow, and other formulations for sparse NMF have been proposed, such as those based on L1 and L0 norms.
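For context, a common L1-penalized baseline for sparse NMF can be written in a few lines with multiplicative updates. This is not the paper's block-coordinate L1/L2 scheme, just an illustrative alternative formulation minimizing ||X - WH||_F^2 + lam * sum(H):

```python
import numpy as np

def sparse_nmf(X, r, lam=0.1, iters=200, seed=0):
    # Multiplicative-update NMF with an L1 penalty on H. The lam term
    # in H's denominator shrinks small entries of H toward zero,
    # encouraging sparse activations.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-10  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are multiplicative on nonnegative initializations, W and H stay elementwise nonnegative throughout.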