no code implementations • 29 Dec 2023 • Vamsi K. Potluru, Daniel Borrajo, Andrea Coletta, Niccolò Dalmasso, Yousef El-Laham, Elizabeth Fons, Mohsen Ghassemi, Sriram Gopalakrishnan, Vikesh Gosai, Eleonora Kreačić, Ganapathy Mani, Saheed Obitayo, Deepak Paramanand, Natraj Raman, Mikhail Solonin, Srijan Sood, Svitlana Vyetrenko, Haibei Zhu, Manuela Veloso, Tucker Balch
Synthetic data has made tremendous strides in various commercial settings including finance, healthcare, and virtual reality.
no code implementations • 28 Sep 2023 • Tom Bamford, Andrea Coletta, Elizabeth Fons, Sriram Gopalakrishnan, Svitlana Vyetrenko, Tucker Balch, Manuela Veloso
Moreover, the required storage, computational time, and retrieval complexity to search in the time-series space are often non-trivial.
no code implementations • 23 Aug 2023 • Haochen Wu, Shubham Sharma, Sunandita Patra, Sriram Gopalakrishnan
However, the uncertainties of feature changes and the risk of higher than average costs in recourse have not been considered.
no code implementations • 28 Feb 2023 • Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz
Learning to detect, characterize and accommodate novelties is a challenge that agents operating in open-world domains need to address to be able to guarantee satisfactory task performance.
2 code implementations • 11 Nov 2022 • Ayal Taitler, Michael Gimelfarb, Jihwan Jeong, Sriram Gopalakrishnan, Martin Mladenov, Xiaotian Liu, Scott Sanner
We present pyRDDLGym, a Python framework for auto-generation of OpenAI Gym environments from RDDL declarative descriptions.
1 code implementation • 15 Sep 2021 • Sriram Gopalakrishnan, Mudit Verma, Subbarao Kambhampati
We present a framework to model the human agent's behavior with respect to state uncertainty, which can be used to compute MDP policies that account for these problems.
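The paper's human-aware model is not reproduced here; as background for the "compute MDP policies" step, a minimal value-iteration sketch over a tabular MDP (the transition/reward encoding and the tiny example are illustrative assumptions, not the paper's formulation):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Compute optimal state values for a tabular MDP.

    P[s][a] -> list of (probability, next_state) pairs
    R[s][a] -> immediate reward for taking a in s
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup: best action value from s
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Illustrative two-state MDP: moving from A to B yields reward 1 (gamma = 0)
# so V[A] converges to 1.0 and V[B] to 0.0.
P = {'A': {'stay': [(1.0, 'A')], 'go': [(1.0, 'B')]},
     'B': {'stay': [(1.0, 'B')], 'go': [(1.0, 'B')]}}
R = {'A': {'stay': 0.0, 'go': 1.0},
     'B': {'stay': 0.0, 'go': 0.0}}
V = value_iteration(['A', 'B'], ['stay', 'go'], P, R)
```

A greedy policy read off from `V` (pick the action maximizing the backed-up value) is the standard way such a framework would turn state values into a policy.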
no code implementations • 9 Jul 2021 • Sriram Gopalakrishnan, Utkarsh Soni, Tung Thai, Panagiotis Lymperopoulos, Matthias Scheutz, Subbarao Kambhampati
The game of Monopoly is an adversarial multi-agent domain where there is no fixed goal other than to be the last player solvent; there are useful subgoals, like monopolizing sets of properties and developing them.
no code implementations • 17 Dec 2020 • Sumeru Hazra, Anirban Bhattacharjee, Madhavi Chand, Kishor V. Salunkhe, Sriram Gopalakrishnan, Meghan P. Patankar, R. Vijay
Qubit coherence and gate fidelity are typically considered the two most important metrics for characterizing a quantum processor.
Quantum Physics
no code implementations • 3 Nov 2020 • Daniel Borrajo, Sriram Gopalakrishnan, Vamsi K. Potluru
In this paper, we adapt state-of-the-art learning techniques to goal recognition, and compare model-based and model-free approaches in different domains.
no code implementations • 28 Oct 2020 • Sriram Gopalakrishnan, Subbarao Kambhampati
In situations where humans and robots move in the same space while performing their own tasks, predictable paths taken by mobile robots not only make the environment feel safer, but also let humans help with navigation by avoiding path conflicts or not blocking the way.
1 code implementation • 4 Jun 2020 • Sriram Gopalakrishnan, Liron Cohen, Sven Koenig, T. K. Satish Kumar
FastMap is an efficient embedding algorithm that facilitates a geometric interpretation of problems posed on undirected graphs.
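A minimal single-dimension sketch of FastMap-style embedding on an unweighted graph, using BFS shortest-path distances and a greedy double sweep to pick far-apart pivots (the pivot heuristic and single-coordinate form are a simplification; the full algorithm embeds multiple dimensions on residual distances):

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def fastmap_coordinate(adj, nodes):
    """One FastMap dimension: choose a far-apart pivot pair (a, b),
    then place each node i at c_i = (d(a,i) + d(a,b) - d(b,i)) / 2."""
    a = nodes[0]
    da = bfs_distances(adj, a)
    a = max(nodes, key=lambda n: da.get(n, 0))  # sweep 1: farthest from seed
    da = bfs_distances(adj, a)
    b = max(nodes, key=lambda n: da.get(n, 0))  # sweep 2: farthest from a
    db = bfs_distances(adj, b)
    dab = da[b]
    return {i: (da[i] + dab - db[i]) / 2 for i in nodes}
```

On a path graph 0-1-2-3, the resulting coordinates are collinear and their pairwise differences reproduce the shortest-path distances, which is the geometric interpretation the abstract refers to.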
no code implementations • 15 Feb 2020 • Sriram Gopalakrishnan, Utkarsh Soni
Learning a human's preferences improves the quality of interaction with them.
no code implementations • 24 Nov 2018 • Sriram Gopalakrishnan, Subbarao Kambhampati
TGE-viz allows users to visualize and criticize plans more intuitively for mixed-initiative planning.
no code implementations • 5 Dec 2017 • Yantian Zha, Yikang Li, Sriram Gopalakrishnan, Baoxin Li, Subbarao Kambhampati
The first involves resampling the distribution sequences into single action sequences, from which we learn an action affinity model based on learned action (word) embeddings for plan recognition.