1 code implementation • 1 Mar 2023 • Brittany Cates, Anagha Kulkarni, Sarath Sreedharan
In this paper, we propose a planning framework for generating a defense strategy against an attacker operating in an environment where the defender can act without the attacker's knowledge.
no code implementations • 2 May 2021 • Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati
This paper addresses the problem of synthesizing the behavior of an AI agent that provides proactive task assistance to a human in settings like factory floors where they may coexist in a common environment.
no code implementations • 21 Apr 2021 • Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati
Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation.
no code implementations • 22 Nov 2020 • Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati
Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation.
no code implementations • 2 Jul 2020 • Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati
Given structured environments (like warehouses and restaurants), it may be possible to design the environment so as to boost the interpretability of the robot's behavior or to shape the human's expectations of the robot's behavior.
no code implementations • 25 May 2019 • Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati
In order to be useful in the real world, AI agents need to plan and act in the presence of others, who may include adversarial and cooperative entities.
no code implementations • 23 Nov 2018 • Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, Subbarao Kambhampati
There has been significant recent interest in generating agent behavior that is interpretable to the human (observer) in the loop.
no code implementations • 16 Feb 2018 • Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati
With a slight variation of our framework, we present an approach for goal legibility in cooperative settings that produces plans achieving a goal while remaining consistent with at most j goals from a set of confounding goals.
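The "consistent with at most j goals" property above can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simplified STRIPS-style model in which actions are (add, delete) fact sets and a candidate goal counts as consistent with a plan if its facts hold in the plan's final state; the function and variable names here are hypothetical.

```python
# Illustrative sketch (hypothetical names, simplified semantics):
# checking whether a plan is consistent with at most j goals from
# a set of confounding candidate goals.

def apply(state, action):
    """Apply a STRIPS-style action given as a pair (add_facts, del_facts)."""
    add, delete = action
    return (state - delete) | add

def consistent_goals(initial_state, plan, candidate_goals):
    """Return the candidate goals whose facts hold in the plan's final state."""
    state = set(initial_state)
    for action in plan:
        state = apply(state, action)
    return [g for g in candidate_goals if g <= state]

def is_j_legible(initial_state, plan, candidate_goals, j):
    """True if the plan is consistent with at most j candidate goals."""
    return len(consistent_goals(initial_state, plan, candidate_goals)) <= j

# Example: a two-step plan that leaves two of three candidate goals
# indistinguishable to an observer of the final state.
initial = {"at_a"}
plan = [({"at_b"}, {"at_a"}),        # move from a to b
        ({"holding_key"}, set())]    # pick up the key
goals = [{"at_b"}, {"holding_key", "at_b"}, {"at_c"}]
```

In the full formulation, consistency is judged against observations of the plan as it unfolds rather than only its final state, but the counting check has the same shape.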
no code implementations • 16 Nov 2016 • Anagha Kulkarni, Yantian Zha, Tathagata Chakraborti, Satya Gautam Vadlamudi, Yu Zhang, Subbarao Kambhampati
In order to have effective human-AI collaboration, it is necessary to address how the AI agent's behavior is being perceived by the humans-in-the-loop.
no code implementations • 25 Nov 2015 • Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati
Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans.