no code implementations • 13 Dec 2023 • Huao Li, Yao Fan, Keyang Zheng, Michael Lewis, Katia Sycara
Our proposed approach is agnostic to both the task environment and the RL model structure, and therefore has the potential to generalize to a wide range of applications.
no code implementations • 9 Nov 2023 • Simon Stepputtis, Joseph Campbell, Yaqi Xie, Zhengyang Qi, Wenxin Sharon Zhang, Ruiyi Wang, Sanketh Rangreji, Michael Lewis, Katia Sycara
We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player's goal and motivation.
no code implementations • 16 Oct 2023 • Huao Li, Yu Quan Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Michael Lewis, Katia Sycara
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored.
no code implementations • 19 Jan 2022 • Seth Karten, Mycal Tucker, Huao Li, Siva Kailas, Michael Lewis, Katia Sycara
In human-agent teams tested in benchmark environments, where agents have been modeled using the Enforcers, we find that a prototype-based method produces meaningful discrete tokens that enable human partners to learn agent communication faster and better than a one-hot baseline.
no code implementations • NeurIPS 2021 • Mycal Tucker, Huao Li, Siddharth Agrawal, Dana Hughes, Katia Sycara, Michael Lewis, Julie Shah
Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone.
no code implementations • 7 Mar 2021 • Tianwei Ni, Huao Li, Siddharth Agrawal, Suhas Raja, Fan Jia, Yikang Gui, Dana Hughes, Michael Lewis, Katia Sycara
Previous human-human team research has shown complementary policies in the TSF game and diversity in human players' skills, which encourages us to relax the assumptions on human policy.
1 code implementation • 5 Dec 2020 • Madina Abdrakhmanova, Askat Kuzdeuov, Sheikh Jarju, Yerbolat Khassanov, Michael Lewis, Huseyin Atakan Varol
We present SpeakingFaces as a publicly-available large-scale multimodal dataset developed to support machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human-computer interaction, biometric authentication, recognition systems, domain transfer, and speech recognition.
no code implementations • 15 Nov 2020 • Vidhi Jain, Rohit Jena, Huao Li, Tejus Gupta, Dana Hughes, Michael Lewis, Katia Sycara
In our efforts to model the rescuer's mind, we begin with a simple simulated search and rescue task in Minecraft with human participants.
no code implementations • 4 Aug 2020 • Xinzhi Wang, Huao Li, Hui Zhang, Michael Lewis, Katia Sycara
The results show that the verbal explanations generated by both models improve users' subjective satisfaction with the interpretability of DRL systems.
1 code implementation • 17 Sep 2018 • Rahul Iyer, Yuezhang Li, Huao Li, Michael Lewis, Ramitha Sundar, Katia Sycara
For such systems to be accepted and trusted, users should be able to understand the reasoning process of the system, i.e., the system should be transparent.