Search Results for author: Mycal Tucker

Found 11 papers, 4 papers with code

An Information Bottleneck Characterization of the Understanding-Workload Tradeoff

1 code implementation • 11 Oct 2023 • Lindsay Sanneman, Mycal Tucker, Julie Shah

This empirical link between human factors and information-theoretic concepts provides an important mathematical characterization of the workload-understanding tradeoff which enables user-tailored XAI design.

Informativeness
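The entry above links human-factors measures to information-theoretic quantities. As background, the standard Information Bottleneck objective (a generic statement, not necessarily the paper's exact formulation) is:

```latex
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```

where $X$ is the source, $Z$ the compressed representation (e.g., an explanation shown to a user), $Y$ the task-relevant variable, and $\beta$ trades off compression (workload) against informativeness (understanding).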

Towards True Lossless Sparse Communication in Multi-Agent Systems

no code implementations • 30 Nov 2022 • Seth Karten, Mycal Tucker, Siva Kailas, Katia Sycara

We evaluate the learned communication "language" through direct causal analysis of messages in non-sparse runs to determine the range of lossless sparse budgets, which allow zero-shot sparsity, and the range of sparse budgets that incur a reward loss, which is minimized by our learned gating function with few-shot sparsity.

Representation Learning

Towards Human-Agent Communication via the Information Bottleneck Principle

no code implementations • 30 Jun 2022 • Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky

Emergent communication research often focuses on optimizing task-specific utility as a driver for communication.

Informativeness

Prototype Based Classification from Hierarchy to Fairness

1 code implementation • 27 May 2022 • Mycal Tucker, Julie Shah

Artificial neural nets can represent and classify many types of data but are often tailored to particular applications -- e.g., for "fair" or "hierarchical" classification.

Classification Fairness
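The entry above concerns classifying by comparison to learned class prototypes. A minimal NumPy sketch of generic prototype-based scoring (all names hypothetical; the paper's actual architecture may differ):

```python
import numpy as np

def prototype_classify(x, prototypes):
    """Score an input by distance to learned class prototypes.

    x: (d,) input encoding; prototypes: (k, d), one prototype per class.
    Returns a softmax over negative squared distances, so the class whose
    prototype is nearest gets the highest probability.
    """
    sq_dists = np.sum((prototypes - x) ** 2, axis=1)  # distance to each prototype
    logits = -sq_dists
    probs = np.exp(logits - logits.max())             # stable softmax
    return probs / probs.sum()

# toy example: two 2-D class prototypes
protos = np.array([[0.0, 0.0], [4.0, 4.0]])
p = prototype_classify(np.array([0.5, 0.2]), protos)  # nearest prototype is class 0
```

In prototype networks of this flavor, interpretability comes from inspecting the prototypes themselves, since each prediction reduces to "closest to prototype k".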

Probe-Based Interventions for Modifying Agent Behavior

no code implementations • 26 Jan 2022 • Mycal Tucker, William Kuhl, Khizer Shahid, Seth Karten, Katia Sycara, Julie Shah

Neural nets are powerful function approximators, but the behavior of a given neural net, once trained, cannot be easily modified.

Decision Making • Multi-agent Reinforcement Learning +2

Interpretable Learned Emergent Communication for Human-Agent Teams

no code implementations • 19 Jan 2022 • Seth Karten, Mycal Tucker, Huao Li, Siva Kailas, Michael Lewis, Katia Sycara

In human-agent teams tested in benchmark environments, where agents have been modeled using the Enforcers, we find that a prototype-based method produces meaningful discrete tokens that enable human partners to learn agent communication faster and better than a one-hot baseline.

Multi-agent Reinforcement Learning

Emergent Discrete Communication in Semantic Spaces

no code implementations • NeurIPS 2021 • Mycal Tucker, Huao Li, Siddharth Agrawal, Dana Hughes, Katia Sycara, Michael Lewis, Julie Shah

Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone.
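A common way to obtain discrete tokens from a continuous agent message is nearest-neighbor quantization against a learned codebook. A minimal sketch (hypothetical names; the paper's actual mechanism may differ):

```python
import numpy as np

def quantize_message(z, token_embeddings):
    """Map a continuous message z to the nearest discrete token.

    z: (d,) continuous message from the sender's network;
    token_embeddings: (k, d) learned codebook, one row per token.
    Only the integer token index crosses the channel; the receiver
    decodes it back to the corresponding embedding.
    """
    sq_dists = np.sum((token_embeddings - z) ** 2, axis=1)
    idx = int(np.argmin(sq_dists))        # discrete token sent over the channel
    return idx, token_embeddings[idx]     # receiver's decoded vector

# toy codebook of three 2-D token embeddings
tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, decoded = quantize_message(np.array([0.9, 0.1]), tokens)
```

Grounding tokens in a shared embedding space is what makes the resulting "language" inspectable by humans, since each token corresponds to a point in a semantic space.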

What if This Modified That? Syntactic Interventions via Counterfactual Embeddings

1 code implementation • 28 May 2021 • Mycal Tucker, Peng Qian, Roger Levy

Neural language models exhibit impressive performance on a variety of tasks, but their internal reasoning may be difficult to understand.

Counterfactual

Adversarially Guided Self-Play for Adopting Social Conventions

no code implementations • 16 Jan 2020 • Mycal Tucker, Yilun Zhou, Julie Shah

Robotic agents must adopt existing social conventions in order to be effective teammates.
