no code implementations • 1 Jun 2023 • Natalie Abreu, Nathan Vaska, Victoria Helus
We assess whether the method increases semantic alignment by measuring model performance on adversarially perturbed data, on the premise that an adversary should find it easier to switch one class to a similarly represented class.
no code implementations • 1 Jun 2023 • Nathan Vaska, Victoria Helus
The impressive advances and applications of large language and joint language-and-visual understanding models have led to an increased need for methods of probing their potential reasoning capabilities.
no code implementations • 21 Nov 2022 • Natalie Abreu, Nathan Vaska, Victoria Helus
Most robust training techniques aim to improve model accuracy on perturbed inputs; as an alternate form of robustness, we aim to reduce the severity of mistakes made by neural networks in challenging conditions.
1 code implementation • 17 Aug 2022 • Pradyumna Tambwekar, Lakshita Dodeja, Nathan Vaska, Wei Xu, Matthew Gombolay
Leveraging a game environment, we collect a dataset of over 1000 examples mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters in inferring strategic intent (i.e., goals and constraints) from language (p < 0.05).
no code implementations • 17 Mar 2022 • Nathan Vaska, Kevin Leahy, Victoria Helus
In this work, we leverage contextual awareness for the anomaly detection problem.