no code implementations • 25 May 2024 • Andrew Li, Xianle Feng, Siddhant Narang, Austin Peng, Tianle Cai, Raj Sanjay Shah, Sashank Varma
The overall goal is to evaluate whether humans and LLMs are aligned in their processing of garden-path sentences and in their lingering misinterpretations past the point of disambiguation, especially when extra-syntactic information (e.g., a comma delimiting a clause boundary) is present to guide processing.
25 May 2024 • Siddhartha K. Vemuri, Raj Sanjay Shah, Sashank Varma
How well do representations learned by ML models align with those of humans?
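One common way to quantify this kind of human–model alignment is representational similarity analysis (RSA): build a pairwise-similarity matrix over the same set of items for each system, then correlate the two matrices. The sketch below is an illustration of that general technique, not the method used in the paper; the function name and the choice of cosine similarity with a Pearson correlation are assumptions (Spearman rank correlation is also common in RSA).

```python
import numpy as np

def rsa_alignment(model_reprs, human_reprs):
    """Illustrative RSA score: correlate the upper triangles of the
    cosine-similarity matrices computed from each set of item
    representations. Both inputs are (n_items, n_features) arrays
    over the same n_items, in the same order.
    """
    def sim_matrix(x):
        x = np.asarray(x, dtype=float)
        x = x / np.linalg.norm(x, axis=1, keepdims=True)  # unit-normalize rows
        return x @ x.T                                    # cosine similarities

    n = len(model_reprs)
    iu = np.triu_indices(n, k=1)          # off-diagonal upper triangle only
    m = sim_matrix(model_reprs)[iu]
    h = sim_matrix(human_reprs)[iu]
    # Pearson correlation of the two flattened similarity profiles
    return float(np.corrcoef(m, h)[0, 1])
```

Identical representations yield a score of 1.0; unrelated ones hover near 0. In practice the "human" matrix often comes directly from behavioral similarity judgments rather than feature vectors, in which case the first `sim_matrix` step is skipped for that side.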
21 Mar 2024 • Alicja Chaszczewicz, Raj Sanjay Shah, Ryan Louie, Bruce A Arnow, Robert Kraut, Diyi Yang
We further design a self-improvement method on top of large language models to enhance the automatic generation of feedback.
18 Jan 2024 • Atith Gandhi, Raj Sanjay Shah, Vijay Marupudi, Sashank Varma
In addition, because our method is orthogonal to other approaches, future research can combine training in power-law environments with other continual learning mechanisms.
8 Nov 2023 • Khushi Bhardwaj, Raj Sanjay Shah, Sashank Varma
Pre-trained Large Language Models (LLMs) have shown success in a diverse set of language inference and understanding tasks.
18 May 2023 • Raj Sanjay Shah, Vijay Marupudi, Reba Koenen, Khushi Bhardwaj, Sashank Varma
This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number representations of LLMs and their cognitive plausibility.
15 May 2023 • Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, Diyi Yang
Millions of users come to online peer counseling platforms to seek support on diverse topics ranging from relationship stress to anxiety.
9 Nov 2022 • Raj Sanjay Shah, Faye Holt, Shirley Anugrah Hayati, Aastha Agarwal, Yi-Chia Wang, Robert E. Kraut, Diyi Yang
This work provides a deeper understanding of the use of motivational interviewing techniques on peer-to-peer counseling platforms and sheds light on how to build better training programs for volunteer counselors on online platforms.
31 Oct 2022 • Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain.