no code implementations • 17 Jan 2024 • Geetanjali Bihani, Julia Taylor Rayz
The advent of large language models (LLMs) has enabled significant performance gains in the field of natural language processing.
1 code implementation • 30 Apr 2023 • Geetanjali Bihani, Julia Taylor Rayz
Neural network-based decisions tend to be overconfident: their raw outcome probabilities do not align with the true decision probabilities.
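As a minimal illustration of this mismatch (not the paper's own method), the sketch below shows how temperature scaling, a common post-hoc calibration technique, softens overconfident softmax probabilities; the logits and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature > 1 softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Raw logits from a hypothetical classifier: the top class dominates.
logits = [4.0, 1.0, 0.5]
raw = softmax(logits)                           # overconfident raw probabilities
calibrated = softmax(logits, temperature=2.0)   # softened via temperature scaling

print(max(raw), max(calibrated))
```

With these values the top-class probability drops from roughly 0.93 to roughly 0.72, while the predicted class itself is unchanged, which is exactly the sense in which calibration adjusts confidence without altering decisions.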
no code implementations • 12 Mar 2022 • Geetanjali Bihani, Julia Taylor Rayz
With data privacy becoming more of a necessity than a luxury in today's digital world, research on more robust models of privacy preservation and information security is on the rise.
no code implementations • 5 Dec 2021 • Geetanjali Bihani
Contextual word representations generated by language models (LMs) learn spurious associations present in the training corpora.
no code implementations • NAACL (DeeLIO) 2021 • Geetanjali Bihani, Julia Taylor Rayz
Contextual word representation models have shown massive improvements on a multitude of NLP tasks, yet their word sense disambiguation capabilities remain poorly explained.
no code implementations • 22 Apr 2021 • Geetanjali Bihani, Julia Taylor Rayz
In this work, we propose a scheme to address the ambiguity in single-intent as well as multi-intent natural language utterances by creating degree memberships over fuzzified intent classes.
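One way to picture degree memberships over fuzzified intent classes is the toy sketch below: instead of a hard argmax, each intent class receives a membership degree in [0, 1], so a multi-intent utterance can belong to several classes at once. The scoring function, intent names, and threshold here are illustrative assumptions, not the scheme from the paper.

```python
def fuzzify(scores):
    """Turn raw intent scores into degree memberships in [0, 1] via min-max scaling.

    Each intent gets a membership degree rather than a hard label,
    so an ambiguous or multi-intent utterance can have high degrees
    for more than one class.
    """
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:  # degenerate case: all intents equally plausible
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

# Hypothetical raw scores for the utterance
# "book a flight and reserve a hotel" (class names are illustrative).
scores = {"book_flight": 0.9, "book_hotel": 0.85, "play_music": 0.05}
memberships = fuzzify(scores)
multi_intents = [k for k, d in memberships.items() if d >= 0.5]
print(multi_intents)
```

Both travel intents clear the 0.5 threshold while the unrelated intent does not, whereas a single argmax would have discarded the second intent entirely.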
no code implementations • 14 Dec 2020 • Geetanjali Bihani, Julia Taylor Rayz
Static word embeddings encode word associations, which are extensively utilized in downstream NLP tasks.
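The associations that static embeddings encode are typically read off as geometric proximity. The sketch below shows the standard cosine-similarity view with tiny hand-made 3-d vectors; the vectors are illustrative assumptions, not output from a real embedding model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d static embeddings (illustrative values, not a trained model).
emb = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

# Associated words lie close together in the embedding space.
print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words
```

Downstream tasks exploit exactly this geometry, which is also why unwanted associations baked into the vectors propagate into downstream behavior.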