Search Results for author: Geetanjali Bihani

Found 7 papers, 1 paper with code

Learning Shortcuts: On the Misleading Promise of NLU in Language Models

no code implementations • 17 Jan 2024 • Geetanjali Bihani, Julia Taylor Rayz

The advent of large language models (LLMs) has enabled significant performance gains in the field of natural language processing.

Natural Language Understanding

Calibration Error Estimation Using Fuzzy Binning

1 code implementation • 30 Apr 2023 • Geetanjali Bihani, Julia Taylor Rayz

Neural network-based decisions tend to be overconfident: their raw outcome probabilities do not align with the true decision probabilities.
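The mismatch between confidence and accuracy described above is commonly quantified with the Expected Calibration Error (ECE). The sketch below computes standard ECE with hard, equal-width bins; the paper's contribution is a fuzzy-binning variant, which is not reproduced here. The toy data and bin count are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE with hard, equal-width bins (not the paper's fuzzy variant)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()        # empirical accuracy in this bin
        conf = confidences[mask].mean()   # mean predicted confidence in this bin
        ece += mask.mean() * abs(acc - conf)  # weight by bin population
    return ece

# Toy example: two high-confidence correct predictions, two mid-confidence
# predictions of which one is wrong.
ece = expected_calibration_error([0.95, 0.95, 0.55, 0.55], [1, 1, 1, 0])
print(ece)
```

Hard binning makes ECE sensitive to bin boundaries; a sample at 0.599 and one at 0.601 land in different bins. Fuzzy binning, as the title suggests, softens exactly this boundary effect.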

On Information Hiding in Natural Language Systems

no code implementations • 12 Mar 2022 • Geetanjali Bihani, Julia Taylor Rayz

With data privacy becoming more of a necessity than a luxury in today's digital world, research on more robust models of privacy preservation and information security is on the rise.

Interpretable Privacy Preservation of Text Representations Using Vector Steganography

no code implementations • 5 Dec 2021 • Geetanjali Bihani

Contextual word representations generated by language models (LMs) learn spurious associations present in the training corpora.

Low Anisotropy Sense Retrofitting (LASeR): Towards Isotropic and Sense Enriched Representations

no code implementations • NAACL (DeeLIO) 2021 • Geetanjali Bihani, Julia Taylor Rayz

Contextual word representation models have shown massive improvements on a multitude of NLP tasks, yet their word sense disambiguation capabilities remain poorly explained.

Word Sense Disambiguation

Fuzzy Classification of Multi-intent Utterances

no code implementations • 22 Apr 2021 • Geetanjali Bihani, Julia Taylor Rayz

In this work, we propose a scheme to address the ambiguity in single-intent as well as multi-intent natural language utterances by creating degree memberships over fuzzified intent classes.
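Degree memberships over fuzzified intent classes can be illustrated with a minimal sketch: instead of forcing an utterance into a single intent, each intent receives a graded membership. The softmax-plus-threshold membership function and the intent labels below are assumptions for illustration only, not the scheme proposed in the paper.

```python
import numpy as np

# Hypothetical intent labels for illustration.
INTENTS = ["book_flight", "book_hotel", "cancel"]

def intent_memberships(scores, threshold=0.2):
    """Map raw per-intent scores to degrees of membership in [0, 1].

    A multi-intent utterance can then belong to several intents at once,
    each to a different degree. Softmax normalization is an illustrative
    choice; any monotone membership function could be substituted.
    """
    scores = np.asarray(scores, dtype=float)
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    mu = exp / exp.sum()                 # memberships sum to 1
    # Keep only intents whose membership clears the threshold.
    return {name: float(m) for name, m in zip(INTENTS, mu) if m >= threshold}

# "Book me a flight and a hotel": both booking intents score highly,
# so both survive the threshold with different degrees.
print(intent_memberships([2.0, 1.8, -1.0]))
```

The payoff of graded memberships is that ambiguity is preserved rather than resolved prematurely: downstream logic can act on all intents above the threshold instead of committing to a single argmax.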

Classification • General Classification • +2
