Search Results for author: Deval Pandya

Found 5 papers, 1 paper with code

FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?

no code implementations • 19 Jan 2024 • Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya

The rapid evolution of Large Language Models (LLMs) highlights the necessity for ethical considerations and data integrity in AI development, particularly emphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable) data principles.

Ethics • Management
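The abstract above centers on the FAIR (Findable, Accessible, Interoperable, Reusable) data principles. As a rough illustration only, here is a minimal, hypothetical Python sketch of a FAIR-style metadata record with a per-principle check; the field names and pass/fail rules are assumptions for illustration, not the assessment framework the paper develops.

```python
# A minimal, hypothetical sketch of a FAIR-style metadata record for an
# LLM training dataset. Fields and checks are illustrative assumptions,
# not the schema or criteria proposed in the paper.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    identifier: str   # persistent ID, e.g. a DOI   -> Findable
    access_url: str   # standard retrieval protocol -> Accessible
    data_format: str  # open, standard format       -> Interoperable
    license: str      # explicit usage license      -> Reusable
    provenance: str   # where the data came from    -> Reusable

def fair_check(record: DatasetRecord) -> dict:
    """Return a per-principle pass/fail summary for one record."""
    return {
        "Findable": record.identifier.startswith("doi:"),
        "Accessible": record.access_url.startswith("https://"),
        "Interoperable": record.data_format in {"jsonl", "parquet", "csv"},
        "Reusable": bool(record.license and record.provenance),
    }

record = DatasetRecord(
    identifier="doi:10.0000/example-llm-corpus",   # placeholder DOI
    access_url="https://example.org/datasets/llm-corpus",
    data_format="jsonl",
    license="CC-BY-4.0",
    provenance="web crawl, deduplicated and filtered",
)
print(fair_check(record))  # {'Findable': True, 'Accessible': True, ...}
```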

Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts

no code implementations • 14 Jul 2023 • Shaina Raza, Chen Ding, Deval Pandya

Discriminatory language and biases are often present in hate speech during conversations, which usually leads to negative impacts on targeted groups such as those defined by race, gender, and religion.
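The title describes a two-stage setup: a classifier flags a conversational turn as hate speech, and a prompted generative model then rewrites it. Below is a minimal, hypothetical Python sketch of that flow; the keyword-based classifier and the stubbed rewrite call are placeholders, not the paper's actual models or prompts.

```python
# A hypothetical sketch of a classify-then-debias pipeline. The lexicon
# classifier and the stubbed LLM call stand in for the fine-tuned
# classifier and prompted debiaser the paper describes.

DEBIAS_PROMPT = (
    "Rewrite the following message so it expresses the same intent "
    "without discriminatory or hateful language:\n\n{message}"
)

def classify_hate_speech(message: str) -> bool:
    """Placeholder classifier; a transformer model would go here."""
    flagged_terms = {"hate", "slur"}  # toy lexicon for illustration
    return any(term in message.lower() for term in flagged_terms)

def debias(message: str) -> str:
    """Placeholder for a prompted LLM call that rewrites flagged text."""
    prompt = DEBIAS_PROMPT.format(message=message)
    # e.g. response = llm.generate(prompt)  # hypothetical LLM client
    return f"[rewritten via prompt: {prompt[:40]}...]"

def moderate(message: str) -> str:
    return debias(message) if classify_hate_speech(message) else message

print(moderate("I hate that group"))   # flagged turn gets rewritten
print(moderate("Nice weather today"))  # clean turn passes through
```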

Soft-prompt Tuning for Large Language Models to Evaluate Bias

no code implementations • 7 Jun 2023 • Jacob-Junqi Tian, David Emerson, Sevil Zanjani Miyandoab, Deval Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak

In this paper, we explore the use of soft-prompt tuning on a sentiment classification task to quantify the biases of large language models (LLMs) such as Open Pre-trained Transformers (OPT) and the Galactica language model.

Fairness • Language Modelling • +2
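Soft-prompt tuning itself is a standard technique: learnable prompt vectors are prepended to a frozen model's token embeddings and are the only parameters updated during training. The minimal PyTorch sketch below shows the mechanism on a toy classifier; the tiny transformer stands in for OPT/Galactica and is an assumption, not the paper's setup.

```python
# A minimal sketch of soft-prompt tuning: only the soft prompt is trained,
# everything else is frozen. The toy model is a stand-in, not OPT/Galactica.
import torch
import torch.nn as nn

vocab, d_model, n_prompt, n_classes = 100, 32, 8, 2

embed = nn.Embedding(vocab, d_model)            # frozen base embeddings
encoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, n_classes)            # frozen classifier head
for module in (embed, encoder, head):
    for p in module.parameters():
        p.requires_grad = False

# The only trainable parameters: the soft prompt.
soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def forward(token_ids: torch.Tensor) -> torch.Tensor:
    x = embed(token_ids)                                    # (B, T, D)
    prompt = soft_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
    x = torch.cat([prompt, x], dim=1)                       # prepend prompt
    h = encoder(x)
    return head(h.mean(dim=1))                              # sentiment logits

tokens = torch.randint(0, vocab, (4, 12))   # toy batch
labels = torch.randint(0, n_classes, (4,))
loss = nn.functional.cross_entropy(forward(tokens), labels)
loss.backward()
optimizer.step()  # updates soft_prompt only; the base model never moves
```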

MLHOps: Machine Learning for Healthcare Operations

no code implementations • 4 May 2023 • Faiza Khan Khattak, Vallijah Subasri, Amrit Krishnan, Elham Dolatabadi, Deval Pandya, Laleh Seyyed-Kalantari, Frank Rudzicz

We cover the foundational concepts of general machine learning operations and describe the initial setup of MLHOps pipelines (including data sources, preparation, engineering, and tools).

Fairness
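For the pipeline stages the abstract enumerates (data sourcing, preparation, and feature engineering), a minimal, hypothetical sketch of a staged pipeline might look like the following; the stage names, record format, and toy health data are illustrative assumptions, not content from the survey.

```python
# A hypothetical skeleton of staged MLHOps data processing: each stage is
# a function from records to records, composed in order. All details here
# are illustrative assumptions.
from typing import Callable

Record = dict
Stage = Callable[[list[Record]], list[Record]]

def source(_: list[Record]) -> list[Record]:
    """Pull raw records, e.g. from an EHR export (stubbed here)."""
    return [{"age": 64, "hr": 88, "label": 1},
            {"age": 51, "hr": None, "label": 0}]

def prepare(records: list[Record]) -> list[Record]:
    """Clean the data: drop records with missing vitals."""
    return [r for r in records if r["hr"] is not None]

def engineer(records: list[Record]) -> list[Record]:
    """Derive features, e.g. a simple age bucket."""
    return [{**r, "age_over_60": r["age"] > 60} for r in records]

def run(stages: list[Stage]) -> list[Record]:
    data: list[Record] = []
    for stage in stages:
        data = stage(data)
    return data

print(run([source, prepare, engineer]))
```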
