Search Results for author: Krishnaram Kenthapadi

Found 36 papers, 13 papers with code

Measuring Distributional Shifts in Text: The Advantage of Language Model-Based Embeddings

no code implementations • 4 Dec 2023 • Gyandev Gupta, Bashir Rastegarpanah, Amalendu Iyer, Joshua Rubin, Krishnaram Kenthapadi

Then we study the effectiveness of our approach when applied to text embeddings generated by both LLMs and classical embedding algorithms.

Language Modelling
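
As a rough illustration of the idea in the excerpt (not the paper's method), the sketch below compares a reference and a production text sample via maximum mean discrepancy over their embeddings. The `embed` helper is a deliberately crude hashed bag-of-words stand-in for an LLM-based or classical embedding model; all names and data are illustrative.

```python
import numpy as np

def embed(texts, dim=64):
    """Crude stand-in embedder (hashed bag-of-words) -- NOT a language model."""
    out = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            rng = np.random.default_rng(abs(hash(tok)) % 2**32)  # per-token vector
            out[i] += rng.normal(size=dim)
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-9)

def mmd2_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2).mean()
    return k(X, X) + k(Y, Y) - 2 * k(X, Y)

reference = ["the cat sat on the mat", "dogs bark at night"]
production = ["quarterly revenue grew", "the merger closed today"]
drift = mmd2_rbf(embed(reference), embed(production))
print(f"drift score (MMD^2): {drift:.4f}")  # larger => more distributional shift
```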

Designing Closed-Loop Models for Task Allocation

1 code implementation • 31 May 2023 • Vijay Keswani, L. Elisa Celis, Krishnaram Kenthapadi, Matthew Lease

Instead, we find ourselves in a "closed" decision-making loop in which the same fallible human decisions we rely on in practice must also be used to guide task allocation.

Decision Making

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

no code implementations • 6 Jul 2022 • Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski

In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics.

Explainable Artificial Intelligence (XAI)

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases

no code implementations • 25 Jun 2022 • David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade, Krishnaram Kenthapadi, Duen Horng Chau

Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data.
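For concreteness, here is a minimal sketch of slice-based bias detection in the spirit of the excerpt: enumerate single-feature slices and flag those whose accuracy trails the overall accuracy. Visual Auditor itself targets intersectional (multi-feature) slices and adds interactive visualization; the data and names below are illustrative.

```python
import numpy as np
import pandas as pd

def underperforming_slices(df, y_true, y_pred, min_size=2):
    """Return (slice, size, accuracy, gap) for slices below overall accuracy."""
    overall = float((y_true == y_pred).mean())
    found = []
    for col in df.columns:
        for val in df[col].unique():
            mask = (df[col] == val).to_numpy()
            if mask.sum() < min_size:
                continue
            acc = float((y_true[mask] == y_pred[mask]).mean())
            if acc < overall:
                found.append((f"{col}={val}", int(mask.sum()), acc, overall - acc))
    return sorted(found, key=lambda r: -r[3])  # worst gaps first

df = pd.DataFrame({"region": ["US", "US", "EU", "EU"], "plan": ["a", "b", "a", "b"]})
y_true = np.array([1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1])
print(underperforming_slices(df, y_true, y_pred))  # [('region=EU', 2, 0.0, 0.5)]
```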

A Human-Centric Take on Model Monitoring

no code implementations • 6 Jun 2022 • Murtuza N Shergadwala, Himabindu Lakkaraju, Krishnaram Kenthapadi

Predictive models are increasingly used to make various consequential decisions in high-stakes domains such as healthcare, finance, and policy.

Fairness

Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

1 code implementation • 9 Apr 2022 • Michael Lohaus, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco Locatello, Chris Russell

Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task.

Attribute • Fairness
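
The second-head idea reads directly as an architecture, so a minimal PyTorch sketch is given below. The trunk, layer sizes, and demo input are illustrative assumptions, not the authors' exact setup or training objective.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared trunk with two heads: the original task and the protected attribute."""
    def __init__(self, in_dim, hidden=64, n_classes=2, n_protected=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)         # original task
        self.protected_head = nn.Linear(hidden, n_protected)  # e.g. race or gender

    def forward(self, x):
        h = self.trunk(x)
        return self.task_head(h), self.protected_head(h)

model = TwoHeadNet(in_dim=10)
x = torch.randn(4, 10)
task_logits, protected_logits = model(x)  # both heads trained jointly
```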

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

1 code implementation • ICLR 2022 • Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li

We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications for both per-state action stability and the cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and environments, implying their intrinsic robustness properties.

Offline RL • reinforcement-learning +1
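
As a rough sketch of the aggregation idea behind finding (1), the snippet below takes a per-state majority vote over policies trained on disjoint data partitions; the vote margin bounds how many poisoned partitions could flip the chosen action. COPA's actual protocols (including temporal aggregation) and certificates are substantially more involved.

```python
from collections import Counter

def certified_action(policies, state):
    """Majority-vote action over partition-trained policies, plus vote margin."""
    votes = Counter(policy(state) for policy in policies)
    ranked = votes.most_common()
    top_action, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # Poisoning one partition changes at most one vote, so the chosen action
    # is stable against up to floor((top_count - runner_up) / 2) poisoned sets.
    return top_action, (top_count - runner_up) // 2

policies = [lambda s: "left", lambda s: "left", lambda s: "left", lambda s: "right"]
print(certified_action(policies, state=None))  # ('left', 1)
```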

Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness

1 code implementation • 17 Feb 2022 • Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi

In many settings, however, the final prediction or decision of a system is under the control of a human, who uses an algorithm's output along with their own personal expertise in order to produce a combined prediction.

Fairness

Designing Closed Human-in-the-loop Deferral Pipelines

1 code implementation • 9 Feb 2022 • Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi

Our key insight is that by exploiting weak prior information, we can match experts to input examples to ensure fairness and accuracy of the resulting deferral framework, even when imperfect and biased experts are used in place of ground truth labels.

Decision Making • Fairness

More Than Words: Towards Better Quality Interpretations of Text Classifiers

no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi

The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.

Feature Importance • Sentence

Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

no code implementations • 26 Nov 2021 • David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi

With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial.

BIG-bench Machine Learning

I-PGD-AT: Efficient Adversarial Training via Imitating Iterative PGD Attack

no code implementations • 29 Sep 2021 • Xiaosen Wang, Bhavya Kailkhura, Krishnaram Kenthapadi, Bo Li

Finally, to demonstrate the generality of I-PGD-AT, we integrate it into PGD adversarial training and show that it can even further improve the robustness.
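For context on what is being imitated, here is the standard multi-step PGD attack under an L-infinity budget in PyTorch. The efficient imitation that I-PGD-AT proposes is not reproduced here, and the hyperparameters are conventional defaults rather than the paper's settings.

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative PGD: repeated signed-gradient ascent, projected to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascent step
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project to ball
    return x_adv.detach()
```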

RVFR: Robust Vertical Federated Learning via Feature Subspace Recovery

no code implementations • 29 Sep 2021 • Jing Liu, Chulin Xie, Krishnaram Kenthapadi, Oluwasanmi O Koyejo, Bo Li

Vertical Federated Learning (VFL) is a distributed learning paradigm that allows multiple agents to jointly train a global model when each agent holds a different subset of features for the same sample(s).

Vertical Federated Learning
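
A minimal sketch of the vertical FL setup described above: two parties hold different feature columns of the same samples, compute local embeddings, and a coordinator combines them. The module shapes and split point are illustrative assumptions.

```python
import torch
import torch.nn as nn

party_a = nn.Linear(5, 8)   # party A holds features 0..4
party_b = nn.Linear(3, 8)   # party B holds features 5..7
head = nn.Linear(16, 2)     # coordinator's prediction head

x = torch.randn(4, 8)                        # 4 shared samples, 8 total features
h_a = party_a(x[:, :5])                      # computed locally at party A
h_b = party_b(x[:, 5:])                      # computed locally at party B
logits = head(torch.cat([h_a, h_b], dim=1))  # only embeddings are exchanged
```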

Certified Robustness for Free in Differentially Private Federated Learning

no code implementations • 29 Sep 2021 • Chulin Xie, Yunhui Long, Pin-Yu Chen, Krishnaram Kenthapadi, Bo Li

Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users.

Federated Learning

Multiaccurate Proxies for Downstream Fairness

no code implementations • 9 Jul 2021 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi

The goal of the proxy is to allow a general "downstream" learner -- with minimal assumptions on their prediction task -- to use the proxy to train a model that is fair with respect to the true sensitive features.

Fairness • Generalization Bounds
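
A bare-bones rendering of the pipeline in the excerpt, on invented data: an upstream party trains a proxy for the sensitive feature, and a downstream learner that never sees the true attribute audits its error rates through the proxy. The paper's contribution is making the proxy multiaccurate over rich subgroup classes; plain logistic regression below is only a stand-in for that step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sensitive = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)        # true attribute
y = (X[:, 1] - X[:, 2] + 0.3 * rng.normal(size=1000) > 0).astype(int)

proxy = LogisticRegression().fit(X[:, 1:], sensitive)  # upstream proxy
proxy_group = proxy.predict(X[:, 1:])                  # released downstream

clf = LogisticRegression().fit(X[:, 1:], y)            # downstream model
err = clf.predict(X[:, 1:]) != y
for g in (0, 1):  # audit per-(proxy-)group error without the true attribute
    print(f"proxy group {g}: error {err[proxy_group == g].mean():.3f}")
```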

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

On Measuring the Diversity of Organizational Networks

1 code implementation • 14 May 2021 • Zeinab S. Jalali, Krishnaram Kenthapadi, Sucheta Soundarajan

The interaction patterns of employees in social and professional networks play an important role in the success of employees and organizations as a whole.

Differentially Private Query Release Through Adaptive Projection

1 code implementation • 11 Mar 2021 • Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit Siva

We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy.
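To ground the query class, here is the textbook Laplace mechanism for a single 2-way marginal under add/remove-one differential privacy. The paper's adaptive-projection algorithm, which is what makes very large numbers of such queries feasible, is not reproduced; the data and parameters below are illustrative.

```python
import numpy as np

def dp_two_way_marginal(data, i, j, epsilon, levels=2):
    """epsilon-DP noisy counts for the 2-way marginal of columns i and j."""
    counts = np.zeros((levels, levels))
    for a in range(levels):
        for b in range(levels):
            counts[a, b] = np.sum((data[:, i] == a) & (data[:, j] == b))
    # Adding/removing one record changes one cell by 1 => L1 sensitivity 1.
    return counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)

data = np.random.default_rng(1).integers(0, 2, size=(500, 4))
print(dp_two_way_marginal(data, 0, 2, epsilon=0.5))
```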

Towards Unbiased and Accurate Deferral to Multiple Experts

1 code implementation • 25 Feb 2021 • Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi

Machine learning models are often implemented in concert with humans in the pipeline, with the model having an option to defer to a domain expert in cases where it has low confidence in its inference.

BIG-bench Machine Learning • Fairness
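
A minimal confidence-threshold baseline for the deferral setup in the excerpt: route to a human whenever the model's top-class probability falls below a threshold. The paper goes further and learns whom to defer to among multiple (possibly biased) experts, jointly optimizing accuracy and fairness; the threshold and shapes here are illustrative.

```python
import numpy as np

def route(probs, expert_available, tau=0.8):
    """Return ('model', label) or ('expert', None) per example."""
    decisions = []
    for p in probs:
        if p.max() >= tau or not expert_available:
            decisions.append(("model", int(p.argmax())))
        else:
            decisions.append(("expert", None))  # routed to a human
    return decisions

probs = np.array([[0.95, 0.05], [0.55, 0.45]])
print(route(probs, expert_available=True))
# [('model', 0), ('expert', None)]
```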

Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy

no code implementations • 11 Feb 2021 • Dylan Slack, Nathalie Rauschmayr, Krishnaram Kenthapadi

Each region contains a specific type of model bug; for instance, a misclassification region for an MNIST classifier contains a style of skinny 6 that the model mistakes for a 1.

BIG-bench Machine Learning

Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples

no code implementations • 1 Jan 2021 • Dylan Z Slack, Nathalie Rauschmayr, Krishnaram Kenthapadi

As a route to better discover and fix model bugs, we propose failure scenarios: regions on the data manifold that are incorrectly classified by a model.

Clustering

Minimax Group Fairness: Algorithms and Experiments

1 code implementation • 5 Nov 2020 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth

We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes.

Fairness
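
The distinction in the excerpt is easy to state in code: minimize the worst group's error rather than the gap between groups. The toy data below is invented; the paper supplies the actual algorithms and experiments for optimizing the minimax objective.

```python
import numpy as np

def group_errors(y_true, y_pred, groups):
    return {g: float((y_pred[groups == g] != y_true[groups == g]).mean())
            for g in np.unique(groups)}

y_true = np.array([0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0, 0, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])

errs = group_errors(y_true, y_pred, groups)           # {'a': 0.33, 'b': 0.67}
minimax_objective = max(errs.values())                # minimized by this framework
parity_gap = max(errs.values()) - min(errs.values())  # the more standard target
print(errs, minimax_objective, parity_gap)
```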

LiFT: A Scalable Framework for Measuring Fairness in ML Applications

no code implementations • 14 Aug 2020 • Sriram Vasudevan, Krishnaram Kenthapadi

Many internet applications are powered by machine-learned models, which are usually trained on labeled datasets obtained through either implicit/explicit user feedback signals or human judgments.

Fairness

Fairness-Aware Online Personalization

1 code implementation • 30 Jul 2020 • G. Roshan Lal, Sahin Cem Geyik, Krishnaram Kenthapadi

For this purpose, we construct a stylized model for generating training data with potentially biased features as well as potentially biased labels and quantify the extent of bias that is learned by the model when the user responds in a biased manner as in many real-world scenarios.

Cloud Computing • Decision Making +2
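
In the spirit of the stylized model mentioned above, here is a toy generator with a biased feature (one group's score is deflated) and biased labels (some of that group's positives are flipped). All parameter names and values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def biased_training_data(n=1000, feature_bias=0.5, label_bias=0.2, seed=0):
    """Return (score, group, label) with group-dependent feature and label bias."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, 2, n)            # protected group membership
    skill = rng.normal(size=n)               # unbiased latent quality
    score = skill - feature_bias * group     # group 1's observed feature is deflated
    label = (skill > 0).astype(int)
    flip = (group == 1) & (label == 1) & (rng.random(n) < label_bias)
    label[flip] = 0                          # biased negative feedback for group 1
    return score, group, label

score, group, label = biased_training_data()
print(label[group == 0].mean(), label[group == 1].mean())  # biased base rates
```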

Fair Bayesian Optimization

no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.

Bayesian Optimization • Fairness

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

no code implementations • 30 Apr 2019 • Sahin Cem Geyik, Stuart Ambler, Krishnaram Kenthapadi

We finally present the online A/B testing results from applying our framework towards representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice.

Fairness • Recommendation Systems +1
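
A greedy sketch of representative re-ranking as described: fill the top-k by score while keeping each group near its target proportion. This mirrors the spirit of the paper's deterministic re-rankers rather than their exact algorithms, and the target proportions in the demo are invented.

```python
import math

def rerank(candidates, targets, k):
    """candidates: (score, group) pairs sorted by score desc;
    targets: desired top-k proportion per group (values sum to 1); assumes k <= len."""
    picked, counts = [], {g: 0 for g in targets}
    pool = list(candidates)
    for slot in range(1, k + 1):
        for idx, (score, g) in enumerate(pool):
            if counts[g] < math.ceil(targets[g] * slot):  # group under its cap
                picked.append(pool.pop(idx))
                counts[g] += 1
                break
        else:  # no group has headroom at this slot: fall back to best score
            score, g = pool[0]
            picked.append(pool.pop(0))
            counts[g] += 1
    return picked

cands = [(0.9, "m"), (0.8, "m"), (0.7, "f"), (0.6, "m"), (0.5, "f")]
print(rerank(cands, {"m": 0.5, "f": 0.5}, k=4))
# [(0.9, 'm'), (0.7, 'f'), (0.8, 'm'), (0.5, 'f')]
```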

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
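
The proposed penalty is straightforward to write down; below is a minimal PyTorch rendering that penalizes the covariance between the predicted probability of the true occupation and each dimension of the name embedding. The paper's constrained losses (including a cluster-based variant) differ in detail, and `lam` and the tensor shapes are illustrative.

```python
import torch

def decorrelation_penalty(pred_prob, name_emb):
    """Squared covariance of pred_prob (n,) with name_emb (n, d), summed over d."""
    p = pred_prob - pred_prob.mean()
    e = name_emb - name_emb.mean(dim=0, keepdim=True)
    return ((p.unsqueeze(1) * e).mean(dim=0) ** 2).sum()

pred_prob = torch.rand(32)       # model's probability of the true occupation
name_emb = torch.randn(32, 50)   # word embedding of each person's name
lam = 1.0
penalty = lam * decorrelation_penalty(pred_prob, name_emb)
# total_loss = task_loss + penalty  # added to the usual classification loss
```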

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

4 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.

Classification • General Classification

PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn

no code implementations • 20 Sep 2018 • Krishnaram Kenthapadi, Thanh T. L. Tran

Preserving privacy of users is a key requirement of web-scale analytics and reporting applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR.

Privacy Preserving

Talent Search and Recommendation Systems at LinkedIn: Practical Challenges and Lessons Learned

no code implementations • 18 Sep 2018 • Sahin Cem Geyik, Qi Guo, Bo Hu, Cagri Ozcaglar, Ketan Thakkar, Xianren Wu, Krishnaram Kenthapadi

The LinkedIn Talent Solutions business contributes around 65% of LinkedIn's annual revenue and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities.

Information Retrieval • Recommendation Systems +1

Bringing Salary Transparency to the World: Computing Robust Compensation Insights via LinkedIn Salary

no code implementations • 29 Mar 2017 • Krishnaram Kenthapadi, Stuart Ambler, Liang Zhang, Deepak Agarwal

The recently launched LinkedIn Salary product has been designed with the goal of providing compensation insights to the world's professionals and thereby helping them optimize their earning potential.
