Search Results for author: Celestine Mendler-Dünner

Found 20 papers, 5 papers with code

Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists

no code implementations · 19 Mar 2024 · Joachim Baumann, Celestine Mendler-Dünner

The success of the collective is measured by the increase in test-time recommendations of the targeted song.

Recommendation Systems

Performative Prediction: Past and Future

no code implementations · 25 Oct 2023 · Moritz Hardt, Celestine Mendler-Dünner

A consequence of performative prediction is a natural equilibrium notion that gives rise to new optimization challenges.

Questioning the Survey Responses of Large Language Models

1 code implementation · 13 Jun 2023 · Ricardo Dominguez-Olmedo, Moritz Hardt, Celestine Mendler-Dünner

As large language models increase in capability, researchers have started to conduct surveys of all kinds on these models in order to investigate the population represented by their responses.

Multiple-choice

Causal Inference out of Control: Estimating the Steerability of Consumption

no code implementations · 10 Feb 2023 · Gary Cheng, Moritz Hardt, Celestine Mendler-Dünner

Regulators and academics are increasingly interested in the causal effect that algorithmic actions of a digital platform have on consumption.

Causal Inference · Econometrics

Algorithmic Collective Action in Machine Learning

no code implementations · 8 Feb 2023 · Moritz Hardt, Eric Mazumdar, Celestine Mendler-Dünner, Tijana Zrnic

We initiate a principled study of algorithmic collective action on digital platforms that deploy machine learning algorithms.

Language Modelling

Anticipating Performativity by Predicting from Predictions

no code implementations · 15 Aug 2022 · Celestine Mendler-Dünner, Frances Ding, Yixin Wang

Predictions about people, such as their expected educational achievement or their credit risk, can be performative and shape the outcome that they aim to predict.

Performative Power

no code implementations · 31 Mar 2022 · Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner

We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to cause change in a population of participants.

Regret Minimization with Performative Feedback

no code implementations · 1 Feb 2022 · Meena Jagadeesan, Tijana Zrnic, Celestine Mendler-Dünner

Our main contribution is an algorithm that achieves regret bounds scaling only with the complexity of the distribution shifts and not that of the reward function.

Alternative Microfoundations for Strategic Classification

no code implementations · 24 Jun 2021 · Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt

When reasoning about strategic behavior in a machine learning context, it is tempting to combine the standard microfoundations of rational agents with the statistical decision theory underlying classification.

Binary Classification · Classification +2

Test-time Collective Prediction

no code implementations · NeurIPS 2021 · Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.

Revisiting Design Choices in Proximal Policy Optimization

1 code implementation · 23 Sep 2020 · Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, Moritz Hardt

We explain why standard design choices are problematic in these cases, and show that alternative choices of surrogate objectives and policy parameterizations can prevent the failure modes.

Randomized Block-Diagonal Preconditioning for Parallel Learning

no code implementations · ICML 2020 · Celestine Mendler-Dünner, Aurelien Lucchi

We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form.
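A minimal sketch of the idea behind this abstract (an illustration only, not the paper's method): on the quadratic f(x) = 0.5 xᵀAx - bᵀx, the gradient is Ax - b, and a preconditioner P that keeps only the diagonal blocks of A can be inverted block by block, so each block could be handled by a separate worker. The toy 4x4 matrix and block size 2 below are assumptions for the example.

```python
# Toy block-diagonal preconditioned gradient descent on a 4x4 quadratic.
A = [[4.0, 1.0, 0.5, 0.0],
     [1.0, 3.0, 0.0, 0.5],
     [0.5, 0.0, 4.0, 1.0],
     [0.0, 0.5, 1.0, 3.0]]
b = [1.0, 2.0, 3.0, 4.0]

def gradient(x):
    """Gradient of 0.5 x^T A x - b^T x, i.e. A x - b."""
    return [sum(A[i][j] * x[j] for j in range(4)) - b[i] for i in range(4)]

def solve2x2(block, r):
    """Solve a 2x2 linear system analytically (one diagonal block of P)."""
    (p, q), (s, t) = block
    det = p * t - q * s
    return [(t * r[0] - q * r[1]) / det, (p * r[1] - s * r[0]) / det]

def precond_step(x):
    g = gradient(x)
    # Apply P^{-1}: each 2x2 diagonal block is inverted independently,
    # which is the part that parallelizes across workers.
    top = solve2x2([A[0][:2], A[1][:2]], g[:2])
    bot = solve2x2([A[2][2:], A[3][2:]], g[2:])
    return [xi - di for xi, di in zip(x, top + bot)]

x = [0.0] * 4
for _ in range(200):
    x = precond_step(x)
residual = max(abs(g) for g in gradient(x))  # near zero at the minimizer
```

Because the off-block coupling in this toy A is weak, the block-preconditioned iteration contracts quickly; with a plain (unpreconditioned) step the curvature mismatch between coordinates would slow it down.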

Stochastic Optimization for Performative Prediction

1 code implementation · NeurIPS 2020 · Celestine Mendler-Dünner, Juan C. Perdomo, Tijana Zrnic, Moritz Hardt

In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions.

Stochastic Optimization
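The feedback loop described in the abstract can be illustrated with a toy sketch (not the paper's algorithm): deploying a model theta shifts the outcome distribution, and repeatedly retraining on the induced data converges to a point that is optimal on the distribution it itself induces. The shift map `mu + eps * theta` and the constants EPS and MU are assumptions chosen for the example.

```python
# Toy performative prediction: the deployed model theta shifts the mean
# of the outcome distribution to MU + EPS * theta. Under squared loss,
# retraining sets the next model to the current distribution's mean; for
# |EPS| < 1 this contraction converges to the fixed point MU / (1 - EPS),
# a "performatively stable" model.
EPS = 0.5  # strength of the performative feedback (assumed, |EPS| < 1)
MU = 1.0   # base mean of the outcome distribution (assumed)

def shifted_mean(theta):
    """Mean of the outcome distribution induced by deploying theta."""
    return MU + EPS * theta

def repeated_risk_minimization(theta0, steps=50):
    theta = theta0
    for _ in range(steps):
        theta = shifted_mean(theta)  # argmin of squared loss = the mean
    return theta

stable = repeated_risk_minimization(0.0)  # approaches MU / (1 - EPS) = 2.0
```

The fixed point is stable but not what a classical analysis would predict from the initial data alone, which is the optimization challenge the performative setting introduces.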

Differentially Private Stochastic Coordinate Descent

no code implementations · 12 Jun 2020 · Georgios Damaskinos, Celestine Mendler-Dünner, Rachid Guerraoui, Nikolaos Papandreou, Thomas Parnell

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private.
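For reference, the base algorithm the paper privatizes looks roughly like the sketch below: plain stochastic coordinate descent on least squares, where each step samples one coordinate and takes an exact minimizing step along it. The small matrix A and vector b are made-up data, and the differential-privacy mechanism itself (e.g. noise injected into the updates) is deliberately not shown, since its exact form is specific to the paper.

```python
import random

# Plain stochastic coordinate descent (SCD) on 0.5 * ||A x - b||^2.
A = [[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]]
b = [3.0, 4.0, 0.0]

def residual(x):
    """A x - b for the current iterate x."""
    return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(3)]

def scd(steps=500, seed=0):
    rng = random.Random(seed)
    x = [0.0, 0.0]
    # Per-coordinate curvature: squared norm of each column of A.
    col_norms = [sum(A[i][j] ** 2 for i in range(3)) for j in range(2)]
    for _ in range(steps):
        j = rng.randrange(2)                  # sample a coordinate
        r = residual(x)
        grad_j = sum(A[i][j] * r[i] for i in range(3))
        x[j] -= grad_j / col_norms[j]         # exact step along coordinate j
    return x

x_hat = scd()  # converges to the least-squares solution
```

Because each update touches a single coordinate, SCD exposes exactly the per-update quantities that a privacy mechanism would need to perturb, which is what makes the algorithm a natural target for differential privacy.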

Performative Prediction

2 code implementations · ICML 2020 · Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt

When predictions support decisions they may influence the outcome they aim to predict.

SySCD: A System-Aware Parallel Coordinate Descent Algorithm

no code implementations · NeurIPS 2019 · Nikolas Ioannou, Celestine Mendler-Dünner, Thomas Parnell

In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm with convergence guarantees that exhibits strong scalability.

Breadth-first, Depth-next Training of Random Forests

no code implementations · 15 Oct 2019 · Andreea Anghel, Nikolas Ioannou, Thomas Parnell, Nikolaos Papandreou, Celestine Mendler-Dünner, Haris Pozidis

In this paper we analyze, evaluate, and improve the performance of training Random Forest (RF) models on modern CPU architectures.

Addressing Algorithmic Bottlenecks in Elastic Machine Learning with Chicle

no code implementations · 11 Sep 2019 · Michael Kaufmann, Kornilios Kourtis, Celestine Mendler-Dünner, Adrian Schüpbach, Thomas Parnell

To address this, we propose Chicle, a new elastic distributed training framework which exploits the nature of machine learning algorithms to implement elasticity and load balancing without micro-tasks.

BIG-bench Machine Learning · Fairness

On Linear Learning with Manycore Processors

1 code implementation · 2 May 2019 · Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel

A new generation of manycore processors is on the rise, offering dozens of cores or more on a chip and, in a sense, fusing host processor and accelerator.

Sampling Acquisition Functions for Batch Bayesian Optimization

no code implementations · 22 Mar 2019 · Alessandro De Palma, Celestine Mendler-Dünner, Thomas Parnell, Andreea Anghel, Haralampos Pozidis

We present Acquisition Thompson Sampling (ATS), a novel technique for batch Bayesian Optimization (BO) based on the idea of sampling multiple acquisition functions from a stochastic process.

Bayesian Optimization · Thompson Sampling
