A Bayesian Evaluation Framework for Subjectively Annotated Visual Recognition Tasks

20 Jun 2020 · Derek S. Prijatelj, Mel McCurrie, Walter J. Scheirer

An interesting development in automatic visual recognition has been the emergence of tasks where it is not possible to assign objective labels to images, yet it is still feasible to collect annotations that reflect human judgements about them. Machine learning-based predictors for these tasks rely on supervised training that models the behavior of the annotators, i.e., what the average person's judgement of a given image would be. A key open question for this type of work, especially for applications where inconsistency with human behavior can lead to ethical lapses, is how to evaluate the epistemic uncertainty of trained predictors, i.e., the uncertainty that comes from the predictor's model. We propose a Bayesian framework for evaluating black-box predictors in this regime, agnostic to the predictor's internal structure. The framework specifies how to estimate the epistemic uncertainty of the predictor with respect to human labels by approximating a conditional distribution and producing a credible interval for the predictions and their measures of performance. The framework is successfully applied to four image classification tasks that use subjective human judgements: facial beauty assessment, social attribute assignment, apparent age estimation, and ambiguous scene labeling.
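No reference code accompanies this page, but the central output the abstract describes, a credible interval over a black-box predictor's performance against human annotations, can be illustrated with a minimal sketch. The sketch below uses the Bayesian bootstrap (Rubin, 1981) as one simple, model-agnostic way to approximate the posterior over a performance measure; the synthetic data, the `bayesian_bootstrap_ci` helper, and the mean-absolute-error metric are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap_ci(errors, n_draws=10_000, cred=0.95, rng=rng):
    """Credible interval for the mean of per-image errors via the
    Bayesian bootstrap: posterior weights over the evaluation set
    are drawn from a flat Dirichlet(1, ..., 1)."""
    n = len(errors)
    # Each row is one posterior draw of weights over the n images.
    weights = rng.dirichlet(np.ones(n), size=n_draws)   # shape (n_draws, n)
    posterior = weights @ errors                        # weighted mean errors
    lo, hi = np.quantile(posterior, [(1 - cred) / 2, 1 - (1 - cred) / 2])
    return posterior.mean(), (lo, hi)

# Hypothetical data: mean human ratings per image and black-box predictions.
human = rng.uniform(1, 5, size=200)            # e.g., averaged beauty scores
preds = human + rng.normal(0, 0.4, size=200)   # stand-in model outputs

mae_per_image = np.abs(preds - human)
point, (lo, hi) = bayesian_bootstrap_ci(mae_per_image)
print(f"MAE = {point:.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```

Because the resampling only touches per-image scores, the predictor is treated strictly as a black box, which matches the framework's stated goal of being agnostic to the predictor's internal structure.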
