Towards Integrating Fairness Transparently in Industrial Applications

Numerous Machine Learning (ML) bias-related failures in recent years have led to scrutiny of how companies incorporate transparency and accountability into their ML lifecycles. Companies have a responsibility to monitor ML processes for bias and to mitigate any bias detected, in order to ensure business product integrity, preserve customer loyalty, and protect brand image. Challenges specific to industry ML projects can be broadly categorized into principled documentation, human oversight, and the need for mechanisms that enable information reuse and improve cost efficiency. We highlight specific roadblocks and propose conceptual solutions on a per-category basis for ML practitioners and organizational subject matter experts. Our systematic approach tackles these challenges by integrating mechanized and human-in-the-loop components for bias detection, mitigation, and documentation of projects at various stages of the ML lifecycle. To motivate the implementation of our system -- SIFT (System to Integrate Fairness Transparently) -- we present its structural primitives along with a real-world use case showing how it can be used to identify potential biases and determine appropriate mitigation strategies in a participatory manner.
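As one concrete illustration of the kind of mechanized bias check a system like SIFT might automate, the sketch below computes a simple group-fairness metric (demographic parity difference) over model predictions. The metric choice, function names, and flagging threshold are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: a simple automated group-fairness check.
# The metric (demographic parity difference) and the threshold are
# assumptions, not the paper's actual method.

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rates across groups.

    preds:  iterable of 0/1 model predictions
    groups: iterable of group labels, aligned with preds
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = list(rates.values())
    return max(vals) - min(vals)


if __name__ == "__main__":
    # Toy example: binary predictions for individuals in groups "A" and "B".
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    gap = demographic_parity_difference(preds, groups)
    print(f"demographic parity gap: {gap:.2f}")  # 0.50 for this toy data

    # A human-in-the-loop workflow might flag the model for review
    # when the gap exceeds a chosen (hypothetical) threshold.
    if gap > 0.1:
        print("flagged for fairness review")
```

In a participatory workflow such as the one the paper describes, an automated check like this would not decide anything on its own; it would surface candidate issues for subject matter experts to review and document.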
