
Overview

Flexible tool for bias detection, visualization, and mitigation. It uses models explained with DALEX and calculates fairness metrics based on the confusion matrix for the protected group. It allows you to compare and gain insight into various machine learning models, and to mitigate bias with various pre-processing and post-processing techniques. Make sure your models classify protected groups similarly.

Preview

(preview animation)

Installation

Install it from CRAN:

install.packages("fairmodels")

or the development version from GitHub:

devtools::install_github("ModelOriented/fairmodels")

How to evaluate fairness?

(diagram: how to evaluate fairness)

Fairness checking is flexible

fairness_check() takes the following parameters (a short sketch follows the list):

  • x, … - explainers and fairness_objects (products of fairness_check).
  • protected - a factor with subgroups as levels, usually denoting race, sex, etc.
  • privileged - the subgroup relative to which parity loss metrics are calculated.
  • cutoff - custom cutoff; either a single value applied to all subgroups or a vector with one cutoff per subgroup. Affects only explainers.
  • label - character labels, one for each explainer.
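
To make these parameters concrete, here is a minimal sketch. The `german` credit data shipped with fairmodels and the plain logistic regression are only illustrative choices, not a prescribed workflow.

library(DALEX)
library(fairmodels)

# illustrative data: German Credit, shipped with fairmodels (Risk is the target, Sex the protected variable)
data("german", package = "fairmodels")
y_numeric <- as.numeric(german$Risk) - 1  # code the target as 0/1

# any classifier explained with DALEX will do; a simple logistic regression is used here
lm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

explainer_lm <- DALEX::explain(lm_model,
                               data = german[, setdiff(colnames(german), "Risk")],
                               y    = y_numeric)

fobject <- fairness_check(explainer_lm,
                          protected  = german$Sex,   # factor with subgroups as levels
                          privileged = "male",       # base subgroup for parity loss
                          cutoff     = 0.5,          # single cutoff used for all subgroups
                          label      = "logistic")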

Models might be trained on different data, even without the protected variable, and may have different cutoffs, which yield different metric values. fairness_check() is where explainers and fairness_objects are checked for compatibility and then glued together.
So it is possible to do something like this:

fairness_object <- fairness_check(explainer1, explainer2, ...)
fairness_object <- fairness_check(explainer3, explainer4, fairness_object, ...)

even with more fairness_objects!
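
Continuing the sketch from above, a second model (here a random forest from ranger, again just an illustrative choice) can be added to an already created fairness_object; protected and privileged are then taken from it:

library(ranger)

rf_model <- ranger(Risk ~ ., data = german, probability = TRUE)

explainer_rf <- DALEX::explain(rf_model,
                               data  = german[, setdiff(colnames(german), "Risk")],
                               y     = y_numeric,
                               label = "ranger")

# the explainer and the existing fairness_object are checked for compatibility and glued together
fobject <- fairness_check(explainer_rf, fobject)

plot(fobject)  # fairness check plot comparing both models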

If you are keen to know more about how fairmodels works and how its objects relate to each other, please take a look at the class diagram.

Metrics used

There are 12 metrics based on the confusion matrix:

| Metric | Formula | Full name | Fairness name (while checking among subgroups) |
|--------|---------|-----------|------------------------------------------------|
| TPR | TP / (TP + FN) | true positive rate | equal opportunity |
| TNR | TN / (TN + FP) | true negative rate | |
| PPV | TP / (TP + FP) | positive predictive value | predictive parity |
| NPV | TN / (TN + FN) | negative predictive value | |
| FNR | FN / (FN + TP) | false negative rate | |
| FPR | FP / (FP + TN) | false positive rate | predictive equality |
| FDR | FP / (FP + TP) | false discovery rate | |
| FOR | FN / (FN + TN) | false omission rate | |
| TS | TP / (TP + FN + FP) | threat score | |
| STP | (TP + FP) / (TP + FP + TN + FN) | statistical parity | statistical parity |
| ACC | (TP + TN) / (TP + TN + FP + FN) | accuracy | overall accuracy equality |
| F1 | 2 * PPV * TPR / (PPV + TPR) | F1 score | |
and their parity loss.
How is parity loss calculated?

$$M_{parity\_loss} = \sum_{i \in \{a, b, \dots, z\}} \left| \ln\left(\frac{M_i}{M_{privileged}}\right) \right|$$

Here i denotes membership in a unique subgroup of the protected variable. Unprivileged subgroups are represented by lowercase letters and the privileged one simply by "privileged".
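
As a hand-worked illustration (with made-up TPR values, not output of the package), the parity loss of a single metric can be computed like this:

# hypothetical TPR values for the privileged subgroup and two unprivileged subgroups a and b
tpr <- c(privileged = 0.80, a = 0.72, b = 0.65)

unprivileged    <- setdiff(names(tpr), "privileged")
tpr_parity_loss <- sum(abs(log(tpr[unprivileged] / tpr["privileged"])))

tpr_parity_loss
# |ln(0.72/0.80)| + |ln(0.65/0.80)|, roughly 0.31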

Some fairness criteria, such as equalized odds, are satisfied if the parity loss of both TPR and FPR is low.