---
title: Bias AUC
emoji: 🏆
colorFrom: gray
colorTo: blue
sdk: gradio
pinned: false
license: apache-2.0
---

# Bias AUC

## Description

A suite of threshold-agnostic metrics that provides a nuanced view of unintended model bias by considering the various ways a classifier's score distribution can vary across designated identity subgroups.

The following are computed (the subgroup-level definitions are sketched after this list):

- Subgroup AUC
- BPSN (Background Positive, Subgroup Negative) AUC
- BNSP (Background Negative, Subgroup Positive) AUC
- GMB (Generalized Mean of Bias) AUC
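
These subgroup-level AUCs follow the definitions of Borkan et al. (2019): each is an ordinary ROC AUC computed over a particular subset of the data. The sketch below illustrates those subsets with `scikit-learn`; it is an illustration only, not the Intel/bias_auc implementation, and the array names (`labels`, `scores`, `in_subgroup`) are placeholders invented for the example.

```python
# Illustrative sketch (not the Intel/bias_auc implementation).
# `labels` is a 1-D NumPy array of binary ground-truth labels,
# `scores` the classifier's scores for the positive class, and
# `in_subgroup` a boolean mask marking examples that mention the subgroup.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(labels, scores, in_subgroup):
    # AUC restricted to examples that mention the subgroup.
    mask = in_subgroup
    return roc_auc_score(labels[mask], scores[mask])

def bpsn_auc(labels, scores, in_subgroup):
    # Background Positive, Subgroup Negative: positive background examples
    # plus negative subgroup examples. A low value suggests scores for
    # negative subgroup examples overlap with positive background examples.
    mask = (~in_subgroup & (labels == 1)) | (in_subgroup & (labels == 0))
    return roc_auc_score(labels[mask], scores[mask])

def bnsp_auc(labels, scores, in_subgroup):
    # Background Negative, Subgroup Positive: negative background examples
    # plus positive subgroup examples.
    mask = (~in_subgroup & (labels == 0)) | (in_subgroup & (labels == 1))
    return roc_auc_score(labels[mask], scores[mask])
```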

## How to use

```python
from evaluate import load

# Identity subgroup(s) mentioned in each example
target = [['Islam'],
          ['Sexuality'],
          ['Sexuality'],
          ['Islam']]

# Binary ground-truth labels
label = [0, 0, 1, 1]

# Classifier scores for each class
output = [[0.44452348351478577, 0.5554765462875366],
          [0.4341845214366913, 0.5658154487609863],
          [0.400595098733902, 0.5994048714637756],
          [0.3840397894382477, 0.6159601807594299]]

metric = load('Intel/bias_auc')

metric.add_batch(target=target,
                 label=label,
                 output=output)

# Compute the metrics over the batch(es) added above
metric.compute(subgroups=None)
```
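
The GMB AUC aggregates the per-subgroup bias AUCs into a single score with a generalized (power) mean. The sketch below assumes the power-mean formulation of Borkan et al. (2019), which uses p = -5 so that the worst-scoring subgroups dominate the aggregate; the power used by this metric may differ.

```python
import numpy as np

def generalized_mean_of_bias(bias_aucs, p=-5):
    # Power mean M_p = (1/N * sum(m_s ** p)) ** (1/p) over the per-subgroup
    # bias AUCs m_s. A strongly negative p pulls the result toward the
    # lowest (worst) subgroup AUC.
    bias_aucs = np.asarray(bias_aucs, dtype=float)
    return float(np.mean(bias_aucs ** p) ** (1.0 / p))

# One weak subgroup drags the aggregate (~0.81) well below the
# arithmetic mean (~0.86).
print(generalized_mean_of_bias([0.95, 0.93, 0.70]))
```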