fairlearn.metrics.equalized_odds_difference
- fairlearn.metrics.equalized_odds_difference(y_true, y_pred, *, sensitive_features, method='between_groups', sample_weight=None, agg='worst_case')
Calculate the equalized odds difference.
The greater of two metrics: true_positive_rate_difference and false_positive_rate_difference. The former is the difference between the largest and smallest of \(P[h(X)=1 | A=a, Y=1]\), across all values \(a\) of the sensitive feature(s). The latter is defined similarly, but for \(P[h(X)=1 | A=a, Y=0]\). An equalized odds difference of 0 means that all groups have the same true positive, true negative, false positive, and false negative rates.
Read more in the User Guide.
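The value can be reproduced by hand from a MetricFrame holding the per-group true and false positive rates. The following is a minimal sketch using hypothetical toy labels and a binary sensitive feature (not data from the documentation):

```python
from fairlearn.metrics import (
    MetricFrame,
    equalized_odds_difference,
    false_positive_rate,
    true_positive_rate,
)

# Hypothetical toy data: binary labels, predictions, and one sensitive feature
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Per-group true positive rate and false positive rate
mf = MetricFrame(
    metrics={"tpr": true_positive_rate, "fpr": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

# Largest of the two between-group differences (the default 'worst_case' aggregation)
by_hand = mf.difference(method="between_groups").max()

# The same quantity computed directly
direct = equalized_odds_difference(y_true, y_pred, sensitive_features=sensitive)

print(by_hand, direct)  # identical values for this toy data
```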
- Parameters:
- y_true : array-like
Ground truth (correct) labels.
- y_pred : array-like
Predicted labels \(h(X)\) returned by the classifier.
- sensitive_features : array-like
The sensitive features over which equalized odds should be assessed.
- method : string {‘between_groups’, ‘to_overall’}, default=‘between_groups’
How to compute the differences. See fairlearn.metrics.MetricFrame.difference() for details.
- sample_weight : array-like
The sample weights.
- agg : string {‘worst_case’, ‘mean’}, default=‘worst_case’
The aggregation method. If ‘worst_case’, the greater of the false positive rate difference and the true positive rate difference is returned. If ‘mean’, the mean of the two differences is returned (see the usage sketch below).
- Returns:
- float
The equalized odds difference.
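For illustration only, a brief usage sketch (same hypothetical toy data as above) showing how the method and agg arguments change the aggregation:

```python
from fairlearn.metrics import equalized_odds_difference

# Hypothetical toy data, as in the sketch above
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Default: worst case of the between-group TPR and FPR differences
print(equalized_odds_difference(y_true, y_pred, sensitive_features=sensitive))

# Mean of the two differences instead of the worst case
print(
    equalized_odds_difference(
        y_true, y_pred, sensitive_features=sensitive, agg="mean"
    )
)

# Compare each group's rates against the overall rates rather than between groups
print(
    equalized_odds_difference(
        y_true, y_pred, sensitive_features=sensitive, method="to_overall"
    )
)
```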