fairlearn.reductions.ErrorRateParity

class fairlearn.reductions.ErrorRateParity(*, difference_bound=None, ratio_bound=None, ratio_bound_slack=0.0)

Implementation of error rate parity as a moment.

A classifier \(h(X)\) satisfies error rate parity if

\[P[h(X) \ne Y | A = a] = P[h(X) \ne Y] \; \forall a\]

This implementation of UtilityParity defines a single event, all. Consequently, the prob_event pandas.Series will only have a single entry, which will be equal to 1.

The index property will have twice as many entries (corresponding to the Lagrange multipliers for positive and negative constraints) as there are unique values for the sensitive feature.

The UtilityParity.signed_weights() method will compute the costs according to Example 3 of Agarwal et al. [1]. However, in this scenario, g = abs(h(x)-y), rather than g = h(x).

This Moment also supports control features, which can be used to stratify the data, with the constraint applied within each stratum, but not between strata.

Read more in the User Guide.
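
A minimal usage sketch, pairing this constraint with ExponentiatedGradient; the toy data and base estimator below are placeholders chosen for illustration, not part of this class's API.

import numpy as np
from sklearn.linear_model import LogisticRegression

from fairlearn.reductions import ErrorRateParity, ExponentiatedGradient

# Illustrative toy data: features X, binary labels y, sensitive feature A.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)
A = rng.integers(0, 2, size=200)

# Require each group's error rate to stay within 0.02 of the overall error rate.
constraint = ErrorRateParity(difference_bound=0.02)
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=constraint)
mitigator.fit(X, y, sensitive_features=A)
y_pred = mitigator.predict(X)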

Attributes
total_samples

Return the number of samples in the data.

Methods

bound()

Return bound vector.

default_objective()

Return the default objective for moments of this kind.

gamma(predictor)

Calculate the degree to which constraints are currently violated by the predictor (see the sketch after this method list).

load_data(X, y, *, sensitive_features[, ...])

Load the specified data into the object.

project_lambda(lambda_vec)

Return the projected lambda values.

signed_weights(lambda_vec)

Compute the signed weights.
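
A minimal sketch of calling the moment's methods directly, for example to inspect how far a candidate predictor is from satisfying the constraint. The toy data, the constant predictor, and the zero-initialised Lagrange-multiplier vector are illustrative assumptions; consult the User Guide for how these methods are used inside the reduction algorithms.

import numpy as np
import pandas as pd

from fairlearn.reductions import ErrorRateParity

# Illustrative toy data: features X, binary labels y, sensitive feature A.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
A = rng.integers(0, 2, size=100)

constraint = ErrorRateParity(difference_bound=0.02)
constraint.load_data(X, y, sensitive_features=A)
# Control features, where used, are also supplied to load_data (see the User Guide).

def predictor(X_):
    # A trivial constant predictor, used only to illustrate the gamma() call.
    return np.zeros(X_.shape[0])

# gamma expects a callable mapping X to predictions; it returns per-constraint
# violation values.
violations = constraint.gamma(predictor)

# signed_weights expects Lagrange multipliers aligned with the index property;
# an all-zero vector is used here purely as a placeholder.
lambda_vec = pd.Series(0.0, index=constraint.index)
weights = constraint.signed_weights(lambda_vec)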