class fairlearn.reductions.EqualizedOdds(*, difference_bound=None, ratio_bound=None, ratio_bound_slack=0.0)[source]#

Implementation of equalized odds as a moment.

Adds conditioning on label compared to demographic parity, i.e.

\[P[h(X) = 1 | A = a, Y = y] = P[h(X) = 1 | Y = y] \; \forall a, y\]
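As an illustration of this constraint (hypothetical data, plain pandas rather than Fairlearn's internals), the conditional probabilities can be estimated from samples and compared:

```python
import pandas as pd

# Hypothetical predictions h(X), sensitive feature A, and labels Y.
df = pd.DataFrame({
    "h": [1, 0, 1, 1, 0, 1, 0, 0],
    "A": ["a0", "a0", "a0", "a0", "a1", "a1", "a1", "a1"],
    "Y": [1, 1, 0, 0, 1, 1, 0, 0],
})

# P[h(X) = 1 | A = a, Y = y] for every (a, y) combination.
p_group = df.groupby(["A", "Y"])["h"].mean()

# P[h(X) = 1 | Y = y], ignoring the sensitive feature.
p_event = df.groupby("Y")["h"].mean()

# Equalized odds holds when every group-conditional rate matches the
# overall rate for that label value; here the worst gap is 0.5.
violation = max(abs(p_group[a, y] - p_event[y]) for a, y in p_group.index)
print(violation)  # 0.5
```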

This implementation of UtilityParity defines events corresponding to the unique values of the Y array.

The prob_event pandas.Series will record the fraction of the samples corresponding to each unique value in the Y array.

The index MultiIndex will have a number of entries equal to the number of unique values for the sensitive feature, multiplied by the number of unique values of the Y array, multiplied by two (for the Lagrange multipliers for positive and negative constraints).
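For instance (a hypothetical sketch of the index layout, not the exact construction Fairlearn uses internally), with two sensitive-feature values and a binary Y the index holds 2 × 2 × 2 = 8 entries:

```python
import pandas as pd

sensitive_values = ["a0", "a1"]   # unique values of the sensitive feature
label_values = [0, 1]             # unique values of Y
signs = ["+", "-"]                # positive and negative constraints

# One Lagrange multiplier per (sign, label, group) combination.
index = pd.MultiIndex.from_product(
    [signs, label_values, sensitive_values],
    names=["sign", "event", "group"],
)
print(len(index))  # 8
```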

With these definitions, the UtilityParity.signed_weights() method will calculate the costs according to Example 4 of Agarwal et al.[1].

This Moment also supports control features, which can be used to stratify the data, with the constraint applied within each stratum, but not between strata.

Read more in the User Guide.


bound()[source]#

Return bound vector.

Returns:
A vector of bound values corresponding to all constraints.


default_objective()[source]#

Return the default objective for moments of this kind.


gamma(predictor)[source]#

Calculate the degree to which constraints are currently violated by the predictor.
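The per-constraint violations for equalized odds can be sketched as the signed gaps between group-conditional and event-conditional rates (hypothetical data and a hand-rolled index; the library's method additionally handles control features and returns a Series over its own `index`):

```python
import pandas as pd

# Hypothetical predictor outputs and data.
df = pd.DataFrame({
    "h": [1, 0, 1, 1, 0, 1, 0, 0],
    "A": ["a0", "a0", "a0", "a0", "a1", "a1", "a1", "a1"],
    "Y": [1, 1, 0, 0, 1, 1, 0, 0],
})

p_group = df.groupby(["A", "Y"])["h"].mean()   # P[h=1 | A=a, Y=y]
p_event = df.groupby("Y")["h"].mean()          # P[h=1 | Y=y]

# One "+" and one "-" entry per (a, y) pair, mirroring the paired
# positive/negative constraints described above.
gamma = {}
for (a, y), p in p_group.items():
    gamma[("+", y, a)] = p - p_event[y]
    gamma[("-", y, a)] = p_event[y] - p
gamma = pd.Series(gamma)
```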

load_data(X, y, *, sensitive_features, control_features=None)[source]#

Load the specified data into the object.


project_lambda(lambda_vec)[source]#

Return the projected lambda values.

That is, returns a lambda vector guaranteed to lead to the same or a higher value of the Lagrangian than lambda_vec, for every possible choice of the classifier h.


signed_weights(lambda_vec)[source]#

Compute the signed weights.

Uses the equations for \(C_i^0\) and \(C_i^1\) as defined in Section 3.2 of Agarwal et al.[1], in the ‘best response of the Q-player’ subsection, to compute the signed weights to be applied to the data by the next call to the underlying estimator.

Parameters:
lambda_vec – the vector of Lagrange multipliers, indexed by index
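The overall structure can be sketched roughly as follows (hypothetical data and multipliers; this shows only the split between event-level and group-level multipliers, not Fairlearn's exact arithmetic). Note that with equal "+" and "-" multipliers across all groups, the event-level and group-level terms cancel and every weight is zero:

```python
import pandas as pd

# Hypothetical data: one sample per (Y, A) combination.
df = pd.DataFrame({
    "A": ["a0", "a0", "a1", "a1"],
    "Y": [1, 0, 1, 0],
})
n = len(df)

# One "+" and one "-" multiplier per (event, group); here all "+" are
# 0.1 and all "-" are 0.0, a deliberately symmetric choice.
lambda_plus = {("+", y, a): 0.1 for y in (0, 1) for a in ("a0", "a1")}
lambda_minus = {("-", y, a): 0.0 for y in (0, 1) for a in ("a0", "a1")}
lambda_vec = pd.Series({**lambda_plus, **lambda_minus})

# Collapse each +/- pair into one signed multiplier per (event, group).
lam = {(y, a): lambda_vec[("+", y, a)] - lambda_vec[("-", y, a)]
       for y in (0, 1) for a in ("a0", "a1")}

prob_event = df["Y"].value_counts(normalize=True)    # P[Y = y]
prob_group_event = df.groupby(["Y", "A"]).size() / n  # P[Y = y, A = a]

def weight(row):
    # Sample i's weight combines an event-level term (summed over all
    # groups in its event) with its own group-level term.
    y, a = row["Y"], row["A"]
    lam_event = sum(lam[(y, g)] for g in ("a0", "a1")) / prob_event[y]
    return lam_event - lam[(y, a)] / prob_group_event[(y, a)]

weights = df.apply(weight, axis=1)
```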

short_name = 'EqualizedOdds'#

property total_samples#

Return the number of samples in the data.