fairlearn.adversarial.AdversarialFairnessRegressor#

class fairlearn.adversarial.AdversarialFairnessRegressor(*, backend='auto', predictor_model=[], adversary_model=[], predictor_optimizer='Adam', adversary_optimizer='Adam', constraints='demographic_parity', learning_rate=0.001, alpha=1.0, epochs=1, batch_size=32, shuffle=False, progress_updates=None, skip_validation=False, callbacks=None, cuda=None, warm_start=False, random_state=None)[source]#

Train PyTorch or TensorFlow regressors while mitigating unfairness.

This estimator implements the supervised learning method proposed in “Mitigating Unwanted Biases with Adversarial Learning” [1]. The training algorithm takes as input two neural network models, a predictor model and an adversarial model, defined either as PyTorch modules or TensorFlow 2 models. The API follows the conventions of scikit-learn estimators.

The regressor model takes the features X as input and seeks to predict y. The training loss is measured using the squared error.

The adversarial model for demographic parity takes scores produced by the predictor model as input, and seeks to predict sensitive_features. Depending on the type of the provided sensitive features, the model should produce a scalar or vector output. Three types of sensitive features are supported: (1) a single binary feature; (2) a single discrete feature; (3) one or multiple real-valued features. For a single binary sensitive feature and a single discrete feature, the network outputs are transformed by the logistic function and the softmax function, respectively, and the loss is the negative log likelihood. For one or multiple real-valued features, the network output is left as is, and the loss is a square loss.

The adversarial model for equalized odds additionally takes y as input.
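
For a concrete picture of these two models, here is a minimal sketch, assuming the PyTorch backend is available; the module classes, layer sizes, and feature count are illustrative, not prescribed by the API:

    import torch

    from fairlearn.adversarial import AdversarialFairnessRegressor

    class PredictorModel(torch.nn.Module):
        """Maps features to a single regression output."""

        def __init__(self, n_features):
            super().__init__()
            self.layers = torch.nn.Sequential(
                torch.nn.Linear(n_features, 50),
                torch.nn.ReLU(),
                torch.nn.Linear(50, 1),
            )

        def forward(self, x):
            return self.layers(x)

    class AdversaryModel(torch.nn.Module):
        """Maps the predictor's score to one logit for a binary sensitive feature."""

        def __init__(self):
            super().__init__()
            self.layers = torch.nn.Sequential(
                torch.nn.Linear(1, 20),
                torch.nn.ReLU(),
                torch.nn.Linear(20, 1),
            )

        def forward(self, x):
            return self.layers(x)

    mitigator = AdversarialFairnessRegressor(
        backend="torch",
        predictor_model=PredictorModel(n_features=10),
        adversary_model=AdversaryModel(),
        constraints="demographic_parity",
    )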

Parameters:
  • backend (str, BackendEngine, default = 'auto') – The backend to use. Must be one of 'torch', 'tensorflow', or 'auto' which indicates PyTorch, TensorFlow, or to automatically infer the backend from the predictor_model. You can also pass in a BackendEngine class.

  • predictor_model (list, torch.nn.Module, tf.keras.Model) – The predictor model to train. Instead of a neural network model, it is possible to pass a list \([k_1, k_2, \dots]\), where each \(k_i\) indicates either the number of nodes (if \(k_i\) is an integer), an activation function (if \(k_i\) is a string), or a layer or activation function instance directly (if \(k_i\) is a callable); see the sketch following this parameter list. The default is [], which indicates a neural network without any hidden layers. The number of nodes in the input and output layers and the final activation function (such as softmax for categorical predictors) are automatically inferred from the data. If backend is specified, you cannot pass a model that uses a different backend.

  • adversary_model (list, torch.nn.Module, tf.keras.Model) – The adversary model to train. Defined similarly as predictor_model. Must be the same type as the predictor_model.

  • predictor_optimizer (str, torch.optim, tensorflow.keras.optimizers, callable, default = 'Adam') – The optimizer class to use. If a string is passed instead, this must be either ‘SGD’ or ‘Adam’. A corresponding SGD or Adam optimizer is initialized with the given predictor model and learning rate. If an instance of a subclass of torch.optim.Optimizer or tensorflow.keras.optimizers.Optimizer is passed, this is used directly. If a callable fn is passed, it is called with the predictor model and the result is used as the optimizer, i.e. the optimizer becomes fn(predictor_model).

  • adversary_optimizer (str, torch.optim, tensorflow.keras.optimizers, callable, default = 'Adam') – The optimizer class to use. Defined similarly as predictor_optimizer.

  • constraints (str, default = 'demographic_parity') – The fairness constraint. Must be either ‘demographic_parity’ or ‘equalized_odds’.

  • learning_rate (float, default = 0.001) – A small number greater than zero to set as a learning rate.

  • alpha (float, default = 1.0) – A small number \(\alpha\) as specified in the paper. It is the factor that balances training between predicting y (choose \(\alpha\) closer to zero) and enforcing the fairness constraint (choose a larger \(\alpha\)).

  • epochs (int, default = 1) – Number of epochs to train for.

  • batch_size (int, default = 32) – Batch size. For no batching, set this to -1.

  • shuffle (bool, default = False) – When True, shuffle the data before every epoch (including the first).

  • progress_updates (number, optional, default = None) – If a number \(t\) is provided, an update about the training loop is printed after a batch is processed whenever at least \(t\) seconds have passed since the previous update.

  • skip_validation (bool, default = False) – Skip the validation of the data. Useful because validate_input is a costly operation, and we may instead pass all data to validate_input at an earlier stage. Note that not only is the check of X skipped, but also no transform is applied to y and sensitive_features.

  • callbacks (callable) – Callback function, called after every batch; useful, for instance, for validation during training. A list of callback functions can also be provided. Each callback function is passed two arguments, self (the estimator instance) and step (the completed iteration), and may return a Boolean value. If the returned value is True, the optimization algorithm terminates. This can be used to implement early stopping (see the sketch following this parameter list).

  • cuda (str, default = None) – A string to indicate which device to use when training. For instance, set cuda='cuda:0' to train on the first GPU. Only for PyTorch backend.

  • warm_start (bool, default = False) – Normally, when set to False, a call to fit() triggers reinitialization, which discards the models and initializes them again. Setting to True triggers reuse of these models instead. Note: if pre-initialized models are passed, the models (and their parameters) are never discarded.

  • random_state (int, RandomState, default = None) – Controls the randomized aspects of this algorithm, such as shuffling. Useful to get reproducible output across multiple function calls.
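
The sketch below ties several of these parameters together: the list shortcut for predictor_model and adversary_model, shuffling, and a callback used for early stopping. The synthetic data, hyperparameters, and the "leaky_relu" activation string are illustrative assumptions; the example also assumes the PyTorch backend is installed.

    import numpy as np

    from fairlearn.adversarial import AdversarialFairnessRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    sensitive = rng.integers(0, 2, size=200)      # single binary sensitive feature
    y = X @ rng.normal(size=5) + 0.5 * sensitive  # synthetic, biased target

    def stop_early(model, step):
        # Called after every batch; returning True terminates training.
        return step >= 500

    mitigator = AdversarialFairnessRegressor(
        backend="torch",
        predictor_model=[50, "leaky_relu"],  # one hidden layer of 50 nodes
        adversary_model=[20, "leaky_relu"],
        learning_rate=0.001,
        alpha=1.0,
        epochs=10,
        batch_size=32,
        shuffle=True,
        callbacks=stop_early,
        random_state=42,
    )
    mitigator.fit(X, y, sensitive_features=sensitive)
    y_pred = mitigator.predict(X)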

References

[1] Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). “Mitigating Unwanted Biases with Adversarial Learning.” Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society.

Methods

decision_function(X) – Compute predictor output for given test data.
fit(X, y, *[, sensitive_features]) – Fit the model based on the given training data and sensitive features.
get_metadata_routing() – Get metadata routing of this object.
get_params([deep]) – Get parameters for this estimator.
partial_fit(X, y, *[, sensitive_features]) – Perform one epoch on given samples and update model.
predict(X) – Compute predictions for given test data.
score(X, y[, sample_weight]) – Return the coefficient of determination of the prediction.
set_fit_request(*[, sensitive_features]) – Request metadata passed to the fit method.
set_params(**params) – Set the parameters of this estimator.
set_partial_fit_request(*[, sensitive_features]) – Request metadata passed to the partial_fit method.
set_score_request(*[, sample_weight]) – Request metadata passed to the score method.

decision_function(X)[source]#

Compute predictor output for given test data.

Parameters:

X (numpy.ndarray) – Two-dimensional numpy array containing test data

Returns:

Y_pred – Two-dimensional array containing the model’s (soft-)predictions

Return type:

numpy.ndarray

fit(X, y, *, sensitive_features=None)[source]#

Fit the model based on the given training data and sensitive features.

Currently, for discrete y and sensitive_features, ALL classes need to be passed in the first call to fit!

Parameters:
  • X (numpy.ndarray) – Two-dimensional numpy array containing training data

  • y (array) – Array-like containing training targets

  • sensitive_features (array) – Array-like containing the sensitive features of the training data.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

partial_fit(X, y, *, sensitive_features=None)[source]#

Perform one epoch on given samples and update model.

Parameters:
  • X (numpy.ndarray) – Two-dimensional numpy array containing training data

  • y (array) – Array-like containing training targets

  • sensitive_features (array) – Array-like containing the sensitive features of the training data.
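
A minimal sketch of incremental training, assuming data arrives in chunks (the chunk generator and backend choice are illustrative). The first chunk is passed to fit so that the networks are initialized and, for discrete sensitive features, all classes are seen; subsequent chunks go through partial_fit, each call performing one epoch on that chunk without reinitializing the model:

    import numpy as np

    from fairlearn.adversarial import AdversarialFairnessRegressor

    rng = np.random.default_rng(1)

    def make_chunk(n=64):
        # Hypothetical stream of training data.
        X = rng.normal(size=(n, 5))
        s = rng.integers(0, 2, size=n)  # binary sensitive feature
        y = X.sum(axis=1) + 0.3 * s
        return X, y, s

    mitigator = AdversarialFairnessRegressor(backend="torch")

    X, y, s = make_chunk()
    mitigator.fit(X, y, sensitive_features=s)
    for _ in range(4):
        X, y, s = make_chunk()
        mitigator.partial_fit(X, y, sensitive_features=s)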

predict(X)[source]#

Compute predictions for given test data.

Predictions are discrete for classifiers, making use of the predictor_function.

Parameters:

X (numpy.ndarray) – Two-dimensional numpy array containing test data

Returns:

Y_pred – array-like containing the model’s predictions fed through the (discrete) predictor_function

Return type:

array

score(X, y, sample_weight=None)[source]#

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
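
As a quick illustration of the formula (the arrays below stand in for y and self.predict(X) of a fitted estimator), the value \(1 - \frac{u}{v}\) coincides with sklearn.metrics.r2_score:

    import numpy as np
    from sklearn.metrics import r2_score

    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.0, 8.0])

    u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
    v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
    assert np.isclose(1 - u / v, r2_score(y_true, y_pred))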

set_fit_request(*, sensitive_features: bool | None | str = '$UNCHANGED$') → AdversarialFairnessRegressor[source]#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sensitive_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sensitive_features parameter in fit.

Returns:

self – The updated object.

Return type:

object
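
A minimal sketch of when this request matters, assuming scikit-learn >= 1.3 with metadata routing enabled (the pipeline and data names are illustrative): the request tells a meta-estimator such as Pipeline to forward sensitive_features to this estimator's fit.

    import sklearn
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    from fairlearn.adversarial import AdversarialFairnessRegressor

    sklearn.set_config(enable_metadata_routing=True)

    mitigator = AdversarialFairnessRegressor().set_fit_request(sensitive_features=True)
    pipe = Pipeline([("scale", StandardScaler()), ("mitigate", mitigator)])

    # With the request set, the pipeline routes the metadata to the inner fit:
    # pipe.fit(X, y, sensitive_features=sensitive)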

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params (dict) – Estimator parameters.

Returns:

self – Estimator instance.

Return type:

estimator instance

set_partial_fit_request(*, sensitive_features: bool | None | str = '$UNCHANGED$') → AdversarialFairnessRegressor[source]#

Request metadata passed to the partial_fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to partial_fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sensitive_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sensitive_features parameter in partial_fit.

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → AdversarialFairnessRegressor[source]#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self – The updated object.

Return type:

object