fairlearn.preprocessing.PrototypeRepresentationLearner#

class fairlearn.preprocessing.PrototypeRepresentationLearner(n_prototypes=2, reconstruct_weight=1.0, target_weight=1.0, fairness_weight=1.0, random_state=None, tol=1e-06, max_iter=1000)[source]#

A transformer and classifier that learns a latent representation of the input data to obfuscate the sensitive features while preserving the classification and reconstruction performance.

The model minimizes a loss function that consists of three terms: the reconstruction error, the classification error, and an approximation of the demographic parity difference.
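As a rough sketch, following the formulation in Zemel et al. [1] (the weights below correspond to the reconstruct_weight, target_weight, and fairness_weight parameters; the exact form of each term is an implementation detail):

    L = A_x \cdot L_x + A_y \cdot L_y + A_z \cdot L_z

where L_x is the reconstruction error, L_y the classification error, and L_z the approximation of the demographic parity difference.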

Read more in the User Guide.

Parameters:
n_prototypes : int, default=2

Number of prototypes to use in the latent representation.

reconstruct_weight : float, default=1.0

Weight of the reconstruction error term in the objective function.

target_weight : float, default=1.0

Weight of the classification error term in the objective function.

fairness_weight : float, default=1.0

Weight of the fairness error term in the objective function.

random_state : int, np.random.RandomState, or None, default=None

Seed or random number generator for reproducibility.

tol : float, default=1e-6

Convergence tolerance for the optimization algorithm.

max_iter : int, default=1000

Maximum number of iterations for the optimization algorithm.

Attributes:
n_prototypes : int

Number of prototypes to use in the latent representation.

reconstruct_weight : float

Weight of the reconstruction error term in the objective function.

target_weight : float

Weight of the classification error term in the objective function.

fairness_weight : float

Weight of the fairness error term in the objective function.

random_state : int, np.random.RandomState, or None

Seed or random number generator for reproducibility.

tol : float

Tolerance for the optimization algorithm.

max_iter : int

Maximum number of iterations for the optimization algorithm.

coef_ : np.ndarray

Coefficients of the learned model.

n_iter_ : int

Number of iterations run by the optimization algorithm.

n_features_in_ : int

Number of features in the input data.

classes_ : np.ndarray or None

Unique classes in the target variable. Only set if target labels are provided during fitting, otherwise None.

Notes

The PrototypeRepresentationLearner implements the algorithm introduced in Zemel et al. [1].

If no sensitive features are provided during fitting, the loss function will not include the fairness error term.

If no target labels are provided during fitting, the loss function will not include the classification error term and the model will not be able to predict probabilities or labels.
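For instance, a minimal sketch of the unsupervised case (no target labels; variable names are illustrative):

>>> import numpy as np
>>> from fairlearn.preprocessing import PrototypeRepresentationLearner
>>> X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]])
>>> sensitive_features = np.array([0, 0, 1, 1])
>>> prl = PrototypeRepresentationLearner(random_state=42)
>>> prl = prl.fit(X, sensitive_features=sensitive_features)  # no y: classification term is dropped
>>> Z = prl.transform(X)  # the latent representation is still available
>>> # prl.predict(X) would raise a ValueError because no labels were provided during fitting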

References

[1] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork, "Learning Fair Representations," Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.

Examples

>>> import numpy as np
>>> from fairlearn.preprocessing import PrototypeRepresentationLearner
>>> X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]])
>>> y = np.array([0, 1, 0, 1])
>>> sensitive_features = np.array([0, 0, 1, 1])
>>> prl = PrototypeRepresentationLearner(n_prototypes=2, random_state=42)
>>> prl.fit(X, y, sensitive_features=sensitive_features)
PrototypeRepresentationLearner(random_state=42)
>>> X_transformed = prl.transform(X)
>>> y_pred = prl.predict(X)
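As a hedged continuation of this example, the fitted estimator also exposes class probabilities and the latent dimensionality (shapes follow from the documented behaviour of predict_proba and transform):

>>> proba = prl.predict_proba(X)  # shape (4, 2): columns for the negative and positive class
>>> latent_dim = X_transformed.shape[1]  # one column per prototype, i.e. n_prototypes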
fit(X, y=None, *, sensitive_features=None)[source]#

Fit the Prototype Representation Learner to the provided data.

Return type:

PrototypeRepresentationLearner

Parameters:
X : array-like of shape (n_samples, n_features)

The input samples.

y : array-like of shape (n_samples,) or None, default=None

The target values.

sensitive_features : array-like or None, default=None

Sensitive features whose groups will be used to promote demographic parity. If None, the fairness error term will not be included in the loss function.

Returns:
self : PrototypeRepresentationLearner

Returns the fitted instance.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.
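Since the extra fit parameters are forwarded to fit, sensitive_features can be passed directly; a minimal sketch reusing the toy data from the class-level example:

>>> Z = prl.fit_transform(X, y, sensitive_features=sensitive_features)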

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

predict(X)[source]#

Predict the labels for the given input data.

Return type:

ndarray

Parameters:
X : array-like of shape (n_samples, n_features)

The input data to predict.

Returns:
np.ndarray

The predicted labels for the input data.

Raises:
NotFittedError

If the estimator is not fitted yet.

ValueError

If no labels were provided during fitting.

predict_proba(X)[source]#

Predict class probabilities for the input samples X.

Return type:

ndarray

Parameters:
X : array-like of shape (n_samples, n_features)

The input samples.

Returns:
np.ndarray of shape (n_samples, 2)

The class probabilities of the input samples. The first column represents the probability of the negative class, and the second column represents the probability of the positive class.

Raises:
NotFittedError

If the estimator is not fitted yet.

ValueError

If no labels were provided during fitting.

score(X, y, sample_weight=None)[source]#

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_fit_request(*, sensitive_features: bool | None | str = '$UNCHANGED$') PrototypeRepresentationLearner[source]#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sensitive_features : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sensitive_features parameter in fit.

Returns:
self : object

The updated object.
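A hedged sketch of routing sensitive_features to fit inside a Pipeline (assumes metadata routing has been enabled via sklearn.set_config and reuses the toy data from the class-level example):

>>> from sklearn import set_config
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> set_config(enable_metadata_routing=True)
>>> prl = PrototypeRepresentationLearner(random_state=42).set_fit_request(sensitive_features=True)
>>> pipe = Pipeline([("scale", StandardScaler()), ("prl", prl)])
>>> pipe = pipe.fit(X, y, sensitive_features=sensitive_features)  # routed to the prl step's fit
>>> set_config(enable_metadata_routing=False)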

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform : {“default”, “pandas”, “polars”}, default=None

Configure output of transform and fit_transform.

  • “default”: Default output format of a transformer

  • “pandas”: DataFrame output

  • “polars”: Polars output

  • None: Transform configuration is unchanged

New in version 1.4: “polars” option was added.

Returns:
self : estimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') PrototypeRepresentationLearner[source]#

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.

transform(X)[source]#

Transform the input data X using the learned prototype representation. Each sample is mapped to its learned latent representation, i.e. the softmax of its negative distances to the prototypes.

Return type:

ndarray

Parameters:
X : array-like of shape (n_samples, n_features)

The input data to transform.

Returns:
np.ndarray

The transformed data.

Notes

This method checks if the model is fitted, validates the input data, and then applies the learned prototype representation.
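Concretely, for a sample x and learned prototypes v_1, ..., v_K, the k-th latent coordinate can be sketched as (the exact distance function d is an implementation detail):

    M_k(x) = \frac{\exp(-d(x, v_k))}{\sum_{j=1}^{K} \exp(-d(x, v_j))}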

property alpha_: ndarray#
classes_: ndarray | None#
coef_: ndarray#
fairness_weight: float#
max_iter: int#
n_features_in_: int#
n_iter_: int#
n_prototypes: int#
property prototypes_: ndarray#
random_state: int | RandomState | None#
reconstruct_weight: float#
target_weight: float#
tol: float#