API Docs
fairlearn.datasets
This module contains datasets that can be used for benchmarking and education.
Load the ACS Income dataset (regression). 

Load the UCI Adult dataset (binary classification). 

Load the UCI bank marketing dataset (binary classification). 

Load the Boston housing dataset (regression). 

Load the 'Default of Credit Card clients' dataset (binary classification). 

Load the preprocessed Diabetes 130-Hospitals dataset (binary classification). 
fairlearn.metrics
Functionality for computing metrics, with a particular focus on disaggregated metrics.
For our purposes, a metric is a function with signature f(y_true, y_pred, ...), where y_true are the true values and y_pred are the values predicted by a machine learning algorithm. Other arguments may be present (most often sample weights), which affect how the metric is calculated.
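As an illustrative sketch (not part of fairlearn itself), a metric with this signature might look like the following, with an optional sample_weight argument affecting the result:

```python
def weighted_accuracy(y_true, y_pred, sample_weight=None):
    """A metric following the f(y_true, y_pred, ...) signature described above."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    # Sum the weights of correctly predicted samples, normalized by total weight.
    correct = sum(w for t, p, w in zip(y_true, y_pred, sample_weight) if t == p)
    return correct / sum(sample_weight)

weighted_accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # → 0.75
```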
This module provides the concept of a disaggregated metric. This is a metric where, in addition to the y_true and y_pred values, the user provides information about group membership for each sample.
For example, a user could provide a ‘Gender’ column, and the
disaggregated metric would contain separate results for the subgroups
‘male’, ‘female’ and ‘nonbinary’ indicated by that column.
The underlying metric function is evaluated for each of these three
subgroups.
This extends to multiple grouping columns, calculating the metric
for each combination of subgroups.
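The disaggregation described above can be sketched in plain Python. Fairlearn's MetricFrame does this (and much more) internally; the function below is an illustrative stand-in, not the library's implementation:

```python
from collections import defaultdict

def disaggregated_metric(metric, y_true, y_pred, sensitive_features):
    """Evaluate `metric` separately for each subgroup (illustrative sketch)."""
    groups = defaultdict(lambda: ([], []))
    for t, p, g in zip(y_true, y_pred, sensitive_features):
        groups[g][0].append(t)
        groups[g][1].append(p)
    # Evaluate the underlying metric once per subgroup.
    return {g: metric(t, p) for g, (t, p) in groups.items()}

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

by_group = disaggregated_metric(
    accuracy,
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 1, 1],
    sensitive_features=["male", "male", "male", "female", "female", "nonbinary"],
)
# by_group == {"male": 2/3, "female": 0.5, "nonbinary": 1.0}
```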
Calculate the number of data points in each group when working with MetricFrame. 

Calculate the demographic parity difference. 

Calculate the demographic parity ratio. 

Calculate the equalized odds difference. 

Calculate the equalized odds ratio. 

Calculate the false negative rate (also called miss rate). 

Calculate the false positive rate (also called fallout). 

Create a scalar-returning metric function based on aggregation of a disaggregated metric. 

Calculate the (weighted) mean prediction. 

Create a scatter plot comparing multiple models along two metrics. 

Calculate the fraction of predicted labels matching the 'good' outcome. 

Calculate the true negative rate (also called specificity or selectivity). 

Calculate the true positive rate (also called sensitivity, recall, or hit rate). 
Collection of disaggregated metric values. 
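To make the demographic-parity summaries above concrete, here is a stdlib-only sketch of how the difference and ratio are defined: the maximum gap, and the minimum quotient, between per-group selection rates. Fairlearn's own functions also accept y_true and sample weights; this sketch keeps only the essentials:

```python
def selection_rate(y_pred):
    """Fraction of samples predicted as the positive outcome."""
    return sum(y_pred) / len(y_pred)

def per_group_selection_rates(y_pred, sensitive_features):
    return {
        g: selection_rate([p for p, s in zip(y_pred, sensitive_features) if s == g])
        for g in set(sensitive_features)
    }

def demographic_parity_difference(y_pred, sensitive_features):
    rates = per_group_selection_rates(y_pred, sensitive_features).values()
    return max(rates) - min(rates)

def demographic_parity_ratio(y_pred, sensitive_features):
    rates = per_group_selection_rates(y_pred, sensitive_features).values()
    return min(rates) / max(rates)

y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
sf = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" selects 3/4, group "b" selects 1/4:
# difference = 0.5, ratio = 1/3
```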
fairlearn.postprocessing
This module contains methods which operate on a predictor, rather than an estimator.
The predictor’s output is adjusted to fulfill specified parity constraints. The postprocessors learn how to adjust the predictor’s output from the training data.
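The core idea of threshold-based postprocessing can be sketched as follows: given a scoring predictor, apply a separate decision threshold per group so that a chosen rate is equalized. This toy (not the actual optimization algorithm, which learns the thresholds from training data against a parity constraint) just applies group-specific thresholds that were picked by hand:

```python
def thresholded_predict(scores, sensitive_features, thresholds):
    """Apply a group-specific threshold to each score (illustrative only)."""
    return [
        1 if score >= thresholds[group] else 0
        for score, group in zip(scores, sensitive_features)
    ]

scores = [0.9, 0.6, 0.4, 0.7, 0.55, 0.2]
sf = ["a", "a", "a", "b", "b", "b"]
# With a stricter threshold for group "a" than for group "b",
# both groups end up with the same selection rate (2 of 3):
preds = thresholded_predict(scores, sf, {"a": 0.5, "b": 0.3})  # → [1, 1, 0, 1, 1, 0]
```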
A classifier based on the threshold optimization approach. 
Plot the chosen solution of the threshold optimizer. 
fairlearn.preprocessing
Preprocessing tools to help deal with sensitive attributes.
A component that filters out sensitive correlations in a dataset. 
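The idea behind filtering out sensitive correlations — projecting out the linear component of a feature that is explained by a sensitive column — can be sketched with one-variable least squares. This is a toy in the spirit of the component above, not its actual implementation:

```python
def remove_linear_correlation(feature, sensitive):
    """Residualize `feature` against `sensitive` via one-variable least squares."""
    n = len(feature)
    mean_f = sum(feature) / n
    mean_s = sum(sensitive) / n
    cov = sum((f - mean_f) * (s - mean_s) for f, s in zip(feature, sensitive)) / n
    var = sum((s - mean_s) ** 2 for s in sensitive) / n
    beta = cov / var
    # Subtract the part of the feature predictable from the sensitive column;
    # centering on the sensitive mean keeps the feature's original mean.
    return [f - beta * (s - mean_s) for f, s in zip(feature, sensitive)]

# A feature perfectly correlated with the sensitive column collapses to its mean:
decorrelated = remove_linear_correlation([2, 4, 6, 8], [1, 2, 3, 4])
# → [5.0, 5.0, 5.0, 5.0]
```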
fairlearn.reductions
This module contains algorithms implementing the reductions approach to disparity mitigation.
In this approach, disparity constraints are cast as Lagrange multipliers, which cause the reweighting and relabelling of the input data. This reduces the problem back to standard machine learning training.
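A heavily simplified sketch of that reduction: a Lagrange multiplier for a two-group demographic-parity constraint turns each sample into a (weight, label) pair of an ordinary weighted classification problem. The real algorithms solve this game iteratively over many multiplier values; the function and group names below are purely illustrative:

```python
def reduce_to_weighted_classification(y_true, groups, lam):
    """For multiplier `lam` on a two-group ("a" vs "b") demographic-parity
    constraint, produce the (weight, label) pairs of the induced problem."""
    n_a = sum(1 for g in groups if g == "a")
    n_b = len(groups) - n_a
    reweighted = []
    for y, g in zip(y_true, groups):
        # Cost of predicting 0 is the misclassification cost on positives;
        # cost of predicting 1 adds the signed constraint term from `lam`.
        cost0 = 1.0 if y == 1 else 0.0
        cost1 = (1.0 if y == 0 else 0.0) + lam * (1 / n_a if g == "a" else -1 / n_b)
        # The cheaper prediction becomes the new label; the cost gap, the weight.
        label = 1 if cost1 < cost0 else 0
        reweighted.append((abs(cost1 - cost0), label))
    return reweighted

# With lam = 0 the constraint is inactive: labels are just y_true, weights 1.
# As lam grows, labels in group "a" are pushed away from the positive class.
```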
Class to evaluate absolute loss. 

Moment for constraining the worst-case loss by a group. 

Moment that can be expressed as weighted classification error. 

Implementation of demographic parity as a moment. 

Implementation of equalized odds as a moment. 

Misclassification error as a moment. 

Implementation of error rate parity as a moment. 

An Estimator which implements the exponentiated gradient reduction. 

Implementation of true positive rate parity as a moment. 

Implementation of false positive rate parity as a moment. 

A generic moment for parity in utilities (or costs) under classification. 

Estimator to perform a grid search given a black-box estimator algorithm. 

Moment that can be expressed as weighted loss. 

Generic moment. 

Moment for constraining the worst-case loss by a group. 

Class to evaluate the square loss. 

Class to evaluate a zero-one loss. 
fairlearn.adversarial
Adversarial techniques to help mitigate unfairness.
Train PyTorch or TensorFlow classifiers while mitigating unfairness. 

Train PyTorch or TensorFlow regressors while mitigating unfairness. 
fairlearn.experimental
Enables experimental functionality that may be migrated to other modules at a later point.
Warning
Anything can break from version to version without further warning.
Visualization for metrics with and without confidence intervals. 