fairlearn.metrics.MetricFrame
- class fairlearn.metrics.MetricFrame(*, metrics, y_true, y_pred, sensitive_features, control_features=None, sample_params=None)
Collection of disaggregated metric values.
This data structure stores and manipulates disaggregated values for any number of underlying metrics. At least one sensitive feature must be supplied, which is used to split the data into subgroups. Each underlying metric is calculated across the entire dataset (made available by the overall property) and for each identified subgroup (made available by the by_group property).
The only limitations placed on the metric functions are that:
- The first two arguments they take must be y_true and y_pred arrays.
- Any other arguments must correspond to sample properties (such as sample weights), meaning that their first dimension is the same as that of y_true and y_pred. These arguments will be split up along with the y_true and y_pred arrays.
The interpretation of the y_true and y_pred arrays is up to the underlying metric - it is perfectly possible to pass in lists of class probability tuples. We also support non-scalar return types for the metric function (such as confusion matrices) at the current time. However, the aggregation functions will not be well defined in this case.
Group fairness metrics are obtained by methods that implement various aggregators over group-level metrics, such as the maximum, minimum, or the worst-case difference or ratio.
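For instance, a metric with a non-scalar return type can be passed directly; a minimal sketch (the data here is purely illustrative):

>>> from sklearn.metrics import confusion_matrix
>>> from fairlearn.metrics import MetricFrame
>>> cm_frame = MetricFrame(
...     metrics=confusion_matrix,
...     y_true=[0, 1, 1, 0, 1, 0],
...     y_pred=[0, 1, 0, 0, 1, 1],
...     sensitive_features=['a', 'a', 'a', 'b', 'b', 'b'])
>>> per_group_cm = cm_frame.by_group  # one 2x2 array per group; difference() etc. are not well defined here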
This data structure also supports the concept of ‘control features.’ Like the sensitive features, control features identify subgroups within the data, but aggregations are not performed over the control features. Instead, the aggregations produce a result for each subgroup identified by the control feature(s). The name ‘control features’ refers to the statistical practice of ‘controlling’ for a variable.
Read more in the User Guide.
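As an illustration of control features, the sketch below (with made-up data and feature values) computes the selection rate for each sex while controlling for an age band; overall then holds one value per age band rather than a single scalar:

>>> from fairlearn.metrics import MetricFrame, selection_rate
>>> mf_ctrl = MetricFrame(
...     metrics=selection_rate,
...     y_true=[1, 0, 1, 1, 0, 1, 0, 1],
...     y_pred=[1, 0, 1, 0, 0, 1, 1, 1],
...     sensitive_features=['F', 'F', 'M', 'M', 'F', 'F', 'M', 'M'],
...     control_features=['<40', '<40', '<40', '<40', '40+', '40+', '40+', '40+'])
>>> per_band = mf_ctrl.overall        # one selection rate per age band
>>> per_band_gap = mf_ctrl.difference()  # one difference per age band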
- Parameters
metrics (callable or dict) – The underlying metric functions which are to be calculated. This can either be a single metric function or a dictionary of functions. These functions must be callable as fn(y_true, y_pred, **sample_params). If there are any other arguments required (such as beta for sklearn.metrics.fbeta_score()) then functools.partial() must be used.
Note that the values returned by various members of the class change based on whether this argument is a callable or a dictionary of callables. This distinction remains even if the dictionary only contains a single entry.
y_true (List, pandas.Series, numpy.ndarray, pandas.DataFrame) – The ground-truth labels (for classification) or target values (for regression).
y_pred (List, pandas.Series, numpy.ndarray, pandas.DataFrame) – The predictions.
sensitive_features (List, pandas.Series, dict of 1d arrays, numpy.ndarray, pandas.DataFrame) – The sensitive features which should be used to create the subgroups. At least one sensitive feature must be provided. All names (whether on pandas objects or dictionary keys) must be strings. We also forbid DataFrames with column names of None. For cases where no names are provided we generate names sensitive_feature_[n].
control_features (List, pandas.Series, dict of 1d arrays, numpy.ndarray, pandas.DataFrame) – Control features are similar to sensitive features, in that they divide the input data into subgroups. Unlike the sensitive features, aggregations are not performed across the control features - for example, the overall property will have one value for each subgroup in the control feature(s), rather than a single value for the entire data set. Control features can be specified similarly to the sensitive features. However, their default names (if none can be identified in the input values) are of the format control_feature_[n]. See the section on intersecting groups in the User Guide to learn how to use control levels.
Note the types returned by members of the class vary based on whether control features are present.
sample_params (dict) – Parameters for the metric function(s). If there is only one metric function, then this is a dictionary of strings and array-like objects, which are split alongside the y_true and y_pred arrays, and passed to the metric function. If there are multiple metric functions (passed as a dictionary), then this is a nested dictionary, with the first set of string keys identifying the metric function name, and the values being the string-to-array-like dictionaries. A sketch combining sample_params with functools.partial() appears after this parameter list.
metric (callable or dict) – The underlying metric functions which are to be calculated. This can either be a single metric function or a dictionary of functions. These functions must be callable as fn(y_true, y_pred, **sample_params). If there are any other arguments required (such as beta for sklearn.metrics.fbeta_score()) then functools.partial() must be used.
Deprecated since version 0.7.0: metric will be removed in version 0.10.0, use metrics instead.
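Here is the sketch referred to above: a hypothetical weighted evaluation that uses functools.partial() to bind beta for sklearn.metrics.fbeta_score(), and a nested sample_params dictionary to route sample weights to both metrics (the data, weights, and variable names are illustrative):

>>> import functools
>>> from sklearn.metrics import accuracy_score, fbeta_score
>>> from fairlearn.metrics import MetricFrame
>>> weights = [1, 2, 1, 1, 3, 1, 2, 1]
>>> fns = {"accuracy": accuracy_score,
...        "fbeta_0.5": functools.partial(fbeta_score, beta=0.5)}
>>> s_params = {"accuracy": {"sample_weight": weights},
...             "fbeta_0.5": {"sample_weight": weights}}
>>> mf_w = MetricFrame(
...     metrics=fns,
...     y_true=[1, 1, 0, 1, 0, 1, 1, 0],
...     y_pred=[1, 0, 0, 1, 1, 1, 0, 0],
...     sensitive_features=['a'] * 4 + ['b'] * 4,
...     sample_params=s_params)  # weights are split per group along with y_true and y_pred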
Examples
We will now go through some simple examples (see the User Guide for a more in-depth discussion):
>>> from fairlearn.metrics import MetricFrame, selection_rate
>>> from sklearn.metrics import accuracy_score
>>> import pandas as pd
>>> y_true = [1,1,1,1,1,0,0,1,1,0]
>>> y_pred = [0,1,1,1,1,0,0,0,1,1]
>>> sex = ['Female']*5 + ['Male']*5
>>> metrics = {"selection_rate": selection_rate}
>>> mf1 = MetricFrame(
...     metrics=metrics,
...     y_true=y_true,
...     y_pred=y_pred,
...     sensitive_features=sex)
Access the disaggregated metrics via a pandas DataFrame (a dictionary of metrics always yields a DataFrame, even with a single entry)
>>> mf1.by_group
                    selection_rate
sensitive_feature_0
Female                         0.8
Male                           0.4
Access the largest difference, smallest ratio, and worst case performance
>>> print(f"difference: {mf1.difference()[0]:.3} "
...       f"ratio: {mf1.ratio()[0]:.3} "
...       f"max across groups: {mf1.group_max()[0]:.3}")
difference: 0.4 ratio: 0.5 max across groups: 0.8
You can also evaluate multiple metrics by providing a dictionary
>>> metrics_dict = {"accuracy": accuracy_score,
...                 "selection_rate": selection_rate}
>>> mf2 = MetricFrame(
...     metrics=metrics_dict,
...     y_true=y_true,
...     y_pred=y_pred,
...     sensitive_features=sex)
Access the disaggregated metrics via a pandas DataFrame
>>> mf2.by_group
                     accuracy  selection_rate
sensitive_feature_0
Female                    0.8             0.8
Male                      0.6             0.4
The largest difference, smallest ratio, and the maximum and minimum values across the groups are then all pandas Series, for example:
>>> mf2.difference()
accuracy          0.2
selection_rate    0.4
dtype: float64
You’ll probably want to view them transposed
>>> pd.DataFrame({'difference': mf2.difference(),
...               'ratio': mf2.ratio(),
...               'group_min': mf2.group_min(),
...               'group_max': mf2.group_max()}).T
            accuracy  selection_rate
difference      0.2              0.4
ratio           0.75             0.5
group_min       0.6              0.4
group_max       0.8              0.8
More information about plotting metrics can be found in the plotting section of the User Guide.
- Attributes
by_group – Return the collection of metrics evaluated for each subgroup.
control_levels – Return a list of the feature names which are produced by control features.
overall – Return the underlying metrics evaluated on the whole dataset.
sensitive_levels – Return a list of the feature names which are produced by sensitive features.
Methods
difference([method, errors]) – Return the maximum absolute difference between groups for each metric.
group_max([errors]) – Return the maximum value of the metric over the sensitive features.
group_min([errors]) – Return the minimum value of the metric over the sensitive features.
ratio([method, errors]) – Return the minimum ratio between groups for each metric.
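The difference() and ratio() methods accept a method argument that selects the aggregation strategy; a brief sketch, assuming the default 'between_groups' strategy and the alternative 'to_overall' strategy described in the User Guide (data here is illustrative):

>>> from fairlearn.metrics import MetricFrame, selection_rate
>>> mf = MetricFrame(
...     metrics=selection_rate,
...     y_true=[1, 1, 0, 0, 1, 0],
...     y_pred=[1, 0, 0, 1, 1, 0],
...     sensitive_features=['a', 'a', 'a', 'b', 'b', 'b'])
>>> gap_groups = mf.difference(method='between_groups')  # max |difference| between any two groups
>>> gap_overall = mf.difference(method='to_overall')     # max |difference| from the overall value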