fairlearn.reductions.ExponentiatedGradient#
- class fairlearn.reductions.ExponentiatedGradient(estimator, constraints, *, objective=None, eps=0.01, max_iter=50, nu=None, eta0=2.0, run_linprog_step=True, sample_weight_name='sample_weight')[source]#
- An Estimator which implements the exponentiated gradient reduction. The exponentiated gradient algorithm is described in detail by Agarwal et al.[1]. Read more in the User Guide.
- Changed in version 0.3.0: Was a function before, not a class.
- Changed in version 0.4.6: Requires 0-1 labels for classification problems.
- Parameters:
- estimator (estimator) – An estimator implementing methods fit(X, y, sample_weight) and predict(X), where X is the matrix of features, y is the vector of labels (binary classification) or continuous values (regression), and sample_weight is a vector of weights. In binary classification, labels y and predictions returned by predict(X) are either 0 or 1. In regression, values y and predictions are continuous.
- constraints (fairlearn.reductions.Moment) – The fairness constraints expressed as a - Moment.
- objective (fairlearn.reductions.Moment) – The objective expressed as a Moment. The default is ErrorRate() for binary classification and MeanLoss(...) for regression.
- eps (float) – Allowed fairness constraint violation; the solution is guaranteed to have the error within 2*best_gap of the best error under constraint eps; the constraint violation is at most 2*(eps+best_gap). Changed in version 0.5.0: eps is now only responsible for setting the L1 norm bound in the optimization.
- max_iter (int) – Maximum number of iterations. Changed in version 0.5.0: Used to be T.
- nu (float) – Convergence threshold for the duality gap, corresponding to a conservative automatic setting based on the statistical uncertainty in measuring classification error 
- eta0 (float) – Initial setting of the learning rate. Changed in version 0.5.0: Used to be eta_mul.
- run_linprog_step (bool) – If True, each step of exponentiated gradient is followed by the saddle point optimization over the convex hull of classifiers returned so far; default True. New in version 0.5.0.
- sample_weight_name (str) – Name of the argument to estimator.fit() which supplies the sample weights (defaults to sample_weight). New in version 0.5.0.
 
 - Methods
 - fit(X, y, **kwargs) – Return a fair classifier under specified fairness constraints.
 - get_metadata_routing() – Get metadata routing of this object.
 - get_params([deep]) – Get parameters for this estimator.
 - predict(X[, random_state]) – Provide predictions for the given input data.
 - set_params(**params) – Set the parameters of this estimator.
 - set_predict_request(*[, random_state]) – Request metadata passed to the predict method.
 - fit(X, y, **kwargs)[source]#
- Return a fair classifier under specified fairness constraints. - Parameters:
- X (numpy.ndarray or pandas.DataFrame) – Feature data 
- y (numpy.ndarray, pandas.DataFrame, pandas.Series, or list) – Label vector 
 
 
 - get_metadata_routing()[source]#
- Get metadata routing of this object. - Please check User Guide on how the routing mechanism works. - Returns:
- routing – A MetadataRequest encapsulating routing information.
- Return type:
- MetadataRequest 
 
 - predict(X, random_state=None)[source]#
- Provide predictions for the given input data.
- Predictions are randomized, i.e., repeatedly calling predict with the same feature data may yield different output. This non-deterministic behavior is intended and stems from the nature of the exponentiated gradient algorithm.
- Notes: A fitted ExponentiatedGradient has an attribute predictors_, an array of predictors, and an attribute weights_, an array of non-negative floats of the same length. The prediction on each data point in X is obtained by first picking a random predictor according to the probabilities in weights_ and then applying it. Different predictors can be chosen on different data points.
- Parameters:
- X (numpy.ndarray or pandas.DataFrame) – Feature data 
- random_state (int or RandomState instance, default=None) – Controls random numbers used for randomized predictions. Pass an int for reproducible output across multiple function calls. 
 
- Returns:
- The prediction. If X represents the data for a single example, the result will be a scalar. Otherwise the result will be a vector.
- Return type:
- Scalar or vector 
 
 - set_params(**params)[source]#
- Set the parameters of this estimator.
- The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **params (dict) – Estimator parameters. 
- Returns:
- self – Estimator instance. 
- Return type:
- estimator instance 
 
 - set_predict_request(*, random_state: bool | None | str = '$UNCHANGED$') → ExponentiatedGradient[source]#
- Request metadata passed to the predict method.
- Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.
- The options for each parameter are:
- True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
 - The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
 - New in version 1.3.
 - Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.