Think fairness.
Build for everyone.

A toolkit to assess and improve the fairness of machine learning models.

• Use common fairness metrics and an interactive dashboard to assess which groups of people may be negatively impacted.

• Use state-of-the-art algorithms to mitigate unfairness in your classification and regression models.

Why Fairlearn?

Fairlearn provides developers and data scientists with capabilities to assess the fairness of their machine learning models and mitigate unfairness. Assess existing models and train new models with fairness in mind. Compare models and make trade-offs between fairness and model performance.

Choice and Flexibility

Wide selection of fairness metrics and state-of-the-art mitigation algorithms

Broad Coverage

Open API that’s easy to access and supports standard machine learning algorithms

Interactive Visualizations

Assess unfairness and compare multiple models for side-by-side analysis

How Fairlearn Works

Fairness in Fairlearn

There are many ways that an AI system can behave unfairly. Fairlearn focuses on negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status.

Quality of service

For example, a voice recognition system might fail to work as well for some groups of people as it does for others.

Allocation

For example, a system for screening loan or job applications might be better at picking good candidates among some groups of people than among others.

Assess Fairness

1. Select the fairness metric

Fairlearn provides fairness metrics for common machine learning tasks, such as classification and regression.
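To make the metric-selection step concrete, here is a from-scratch sketch of one widely used fairness metric, the demographic parity difference: the gap in selection rates between groups. The helper names below are illustrative; Fairlearn ships a production version as `fairlearn.metrics.demographic_parity_difference`.

```python
# Toy sketch of demographic parity difference (illustrative names only).
# Fairlearn provides this metric as
# fairlearn.metrics.demographic_parity_difference.

def selection_rate(y_pred):
    # Fraction of positive predictions.
    return sum(y_pred) / len(y_pred)

def demographic_parity_difference(y_pred, sensitive_features):
    # Largest gap in selection rate between any two groups.
    rates = [
        selection_rate([p for p, g in zip(y_pred, sensitive_features) if g == grp])
        for grp in set(sensitive_features)
    ]
    return max(rates) - min(rates)

y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 means all groups are selected at the same rate; larger values flag a disparity worth investigating.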

2. Assess the fairness of your model

• Use Fairlearn's interactive dashboard to assess the fairness of a single model and to compare multiple models in terms of their impacts on different groups of people
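The disaggregated view behind this assessment step can be sketched in a few lines: compute a performance metric per group and inspect the gap. The helpers below are a toy illustration; Fairlearn offers this pattern through `fairlearn.metrics.MetricFrame`.

```python
# Minimal sketch of disaggregated assessment: one metric, computed per
# group, in the spirit of fairlearn.metrics.MetricFrame. All names here
# are illustrative.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def by_group(metric, y_true, y_pred, sensitive_features):
    # Evaluate `metric` separately on each group's examples.
    result = {}
    for grp in sorted(set(sensitive_features)):
        idx = [i for i, g in enumerate(sensitive_features) if g == grp]
        result[grp] = metric([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return result

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
scores = by_group(accuracy, y_true, y_pred, groups)
print(scores)                                        # accuracy per group
print(max(scores.values()) - min(scores.values()))   # the gap between groups
```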
Mitigate Unfairness

1. Select the fairness metric

2. Select the mitigation algorithm:

• Use algorithmic techniques to convert a standard machine learning algorithm into one that optimizes performance under fairness constraints

• For existing models, find an output transformation that optimizes performance under fairness constraints

3. Make trade-offs between fairness and performance

• See how model performance varies based on different fairness metrics
• Compare multiple models side-by-side in an interactive dashboard and select the desired model
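The output-transformation style of mitigation described above can be illustrated with a toy post-processing step: pick a separate score threshold per group so that selection rates match a target. This simplified sketch only equalizes selection rates; Fairlearn's `fairlearn.postprocessing.ThresholdOptimizer` instead optimizes a performance objective subject to a chosen fairness constraint. All function names below are hypothetical.

```python
# Toy post-processing mitigation: per-group thresholds on model scores
# chosen so every group is selected at the same target rate. A loose
# illustration of the idea behind fairlearn.postprocessing.ThresholdOptimizer.

def group_threshold(scores, target_rate):
    # Threshold that selects roughly target_rate of this group's members.
    k = round(len(scores) * target_rate)
    ranked = sorted(scores, reverse=True)
    return ranked[k - 1] if k > 0 else float("inf")

def fair_predict(scores, sensitive_features, target_rate=0.5):
    # One threshold per group, then threshold each score with its group's value.
    thresholds = {
        grp: group_threshold(
            [s for s, g in zip(scores, sensitive_features) if g == grp],
            target_rate,
        )
        for grp in set(sensitive_features)
    }
    return [int(s >= thresholds[g]) for s, g in zip(scores, sensitive_features)]

scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fair_predict(scores, groups))  # each group selects half its members
```

Equalizing selection rates this way usually costs some raw performance, which is exactly the trade-off step 3 asks you to weigh.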
Fairlearn Usage Scenarios

Data Scientists and Machine Learning Developers

Assess model fairness during training and deployment

Compare models and make trade-offs between fairness and performance

Model Evaluators

Assess the impact of a model on specific groups of people

Business Leaders

Understand the impact of AI and machine learning systems on customers and other stakeholders

Build trust with customers and other stakeholders

Students and Researchers

Benchmark mitigation algorithms and contribute new ones

Getting Started

Install Fairlearn

Contribute to Fairlearn

We encourage you to join the effort and contribute feedback, metrics, algorithms, visualizations, ideas and more, so we can evolve the toolkit together!