Use common fairness metrics and an interactive dashboard to assess which groups of people may be negatively impacted.
Use state-of-the-art algorithms to mitigate unfairness in your classification and regression models.
Fairlearn provides developers and data scientists with tools to assess the fairness of their machine learning models and to mitigate unfairness. Assess existing models and train new models with fairness in mind. Compare models and make trade-offs between fairness and model performance.
Wide selection of fairness metrics and state-of-the-art mitigation algorithms
Open API that’s easy to access and supports standard machine learning algorithms
Assess unfairness and compare multiple models for side-by-side analysis
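The assess-and-compare workflow described above can be illustrated with a small sketch. This is plain Python, not the Fairlearn API, and the predictions, labels, and group memberships are invented for illustration: for each candidate model we compute overall accuracy alongside a simple fairness gap (the difference in positive-prediction rates between groups, often called the demographic parity difference), so the two models can be compared side by side.

```python
# Compare two candidate models on overall accuracy and a fairness gap.
# All data below is made up for illustration.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def selection_rate_gap(y_pred, groups):
    """Largest difference in the rate of positive predictions
    between any two groups (demographic parity difference)."""
    rates = []
    for g in sorted(set(groups)):
        g_pred = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates.append(sum(g_pred) / len(g_pred))
    return max(rates) - min(rates)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
models = {
    "model_1": [1, 0, 1, 1, 1, 1, 0, 0],  # more accurate, larger gap
    "model_2": [1, 0, 1, 0, 0, 1, 1, 0],  # less accurate, no gap
}

for name, y_pred in models.items():
    print(name, accuracy(y_true, y_pred), selection_rate_gap(y_pred, groups))
```

On this toy data, model_1 is more accurate but selects group "a" more often than group "b", while model_2 gives up some accuracy to equalize selection rates; choosing between them is exactly the kind of trade-off the comparison workflow is meant to surface.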
There are many ways that an AI system can behave unfairly. Fairlearn focuses on negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status.
For example, a voice recognition system might fail to work as well for some groups of people as it does for others.
For example, a system for screening loan or job applications might be better at picking good candidates among some groups of people than among others.
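Group-level harms like the two examples above are typically surfaced by disaggregating a performance metric across sensitive groups. A minimal sketch in plain Python (not the Fairlearn API; the labels, predictions, and group assignments are made up):

```python
# Disaggregate accuracy by sensitive group to surface cases where a model
# works better for some groups than for others. Data is illustrative only.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each sensitive group."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = accuracy([y_true[i] for i in idx],
                                [y_pred[i] for i in idx])
    return per_group

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

scores = accuracy_by_group(y_true, y_pred, groups)
print(scores)  # accuracy for each group
print(max(scores.values()) - min(scores.values()))  # gap between groups
```

A large gap between the best- and worst-served group is the kind of signal that would prompt a closer look, or a mitigation step.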
Use algorithmic techniques to convert a standard machine learning algorithm into one that optimizes performance under fairness constraints
For existing models, find an output transformation that optimizes performance under fairness constraints
Assess model fairness during training and deployment
Compare models and make trade-offs between fairness and performance
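One common postprocessing idea behind the "output transformation" bullet above is to pick a separate decision threshold for each group so that selection rates line up. The sketch below is a crude pure-Python illustration of that idea, not Fairlearn's actual postprocessing algorithm, and the scores and group labels are invented:

```python
# Toy postprocessing: choose a per-group score threshold so each group's
# selection rate is as close as possible to a target rate.

def selection_rate(scores, threshold):
    """Fraction of examples whose score meets the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def per_group_thresholds(scores, groups, target_rate):
    """Grid-search a threshold for each group whose selection rate
    is closest to the target rate."""
    candidates = [i / 100 for i in range(101)]
    thresholds = {}
    for g in set(groups):
        g_scores = [s for s, grp in zip(scores, groups) if grp == g]
        thresholds[g] = min(
            candidates,
            key=lambda t: abs(selection_rate(g_scores, t) - target_rate),
        )
    return thresholds

scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

thresholds = per_group_thresholds(scores, groups, target_rate=0.5)
decisions = [s >= thresholds[g] for s, g in zip(scores, groups)]
```

After thresholding, both groups are selected at the same rate even though their score distributions differ; real postprocessing methods solve a more careful optimization, but the underlying idea is the same.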
Assess the impact of a model on specific groups of people
Understand the impact of AI and machine learning systems on customers and other stakeholders
Build trust with customers and other stakeholders
Benchmark mitigation algorithms and contribute new ones