FAIRNESS IS SOCIOTECHNICAL
Fairness of AI systems is about more than simply running lines of code.
In each use case, both societal and technical aspects
shape who might be harmed by AI systems and how.
Unfairness has many complex sources, and mitigating it involves a variety of societal and technical processes,
not just the mitigation algorithms in our library.
Throughout this website, you can find resources on how to think about fairness as sociotechnical,
and how to use Fairlearn's metrics and algorithms
while considering the AI system's broader societal context.
USE CASE | CREDIT-CARD LOANS
Assessment and mitigation of fairness issues in credit-card default models
When deciding whether to approve or decline a loan application, financial services organizations use a variety of models, including one that predicts the applicant's probability of default.
These predictions are sometimes used to accept or reject an application automatically, directly affecting both the applicant and the organization.
In this scenario, fairness-related harms may arise when the model makes more mistakes for some groups of applicants than for others.
We use Fairlearn to assess how different groups, defined by sex, are affected by these mistakes and to explore how the observed disparities may be mitigated.
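As a rough sketch of this assess-then-mitigate workflow (not the actual case-study code), the snippet below trains a simple default-prediction model on synthetic, purely illustrative data, uses Fairlearn's MetricFrame to compare error rates across groups defined by sex, and then retrains under an equalized-odds constraint with ExponentiatedGradient. The dataset, feature names, and model choice are all hypothetical.

```python
# Illustrative sketch only: the data, feature names, and model here are
# hypothetical stand-ins, not the case study's actual code.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, false_positive_rate, false_negative_rate
from fairlearn.reductions import ExponentiatedGradient, EqualizedOdds

# Synthetic stand-in for credit-card application data.
rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "credit_limit": rng.normal(50_000, 15_000, n),
    "utilization": rng.uniform(0.0, 1.0, n),
})
sex = pd.Series(rng.choice(["female", "male"], size=n), name="sex")
y = (rng.uniform(size=n) < 0.2).astype(int)  # 1 = default

# Unmitigated default-prediction model.
model = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = model.predict(X)

# Assessment: compare error rates across groups defined by sex.
mf = MetricFrame(
    metrics={"false_positive_rate": false_positive_rate,
             "false_negative_rate": false_negative_rate},
    y_true=y, y_pred=y_pred, sensitive_features=sex,
)
print(mf.by_group)       # per-group error rates
print(mf.difference())   # largest between-group gap for each metric

# Mitigation: retrain under an equalized-odds constraint.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=EqualizedOdds())
mitigator.fit(X, y, sensitive_features=sex)
y_pred_mitigated = mitigator.predict(X)
```

Rebuilding the MetricFrame with the mitigated predictions lets you compare per-group error rates before and after mitigation; that quantitative comparison should still be interpreted in light of the broader societal context described above.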