Integrate bias mitigation methods into VisualML
The fairness literature distinguishes three main approaches to mitigating bias as part of the modeling process itself:
- Pre-processing: transforming the data that goes into the model to reduce bias (for example, reweighing)
- In-processing: taking fairness metrics into account when training the model and tuning its hyperparameters
- Post-processing: calibrating the output predictions in a way that reduces bias
IBM's aif360 package, for example, provides implementations of all three approaches.
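To make the pre-processing idea concrete, here is a minimal sketch of reweighing (the Kamiran & Calders scheme that aif360 implements as `Reweighing` in `aif360.algorithms.preprocessing`), written in plain NumPy so it is self-contained; the function name and toy data are illustrative, not part of any library:

```python
import numpy as np

def reweighing_weights(group, label):
    """Per-sample weights w(g, y) = P(g) * P(y) / P(g, y).

    Samples from (group, label) combinations that are rarer than they
    would be under independence get weights > 1, and vice versa, so a
    weighted training run sees a "debiased" joint distribution.
    """
    group, label = np.asarray(group), np.asarray(label)
    w = np.empty(len(group), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            joint = (group == g) & (label == y)
            if joint.any():
                # expected probability under independence / observed joint probability
                w[joint] = (group == g).mean() * (label == y).mean() / joint.mean()
    return w

# toy example: group 0 is mostly labeled 1, group 1 mostly labeled 0
group = np.array([0, 0, 0, 1, 1, 1])
label = np.array([1, 1, 0, 1, 0, 0])
print(reweighing_weights(group, label))  # [0.75 0.75 1.5  1.5  0.75 0.75]
```

The resulting weights can then be passed as `sample_weight` to most scikit-learn estimators' `fit` method.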
I've been implementing these ideas in our own code base through recipe and notebook templates and custom code snippets, but it would be even more helpful if they were integrated into the VisualML framework itself.
For example, it is very difficult, or even impossible, to implement custom code in VisualML in a way that respects the cross-validation folds and the parameter tuning method (e.g. Bayesian tuning). Integrating bias mitigation into the framework itself would remove these obstacles.
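To illustrate why framework support matters: the mitigation step has to be recomputed inside each training fold (otherwise information leaks from the validation split), which is exactly what is hard to express as external custom code. A minimal sketch of what the framework would need to do internally, using scikit-learn and a hand-rolled reweighing helper on synthetic data (all names and data here are illustrative assumptions, not VisualML APIs):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# synthetic toy data: X features, y labels, g a binary protected attribute
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
g = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=200) > 0).astype(int)

def fold_weights(g_tr, y_tr):
    """Reweighing weights computed from the training fold ONLY (no leakage)."""
    w = np.empty(len(y_tr), dtype=float)
    for gv in np.unique(g_tr):
        for yv in np.unique(y_tr):
            m = (g_tr == gv) & (y_tr == yv)
            if m.any():
                w[m] = (g_tr == gv).mean() * (y_tr == yv).mean() / m.mean()
    return w

scores = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression()
    # the weights are refit per fold; a tuner (grid search, Bayesian, ...)
    # would have to do the same for every candidate configuration
    clf.fit(X[tr], y[tr], sample_weight=fold_weights(g[tr], y[tr]))
    scores.append(clf.score(X[te], y[te]))
print(round(float(np.mean(scores)), 3))
```

The point is not this particular loop but that the per-fold weighting sits between splitting and fitting, a step only the framework's own training loop can own.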
See also this overview of the aif360 package: Getting Started — aif360 0.5.0 documentation.