Tools for checking bias and discrimination in machine learning
Aequitas
Bias and fairness audit toolkit for machine learning
Aequitas is an open-source tool for bias and fairness auditing, developed at the Center for Data Science and Public Policy at the University of Chicago. It tests a given dataset against different bias and fairness measures, including demographic parity, proportional parity, false positive parity and false negative parity. Aequitas is written in Python and is available on GitHub as a Python library; a Web Audit Tool is also available online. Given an input dataset, the tool lets the user select protected groups, fairness metrics and a threshold, and generates a Bias Report listing the attribute values that did not pass each fairness test.
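As an illustration of two of the metrics Aequitas reports (this is a minimal hand-rolled sketch, not the Aequitas API; the toy records and group names are hypothetical), the snippet below computes per-group positive rates and false positive rates, plus the disparity ratio that a Bias Report would compare against the user-chosen threshold:

```python
# Illustrative sketch of demographic parity and false positive parity
# checks; not the Aequitas library's own API.
from collections import defaultdict

# Each record: (protected attribute value, model decision, true label)
records = [
    ("groupA", 1, 1), ("groupA", 1, 0), ("groupA", 0, 0), ("groupA", 1, 1),
    ("groupB", 0, 1), ("groupB", 1, 0), ("groupB", 0, 0), ("groupB", 0, 0),
]

def group_rates(records):
    """Per-group positive rate (for demographic parity) and FPR."""
    stats = defaultdict(lambda: {"n": 0, "pos": 0, "neg": 0, "fp": 0})
    for group, decision, label in records:
        s = stats[group]
        s["n"] += 1
        s["pos"] += decision
        if label == 0:
            s["neg"] += 1
            s["fp"] += decision
    return {
        g: {
            "positive_rate": s["pos"] / s["n"],
            "fpr": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
        for g, s in stats.items()
    }

rates = group_rates(records)
# Demographic parity holds when positive rates are (near) equal across
# groups; the disparity ratio is what gets compared to a threshold.
disparity = rates["groupB"]["positive_rate"] / rates["groupA"]["positive_rate"]
```

Here groupB receives positive decisions a third as often as groupA, so a threshold-based parity test would flag it.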
AI Fairness 360
Toolkit to help detect and remove bias in machine learning models
AI Fairness 360 is an extensible open-source toolkit developed by IBM. It provides a wide set of methods to detect and mitigate bias at the different stages of the machine learning pipeline, from training data, to classification algorithms, to predictions. The toolkit includes bias metrics and state-of-the-art mitigation techniques appropriate for different circumstances and algorithms. The code is written in Python and is available on GitHub.
Pymetrics Audit AI
Bias testing for generalized machine learning applications
Tool developed by the AI startup Pymetrics and released as open source. It allows for assessment of bias and discrimination in training data and machine learning algorithms. The highest- and lowest-passing groups for each demographic category are compared, and statistical tests are run to detect discrimination patterns. A variety of statistical tests and techniques is implemented, including ANOVA, the 4/5ths rule, Fisher's exact test, the z-test, Bayes factors and group proportions at different thresholds. The Cochran-Mantel-Haenszel test is also implemented to check for differences over time or across different regions. The library is written in Python, built on top of the pandas and sklearn libraries.
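Two of the simpler checks of this kind can be written in a few lines. The sketch below (a hedged illustration, not audit-AI's actual interface) computes the 4/5ths adverse impact ratio and a two-sided, two-proportion z-test for the pass rates of two groups:

```python
# Illustrative implementations of the 4/5ths rule and a two-proportion
# z-test; not the audit-AI library's own API.
import math

def adverse_impact_ratio(pass_a, n_a, pass_b, n_b):
    """Ratio of the lower pass rate to the higher one (4/5ths rule)."""
    rate_a, rate_b = pass_a / n_a, pass_b / n_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """z statistic and two-sided p-value for equal pass rates."""
    p_pool = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (pass_a / n_a - pass_b / n_b) / se
    # Normal CDF via erf; two-sided p-value.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 60/100 vs 40/100 pass rates: ratio 0.667 < 0.8, so the 4/5ths test
# fails, and the z-test also rejects equality at the 5% level.
ratio = adverse_impact_ratio(60, 100, 40, 100)
z, p = two_proportion_z(60, 100, 40, 100)
```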
Themis
Testing software for discrimination
Themis is an open-source tool developed in Python at the University of Massachusetts. It tests software to assess the presence of group discrimination and causal discrimination. Given a schema for the input data and a "discrimination threshold" parameter, the tool provides three functionalities. The first generates a test suite and runs tests to prove group or causal discrimination for a set of characteristics. The second computes the sets of characteristics for which discrimination exceeds the user-supplied threshold. The third computes apparent discrimination for a set of characteristics on input data or a distribution given by the user. Various optimization techniques (caching, approximation, sampling, sound pruning) are used to make the computation efficient.
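The core of causal discrimination testing can be sketched as follows (a toy illustration of the idea, not Themis's actual interface; the model, schema and threshold are hypothetical): generate random inputs, flip only the protected attribute, and measure how often the decision changes.

```python
# Illustrative causal-discrimination test in the style of Themis:
# flip the protected attribute and count decision changes.
# Hypothetical model and schema, not the Themis interface.
import random

def model(gender, income):
    """Toy decision procedure under test; it uses the protected attribute."""
    return income > (50 if gender == "male" else 70)

def causal_discrimination(decide, trials=1000, threshold=0.1, seed=0):
    """Fraction of random inputs whose decision flips with the
    protected attribute, compared against a discrimination threshold."""
    rng = random.Random(seed)
    flipped = 0
    for _ in range(trials):
        income = rng.randint(0, 100)
        if decide("male", income) != decide("female", income):
            flipped += 1
    rate = flipped / trials
    return rate, rate > threshold

rate, discriminates = causal_discrimination(model)
fair_rate, fair_flag = causal_discrimination(lambda g, income: income > 55)
```

A model that ignores the protected attribute never flips, while the toy model above changes its decision for roughly a fifth of the inputs, well past the threshold.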
What-If Tool
Visual tool to test machine learning models
Google’s What-If Tool (WIT) allows for experimenting with different machine learning models and fairness metrics in hypothetical situations: testing performance, analyzing the importance of different data features, and visualizing behavior across multiple models and subsets of input data. The tool provides intuitive visual interfaces for exploring data and models, and can be used as an extension of Jupyter notebooks. The code is written in Python and is available on GitHub.
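The kind of comparison WIT visualizes, two candidate models evaluated on the full dataset and on a subgroup slice, can be sketched in plain Python (hypothetical models and data, not the WIT API):

```python
# Plain-Python sketch of a model-vs-model, slice-vs-overall comparison
# of the kind the What-If Tool visualizes. Hypothetical models/data.

# Each example: (age, feature_x, true label)
data = [(25, 0.9, 1), (30, 0.2, 0), (60, 0.8, 1), (65, 0.4, 1),
        (70, 0.3, 0), (22, 0.7, 1), (68, 0.6, 0), (35, 0.1, 0)]

def model_a(age, x):
    return int(x > 0.5)                      # same threshold for everyone

def model_b(age, x):
    return int(x > (0.5 if age < 50 else 0.7))  # age-aware threshold

def accuracy(model, examples):
    return sum(model(a, x) == y for a, x, y in examples) / len(examples)

older = [e for e in data if e[0] >= 50]      # the subgroup slice
report = {
    "model_a": {"all": accuracy(model_a, data),
                "age>=50": accuracy(model_a, older)},
    "model_b": {"all": accuracy(model_b, data),
                "age>=50": accuracy(model_b, older)},
}
```

On this toy data the second model is more accurate both overall and on the older-age slice, the sort of per-subset difference that WIT surfaces visually.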
MediaFutures is funded by the European Union's Horizon 2020 Programme, under grant agreement number 951962. MediaFutures is a Europe-wide consortium. This website is managed on behalf of the consortium by Eurecat, whose main address is Carrer de Bilbao, 72, 08013 Barcelona (Spain).