Open source Python frameworks and libraries for Explainable AI
AI Explainability 360
Interpretability and explainability of data and machine learning models
The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms covering different dimensions of explanations, along with proxy explainability metrics. AI Explainability 360 includes methods such as LIME, SHAP, CEM, and others.
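As a minimal sketch, the snippet below uses one of the toolkit's dataset-level algorithms, ProtoDash, which summarizes a dataset by selecting weighted prototypes; the random toy data is a stand-in, and the call pattern follows the AI Explainability 360 documentation.

```python
# Prototype-based dataset explanation with AI Explainability 360 (ProtoDash).
# Assumes: pip install aix360; toy random data stands in for a real dataset.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

X = np.random.rand(200, 4)  # 200 samples, 4 features

explainer = ProtodashExplainer()
# Select m=5 prototypes from X that best summarize X itself;
# returns importance weights, prototype indices, and the prototypes' values.
weights, prototype_indices, _ = explainer.explain(X, X, m=5)
print("Prototype rows:", prototype_indices)
```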
Alibi Explain
Algorithms for explaining machine learning models
Alibi is an open-source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local, and global explanation methods for classification and regression models. Alibi includes methods such as SHAP, ALE, counterfactuals, and others.
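As a minimal sketch, the snippet below computes Accumulated Local Effects (ALE) for a scikit-learn classifier; the iris data and logistic regression model are illustrative stand-ins.

```python
# Global feature-effect explanation with Alibi's ALE.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import ALE

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# ALE shows how each feature shifts the predicted probabilities on average.
ale = ALE(clf.predict_proba, feature_names=[f"f{i}" for i in range(X.shape[1])])
exp = ale.explain(X)
print(exp.ale_values[0])  # ALE curves for the first feature
```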
Captum
Model interpretability and understanding for PyTorch
Captum is a model interpretability and understanding library for PyTorch. Captum contains general-purpose implementations of Integrated Gradients, saliency maps, SmoothGrad, VarGrad, and others for PyTorch models. It integrates quickly with models built on domain-specific libraries such as torchvision, torchtext, and others. Captum includes methods such as LIME, SHAP, Integrated Gradients, and others.
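A minimal sketch of Integrated Gradients with Captum; the tiny two-layer network is a stand-in for a real trained model.

```python
# Feature attribution with Captum's Integrated Gradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Tiny untrained network as a placeholder for a real PyTorch model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.rand(1, 4)
ig = IntegratedGradients(model)

# Attribute the class-0 output to each input feature, against a zero baseline.
attributions, delta = ig.attribute(
    inputs,
    baselines=torch.zeros_like(inputs),
    target=0,
    return_convergence_delta=True,
)
print(attributions, delta)
```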
explainX
Explainable AI framework for explaining and debugging any blackbox machine learning model
ExplainX is a model explainability/interpretability framework that helps data scientists and business users understand overall model behavior, explain the "why" behind individual predictions, remove biases, and create convincing explanations. ExplainX includes methods such as SHAP, and currently supports linear and tree-based models for tabular data only.
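A hedged sketch of launching the explainX dashboard; the explainx.ai(...) call follows the pattern in the project README and may differ between versions.

```python
# Interactive model explanation dashboard with explainX.
# Assumption: explainx.ai(X, y, model, model_name=...) per the project README.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from explainx import explainx

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier().fit(X, y)

# Launches a dashboard with global behavior and per-prediction explanations.
explainx.ai(X, y, model, model_name="randomforest")
```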
InterpretML
Model interpretability framework
InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, one can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your models' global behavior and the reasons behind individual predictions. InterpretML includes methods such as LIME, SHAP, PDP, CEM, and others.
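As a minimal sketch, the snippet below trains one of InterpretML's glassbox models, the Explainable Boosting Machine, and opens its global explanation; the dataset is an illustrative stand-in.

```python
# Glassbox modeling with InterpretML's Explainable Boosting Machine (EBM).
from sklearn.datasets import load_breast_cancer
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: the per-feature contribution curves the EBM learned.
show(ebm.explain_global())
```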
LIME
Explain the predictions of any ML classifier
LIME (Local Interpretable Model-agnostic Explanations) explains what machine learning classifiers or models are doing. LIME supports explaining individual predictions for text classifiers or classifiers that act on tables or images. The package can explain any black-box classifier with two or more classes; all that is required is that the classifier implement a function that takes in raw text or a NumPy array and outputs a probability for each class. Support for scikit-learn classifiers is built in.
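A minimal sketch of explaining a single tabular prediction with LIME, assuming a scikit-learn classifier that exposes predict_proba.

```python
# Local explanation of one prediction with LIME on tabular data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# Which features pushed this instance's prediction, and by how much?
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```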
OmniXAI
A framework for eXplainable AI
OmniXAI is a Python machine-learning library for explainable AI (XAI), offering omni-way explainability and interpretable machine learning capabilities to address many of the pain points in explaining decisions made by machine learning models in practice. OmniXAI aims to be a one-stop comprehensive library that makes explainable AI easy for data scientists, ML researchers, and practitioners who need explanations for various types of data, models, and explanation methods at different stages of the ML process. OmniXAI includes methods such as LIME, SHAP, PDP, ALE, and others.
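A hedged sketch of running several explainers in one call with OmniXAI; it follows the pattern in the project documentation, and the preprocess lambda (converting OmniXAI's Tabular container to the model's NumPy input) is an assumption that may vary by version.

```python
# Multi-method explanation with OmniXAI's TabularExplainer.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from omnixai.data.tabular import Tabular
from omnixai.explainers.tabular import TabularExplainer

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)
train = Tabular(data.data, feature_columns=list(data.feature_names))
test = Tabular(data.data[:2], feature_columns=list(data.feature_names))

explainer = TabularExplainer(
    explainers=["lime", "shap", "pdp"],  # several methods in one call
    mode="classification",
    data=train,
    model=model,
    preprocess=lambda z: z.values,  # assumed Tabular -> NumPy conversion
)
local_explanations = explainer.explain(X=test)
```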
OpenXAI
A transparent evaluation of model explanations
OpenXAI provides a comprehensive set of functions to systematically evaluate the quality of explanations generated by attribution-based explanation methods. OpenXAI supports the development of new datasets (both synthetic and real-world) and explanation methods, with a strong emphasis on promoting systematic, reproducible, and transparent evaluation of explanation methods. OpenXAI is an open-source initiative that comprises a collection of curated high-stakes datasets, models, and evaluation metrics, and provides a simple, easy-to-use API that enables researchers and practitioners to benchmark explanation methods in just a few lines of code.
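A hypothetical sketch of that benchmarking workflow; the names return_loaders, LoadModel, Explainer, and get_explanation follow the project's quickstart but are assumptions here and may differ across versions.

```python
# Benchmarking an explanation method with OpenXAI (names assumed, see above).
from openxai.dataloader import return_loaders  # assumed import path
from openxai import LoadModel, Explainer       # assumed import path

# Curated dataset and pretrained model shipped with the benchmark.
trainloader, testloader = return_loaders(data_name="german", download=True)
inputs, labels = next(iter(testloader))
model = LoadModel(data_name="german", ml_model="ann")

# Generate attributions with one of the supported explanation methods.
explainer = Explainer(method="lime", model=model, dataset_tensor=inputs)
explanations = explainer.get_explanation(inputs.float(), labels)
```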
SHAP
Game theoretic approach to explain ML models
SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.
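A minimal sketch of computing SHAP values for a tree ensemble; the regression setup keeps the attribution array simple, with one value per feature per row.

```python
# Shapley-value attributions with SHAP's TreeExplainer.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor().fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature per row

# Global view: which features matter most, and in which direction.
shap.summary_plot(shap_values, X)
```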