- An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM) - Jun 16, 2021.
Understanding why your AI-based models make the decisions they do is crucial for deploying practical solutions in the real world. Here, we review some techniques in the field of Explainable AI (XAI), discuss why explainability is important, walk through example explanations using LIME and SHAP, and demonstrate how Explainable Boosting Machines (EBMs) can make explainability even easier (a brief sketch follows this entry).
AI, Deep Learning, Explainability, Gradient Boosting, Interpretability, LIME, Machine Learning, SHAP
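Not from the article itself: a minimal sketch of training an Explainable Boosting Machine with the InterpretML package (`interpret`); the dataset, file name, and target column here are illustrative placeholders.

```python
# Minimal EBM sketch with InterpretML; dataset and column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

df = pd.read_csv("credit_default.csv")                  # hypothetical dataset
X, y = df.drop(columns=["default"]), df["default"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()                   # glassbox model: boosted, yet additive
ebm.fit(X_train, y_train)

show(ebm.explain_global())                              # per-feature shape functions and importances
show(ebm.explain_local(X_test[:5], y_test[:5]))         # per-prediction contributions
```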
- Machine Learning Model Interpretation - Jun 2, 2021.
Read this overview of using Skater to build machine learning visualizations.
Explainability, Interpretability, Machine Learning, Python
- The Explainable Boosting Machine - May 13, 2021.
As accurate as gradient boosting, as interpretable as linear regression.
Decision Trees, Explainability, Gradient Boosting, Interpretability, Machine Learning
- Interpretable Machine Learning: The Free eBook - Apr 9, 2021.
Interested in learning more about interpretability in machine learning? Check out this free eBook to learn about the basics, simple interpretable models, and strategies for interpreting more complex black box models.
AI, Explainability, Explainable AI, Free ebook, Interpretability
- Shapash: Making Machine Learning Models Understandable - Apr 2, 2021.
Establishing trust in AI technologies may soon become one of the most important skills Data Scientists provide. Significant research investments are underway in this area, and new tools are being developed, such as Shapash, an open-source Python library that helps Data Scientists make machine learning models more transparent and understandable.
Explainability, Machine Learning, Python, SHAP
- Adversarial Attacks on Explainable AI - Feb 9, 2021.
Are explainability methods black-box themselves?
Adversarial, AI, Explainability, Explainable AI
- Deep learning doesn’t need to be a black box - Feb 5, 2021.
The cultural perception of AI is often one of suspicion because of the difficulty of knowing why a deep neural network makes the predictions it does. So, researchers try to crack open this "black box" after a network is trained, correlating its outputs with its inputs. But what if explainability could be designed into the network's architecture -- before the model is trained and without reducing its predictive power? Maybe the box could stay open from the beginning.
Convolutional Neural Networks, Deep Learning, Explainability, Explainable AI, Image Recognition
- AI registers: finally, a tool to increase transparency in AI/ML - Dec 9, 2020.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why they are important and what you need to do, no tools have existed until now.
AI, Bias, Ethics, Explainability, Helsinki, Machine Learning, Trust
- tensorflow + dalex = :) , or how to explain a TensorFlow model - Nov 13, 2020.
Having a machine learning model that generates interesting predictions is one thing. Understanding why it makes these predictions is another. For a TensorFlow predictive model, it can be straightforward and convenient to produce explanations by leveraging the dalex Python package (a minimal sketch follows this entry).
Dalex, Explainability, Explainable AI, Machine Learning, Python, TensorFlow
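A minimal sketch of the dalex workflow, assuming a trained Keras/TensorFlow classifier `model` and held-out data `X_test`, `y_test` (placeholders, not the article's code); dalex usually infers the predict function automatically, otherwise pass `predict_function` explicitly.

```python
# Wrap a trained TensorFlow/Keras model with dalex and inspect explanations.
import dalex as dx

explainer = dx.Explainer(model, X_test, y_test, label="tf_model")

explainer.model_performance()                                  # overall fit diagnostics
explainer.model_parts().plot()                                 # permutation-based variable importance
explainer.predict_parts(X_test.iloc[[0]], type="break_down").plot()  # one prediction explained
```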
- Interpretability, Explainability, and Machine Learning – What Data Scientists Need to Know - Nov 4, 2020.
The terms “interpretability,” “explainability” and “black box” are tossed about a lot in the context of machine learning, but what do they really mean, and why do they matter?
Explainability, Explainable AI, Interpretability, Machine Learning
- Explaining the Explainable AI: A 2-Stage Approach - Oct 29, 2020.
Understanding how to build AI models is one thing. Understanding why AI models produce the results they do is another. And communicating that understanding to humans is yet another challenging layer that must be addressed if we are to develop a complete approach to Explainable AI.
AI, Explainability, Explainable AI, XAI
- Explainable and Reproducible Machine Learning Model Development with DALEX and Neptune - Aug 27, 2020.
With ML models serving real people, misclassified cases (a natural consequence of using ML) affect people's lives and sometimes treat them very unfairly. This makes the ability to explain your models' predictions a requirement rather than just a nice-to-have.
Dalex, Explainability, Explainable AI, Interpretability, Python, SHAP
- modelStudio and The Grammar of Interactive Explanatory Model Analysis - Jun 19, 2020.
modelStudio is an R package that automates the exploration of ML models and allows for interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks.
Analysis, Explainability, Interpretability, Machine Learning, R
- Evidence Counterfactuals for explaining predictive models on Big Data - May 18, 2020.
Big Data generated by people -- such as social media posts, mobile phone GPS locations, and browsing history -- provides enormous predictive value for AI systems. However, explaining how these models arrive at their predictions from that data remains challenging. This interesting explanation approach considers how a model would behave if it didn't have the original set of data to work with.
Big Data, Explainability, Predictive Modeling, Predictive Models, Statistics
- Explaining “Blackbox” Machine Learning Models: Practical Application of SHAP - May 6, 2020.
Train a "blackbox" GBM model on a real dataset and make it explainable with SHAP.
Explainability, Interpretability, Python, SHAP
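A minimal sketch of the SHAP workflow, assuming a tree-based GBM already trained as `model` on a feature matrix `X` (placeholders, not the article's dataset); note that for a binary classifier, older SHAP versions return one array of values per class.

```python
# Explain a trained tree-ensemble model with SHAP.
import shap

explainer = shap.TreeExplainer(model)                 # fast, exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)                # may be a list (one array per class)

shap.summary_plot(shap_values, X)                     # beeswarm: feature impact per observation
shap.summary_plot(shap_values, X, plot_type="bar")    # mean |SHAP| as global feature importance
```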
- 20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 2) - Mar 2, 2020.
We explain important AI, ML, Data Science terms you should know in 2020, including Double Descent, Ethics in AI, Explainability (Explainable AI), Full Stack Data Science, Geospatial, GPT-2, NLG (Natural Language Generation), PyTorch, Reinforcement Learning, and Transformer Architecture.
AI, Data Science, Explainability, Geospatial, GPT-2, Key Terms, Machine Learning, Natural Language Generation, Reinforcement Learning, Transformer
- Observability for Data Engineering - Feb 10, 2020.
Going beyond traditional monitoring techniques and goals, understanding if a system is working as intended requires a new concept in DevOps, called Observability. Learn more about this essential approach to bring more context to your system metrics.
Data Engineering, DevOps, Explainability, KPI, Monitoring, Time Series
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP - Jan 21, 2020.
This article demonstrates how to explain the decisions made by LightGBM and Keras models when classifying a transaction as fraudulent, using two state-of-the-art open-source explainability techniques, LIME and SHAP (a minimal LIME sketch follows this entry).
Deep Learning, Ensemble Methods, Explainability, LIME, SHAP
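A minimal LIME sketch, assuming a trained classifier `model` exposing `predict_proba`, train/test matrices `X_train`/`X_test`, and a `feature_names` list (all placeholders, not the article's code); for a Keras model you would instead pass a function that returns class probabilities, e.g. `model.predict`.

```python
# Explain a single prediction with a local LIME surrogate model.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=np.asarray(X_train),
    feature_names=feature_names,
    class_names=["legit", "fraud"],                   # hypothetical class labels
    mode="classification",
)

# LIME perturbs this one transaction and fits a simple local model around it.
exp = explainer.explain_instance(np.asarray(X_test)[0], model.predict_proba, num_features=5)
print(exp.as_list())                                  # top feature contributions for this prediction
```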
- Introducing Generalized Integrated Gradients (GIG): A Practical Method for Explaining Diverse Ensemble Machine Learning Models - Jan 7, 2020.
There is a need for a new way to explain complex, ensembled ML models for high-stakes applications such as credit and lending. This is why we invented GIG.
Ensemble Methods, Explainability, Machine Learning
- Google’s New Explainable AI Service - Dec 20, 2019.
Google has started offering a new service for "explainable AI" or XAI, as it is fashionably called. The tools offered so far are modest, but the intent is a step in the right direction.
AI, Explainability, Explainable AI, Google
- Interpretability part 3: opening the black box with LIME and SHAP - Dec 19, 2019.
The third part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers methods that try to explain each prediction instead of establishing a global explanation.
Explainability, Interpretability, LIME, SHAP
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
Explainability, Explainable AI, Feature Selection, Interpretability, Python
- 10 Free Top Notch Machine Learning Courses - Dec 6, 2019.
Are you interested in studying machine learning over the holidays? This collection of 10 free top notch courses will allow you to do just that, with something for every approach to improving your machine learning skills.
Books, Computer Vision, Courses, Deep Learning, Explainability, Graph Analytics, Interpretability, Machine Learning, NLP, Python
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
Explainability, Explainable AI, Interpretability, XAI
- Introducing AI Explainability 360: A New Toolkit to Help You Understand what Machine Learning Models are Doing - Aug 27, 2019.
Recently, AI researchers from IBM open-sourced AI Explainability 360, a new toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
AI, Explainability, Machine Learning, Modeling