Human-Centric & Interpretable Machine Learning


A compilation of my articles on the topic of machine learning interpretability.

Source: Human-Centered Machine Learning by Jess Holbrook

Have you ever wondered why an important email of yours was marked as spam, why Spotify keeps suggesting a particular song, or why you were recommended a horror movie on Netflix?


Well, occasionally we might wonder how any of this works, but since these choices don't have a direct impact on our lives, we learn to live with the decisions the algorithms make for us. However, interpretability becomes paramount when machine learning models are used for tasks that directly affect people's lives, such as risk assessment in criminal sentencing.



Therefore, it’s clear that when ML algorithms are used in the human context, it becomes all the more important to be able to explain their outcomes.


#Interpretability is the ability to explain a model's decisions in terms understandable to a human

Interpretable Machine Learning

Techniques for extracting insights from ML models

Interpretability means extracting human-understandable insights from a machine learning model. Machine learning models have been branded as 'black boxes' by many: although we can get accurate predictions from them, we cannot clearly explain or identify the logic behind those predictions.

In this article, I put forth some of the techniques that help extract insights from a model:

1. Permutation Importance

2. Partial Dependence Plots

3. SHAP Values

4. Advanced Uses of SHAP Values

5. LIME
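As a taste of the first technique in the list, permutation importance can be computed directly with scikit-learn. The sketch below shuffles one feature at a time and measures the drop in validation score; the synthetic dataset and the choice of a random forest are illustrative assumptions, not part of the original articles.

```python
# Sketch: permutation importance with scikit-learn.
# The synthetic dataset and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in score:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_val, y_val,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

The appeal of the method is that it is model-agnostic: it only needs a fitted model, a held-out set, and a scoring function.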


#Interpretability is required to satisfy ethical concerns and cultivate trust

Is your Machine Learning Model Biased?

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

An article on measuring a model's fairness and deciding on the best fairness metrics. Machine learning has proved its mettle in many applications and areas. However, one of the key hurdles for industrial applications of machine learning models is determining whether the raw input data used to train the model contains discriminatory bias. This is an important question with ethical and moral implications, and there is no single solution to it. For cases where the output of a model affects people, it is wise to put fairness ahead of profits.
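A common starting point for the fairness question above is a simple group metric such as demographic parity: the difference in positive-prediction rates between two groups. This is only one of many possible fairness metrics, and the data below is made up purely for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rate between two groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Hypothetical binary predictions for 8 people, 4 per group.
preds  = [1, 1, 0, 1,  1, 0, 0, 0]
groups = [0, 0, 0, 0,  1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.75 vs 0.25 -> gap of 0.5
```

A gap near zero suggests the model treats the groups similarly on this metric; which metric is appropriate, however, depends on the application, which is exactly the point the article makes.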


#Interpretability Beyond Feature Attribution

TCAV (Testing with Concept Activation Vectors)

Most machine learning models are designed to operate on low-level features, such as edges and lines in a picture or the color of a single pixel. This is very different from the high-level concepts more familiar to humans, like stripes on a zebra. Testing with Concept Activation Vectors (TCAV) is a new interpretability initiative from the Google AI team. Concept Activation Vectors (CAVs) provide an interpretation of a neural net's internal state in terms of human-friendly concepts. TCAV uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result: for example, how sensitive a prediction of "zebra" is to the presence of stripes.
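The core mechanics can be sketched in a few lines: a CAV is the normal of a hyperplane separating activations of concept examples from random counterexamples, and a prediction's sensitivity to the concept is the directional derivative of the class logit along that vector. Everything below, the activations and the gradient alike, is synthetic stand-in data for illustration; a real TCAV run would pull these from a trained network via autodiff.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic internal activations of a network layer (illustrative stand-ins).
concept_acts = rng.normal(loc=1.0, size=(50, 8))   # e.g. "striped" images
random_acts  = rng.normal(loc=0.0, size=(50, 8))   # random counterexamples

# 1. Fit a linear classifier separating concept vs. random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)

# 2. The CAV is the unit-normalized normal of the separating hyperplane.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. Conceptual sensitivity: directional derivative of the class logit
#    along the CAV, i.e. grad(logit) . cav. The gradient here is a
#    stand-in for what autodiff would return for a real network.
grad_logit = rng.normal(size=8)
sensitivity = grad_logit @ cav  # > 0: the concept pushes the prediction up
print(sensitivity)
```

TCAV then aggregates such sensitivities over many inputs to score how important the concept is for a whole class, rather than for a single prediction.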


#Interpretability for everyone

Using the ‘What-If Tool’ to investigate Machine Learning models.

Capabilities of What-If Tool

The What-If Tool (WIT) is an interactive visual tool designed for investigating machine learning models. It builds understanding of a classification or regression model by letting people examine, evaluate, and compare models. Thanks to its user-friendly interface and minimal reliance on complex coding, anyone, from developers and product managers to researchers and students, can use it for their purposes.

The purpose of the tool is to give people a simple, intuitive, and powerful way to explore a trained ML model on a dataset through a visual interface alone, with no coding required.


#Interpretability before Model building

Visualizing Machine Learning Datasets with Google’s FACETS.

FACETS in action

Exploratory Data Analysis (EDA) plays an important role today, since the insights it yields feed into strategic business decisions. EDA surfaces the important aspects of the data and lets us understand it from different viewpoints: it allows users to find outliers, understand relationships between input variables, and identify potential data-quality problems, all before the model-building process begins.
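The kinds of checks described above can be done in a few lines of pandas even before reaching for a dedicated tool. The toy frame below is illustrative, with a missing value and an outlier planted deliberately.

```python
import pandas as pd

# A tiny illustrative dataset with a missing value and an outlier.
df = pd.DataFrame({
    "age":    [25, 32, 41, 29, 350],          # 350 is an obvious outlier
    "income": [40e3, 52e3, None, 48e3, 61e3],  # one missing value
    "bought": [0, 1, 0, 1, 1],
})

print(df.describe())             # summary stats surface the outlier in "age"
print(df.isna().sum())           # data-quality check: missing values per column
print(df.corr(numeric_only=True)["bought"])  # relationships with the target
```

Tools like Facets automate and visualize exactly these checks at scale, which is what makes them useful on large, messy datasets.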

Facets is an open-source visualization tool released by Google under the PAIR (People + AI Research) initiative. It helps us understand and analyze machine learning datasets. Facets consists of two visualizations, Facets Overview and Facets Dive, both of which help drill down into the data and provide great insights without much work on the user's end.


Conclusion

Machine learning is a very powerful tool that is being used in increasingly multi-faceted ways, so it is imperative that we use it responsibly. Today, machine learning models increasingly make decisions that affect people's lives, and with this power comes a responsibility to ensure that model predictions are fair and non-discriminatory. In the words of Sundar Pichai, 'It's not enough to know if a model works; we need to know how it works.'
