Exploring explainability

Global, local, and cohort explanations

So far in the tutorial, we’ve been conducting error analysis mainly by identifying slices of the data on which our model does not perform as well as we’d like. However, we haven’t yet asked a fundamental question: why does our model behave the way it does?

To understand some of the driving forces behind our model’s predictions, we will make use of explainability techniques.

In broad strokes, explainability techniques give us justifications for our model’s predictions. These explanations can be local, cohort, or global, and each one provides a distinct perspective to practitioners and businesses. Let’s explore these three layers of explainability for our churn classification model, following a bottom-up approach.

Local explanations

Local explanations provide insights into individual model predictions.

Using our churn classifier, local explanations help us answer questions such as: why did our model predict that a specific user would churn?

To have a look at local explanations, click on any row of the data shown below the Error analysis panel. With Unbox, you have access to local explanations for all of the model’s predictions, powered by LIME and SHAP, two of the most popular model-agnostic explainability techniques.

Let’s now understand what we see.

Each feature receives a score. Values shown in shades of green (not present in this data sample) indicate features that pushed the model’s prediction in the correct direction. Values shown in shades of red indicate features that pushed the prediction in the wrong direction. It is therefore important to remember that these scores are always relative to the true label.

In this specific example, the true label is Exited, but our model predicted a Retained user. What we can see from the explainability scores is that Age was the feature that contributed most strongly to our model’s mistake in this case. Perhaps our dataset contains many more samples of young users who were retained, which is why our model predicts this particular sample as Retained as well.

At the end of the day, the model’s prediction is a balance between features that push it in the right direction and features that nudge it in the wrong direction.

In the previous image, we see the feature scores calculated using SHAP. If you’d like to see the scores computed by LIME, you can just click on Show LIME values to toggle between the two.
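If you are curious about what these scores mean under the hood, the snippet below is a minimal sketch of computing local explanations directly with the open-source shap and lime packages. The names `model`, `X_train`, and `x` are hypothetical stand-ins for a fitted churn classifier, its training data, and the row being explained; this illustrates the general technique, not the exact computation Unbox performs.

```python
# Minimal sketch of local explanations with shap and lime.
# `model`, `X_train`, and `x` are hypothetical stand-ins.
import shap
from lime.lime_tabular import LimeTabularExplainer

# SHAP: model-agnostic KernelExplainer over a small background sample
background = shap.sample(X_train, 100)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(x)  # one array of scores per class

# LIME: fit a simple surrogate model locally around the same row
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["Retained", "Exited"],
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    x.values, model.predict_proba, num_features=10
)
print(lime_explanation.as_list())  # (feature, weight) pairs
```

The two methods approach the same question differently: LIME fits a simple surrogate model in the neighborhood of the row, while SHAP distributes the prediction among the features using Shapley values, which is why their scores can differ slightly for the same row.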

Error analysis needs to be a scientific process, with hypothesizing and experimentation at its core. That’s one of the roles of what-if analysis.

To conduct a what-if analysis with local explanations, we can simply modify some of the feature values right below the Comparison run and click on What-if, at the bottom of the page. For example, what would our model do if the user’s age were 90 instead of 22, all other features being equal?

Now we can directly compare the two explanations. Notice that with a higher age, such as 90, it becomes clear to the model that this is a user who will churn, which is the correct label. That’s why Age is now shown in green. After all, there likely aren’t many 90-year-olds actively using our platform.
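Outside the platform, the same what-if comparison could be sketched by copying the row, changing a single feature, and re-running the model and the explainer. This reuses the hypothetical `model`, `shap_explainer`, and `x` from the snippet above:

```python
# What-if sketch: change Age from 22 to 90, keep every other feature equal,
# then compare the predictions and recompute the local explanation.
x_whatif = x.copy()
x_whatif["Age"] = 90

print("original prediction:", model.predict(x.to_frame().T)[0])
print("what-if prediction: ", model.predict(x_whatif.to_frame().T)[0])

shap_values_whatif = shap_explainer.shap_values(x_whatif)
```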

👍

Comparing explanations

Feel free to explore some local explanations. Can you use the what-if analysis and play with the feature values to flip our model’s prediction in other rows?

📘

Actionable insights

  • Local explanations help practitioners get to the root cause of problematic predictions their models are making;
  • They build confidence that the model is taking reasonable data into consideration when making its predictions and not simply over-indexing on certain features.

Cohort explanations

Now we move one layer up, to cohort explanations.

Cohort explanations are built by aggregating local explanations and help us understand which features contributed the most to the (mis)predictions made by the model over a data cohort.

For example, for users aged between 25 and 35, which features contributed the most to our model’s mispredictions? And which contributed the most to its correct predictions?

These kinds of questions can be easily answered with Unbox. The answers are shown in the Feature importance tab of the Error analysis panel. However, we first need to filter the data cohort we are interested in explaining.
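Conceptually, a cohort explanation boils down to aggregating the per-row scores over the filtered rows. The sketch below approximates the idea using the hypothetical objects from the earlier snippets plus a held-out split `X_test`/`y_test` (also hypothetical); the Feature importance tab does this work for you.

```python
# Rough sketch of a cohort explanation: aggregate per-row SHAP scores over the
# users aged 25-35, separating correct predictions from mispredictions.
# `model`, `shap_explainer`, `X_test`, and `y_test` are hypothetical stand-ins.
import pandas as pd

cohort_mask = (X_test["Age"] >= 25) & (X_test["Age"] <= 35)
X_cohort, y_cohort = X_test[cohort_mask], y_test[cohort_mask]
y_pred = model.predict(X_cohort)

# SHAP scores for the positive class ("Exited"), using the classic
# list-per-class return value of KernelExplainer
sv = pd.DataFrame(
    shap_explainer.shap_values(X_cohort)[1],
    columns=X_cohort.columns,
    index=X_cohort.index,
)

mispredicted = pd.Series(y_pred, index=y_cohort.index) != y_cohort
print("Most mispredictive features:")
print(sv[mispredicted].abs().mean().sort_values(ascending=False).head())
print("Most predictive features:")
print(sv[~mispredicted].abs().mean().sort_values(ascending=False).head())
```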

👍

Identifying the most mispredictive features

Filter the data to show only rows for users aged between 25 and 35. Then, head to the Feature importance tab on the Error analysis panel to look at the most predictive and mispredictive features. Hint: remember that we created a tag for this exact query? Can you filter the data using that tag?

When you filter a data cohort, the Feature importance tab shows the most predictive and most mispredictive features for that specific cohort.

👍

Mispredictive feature ranges

Click on one of the blocks shown to see what happens. Did you notice what happened to the data shown below the Error analysis panel?

When we click on one of the blocks shown, two things happen. First, the data shown below the Error analysis panel is filtered to display only the rows that fall within that category. Second, the Error analysis panel is split in two, so that we can dive even deeper into our model’s predictions. When we click on Age, for example, which was identified as the most mispredictive feature for this data cohort, the right-hand side of the Error analysis panel shows the clusters of age values for which Age was the most mispredictive feature.

Again, if you click on one of the clusters shown, the slice of data below the Error analysis panel is filtered to show only the rows that satisfy this criterion. You can then tag them to document error patterns and later create tests, generate synthetic data, or download the rows, among other possibilities.

The same analysis can be done with the most predictive features, which are the features that contributed the most in the correct direction to the model’s predictions.
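To get a feel for the "clusters of feature values" idea, one could bin the feature and see where its contribution hurt the model the most. This reuses `sv`, `X_cohort`, and `mispredicted` from the cohort sketch above and is, again, only an illustration of the idea, not the platform’s exact clustering:

```python
# Bin the Age values of mispredicted rows and rank the bins by how strongly
# Age contributed to those mistakes (mean absolute SHAP score per bin).
age_bins = pd.cut(X_cohort.loc[mispredicted, "Age"], bins=5)
age_impact = sv.loc[mispredicted, "Age"].abs().groupby(age_bins).mean()
print(age_impact.sort_values(ascending=False))
```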

👍

Identifying most mispredictive features for an error class

First, clear all the filters in the filter bar. Now, can you have a look at the most mispredictive features for the samples our model predicted as Retained but whose label was Exited? Hint: you can filter the different error classes in the Data distribution tab of the Error analysis panel.

📘

Actionable insights

  • Practitioners can identify multiple ways to improve their model’s performance. For example, they can spot underrepresented feature value ranges in the dataset, which might be leading to model mistakes, or cases where the model over-indexes on certain features.

Global explanations

Global explanations help reveal which features contributed the most to the (mis)predictions made by the model over a dataset.

To look at the global explanations, you need to clear all the filters from the filter bar and go to the Feature importance panel. Since no data cohort is selected, what is shown there are the most predictive and mispredictive features for our model across the whole dataset.

For example, let’s have a look at the most predictive features for our churn classifier across the whole dataset.

We notice that Age seems to be the most predictive feature for our model. Furthermore, we see that certain age clusters seem to be particularly predictive.
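The same aggregation at the global level, sketched with the hypothetical objects from the cohort snippet but with no filter applied, would look roughly like this:

```python
# Global sketch: rank features by mean absolute SHAP value over the whole
# test split. In the tutorial's example, Age comes out on top.
import pandas as pd

sv_all = pd.DataFrame(
    shap_explainer.shap_values(X_test)[1], columns=X_test.columns
)
print(sv_all.abs().mean().sort_values(ascending=False))
```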

Note that this information can be directly translated into business insights. A marketing team, for instance, might decide to create specific campaigns targeting users from a certain group to make sure they are retained.

As usual, clicking on the blocks in the Error analysis panel filters the data slice shown at the bottom. Thus, you can easily tag the displayed rows to document patterns and ensure reproducibility.

👍

Tying it all together

What are the precision and F1 for the data cohort whose CreditScore falls within the most predictive range? Hint: you will need to create a tag.
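If you want to double-check the numbers offline, a minimal sketch with scikit-learn would look like the following, assuming you have exported the tagged cohort’s true labels and predictions as `y_cohort_true` and `y_cohort_pred` (hypothetical names) and that Exited is the positive class:

```python
# Sketch of verifying the exercise's metrics for a tagged cohort offline.
# `y_cohort_true` and `y_cohort_pred` are hypothetical exported label arrays.
from sklearn.metrics import f1_score, precision_score

precision = precision_score(y_cohort_true, y_cohort_pred, pos_label="Exited")
f1 = f1_score(y_cohort_true, y_cohort_pred, pos_label="Exited")
print(f"precision={precision:.3f}  f1={f1:.3f}")
```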

