
Explainable AI Development and Deployment

6 min read · 01 Nov 2023



Introduction

Generative AI is a subset of artificial intelligence in which models are trained to generate new data similar to existing data. Examples include image generation (creating realistic images that do not exist), text generation (producing human-like text from a given prompt), and music composition (creating new pieces based on existing styles and genres).

Large Language Models (LLMs) are a type of AI model specialized in processing and generating human language. They are trained on vast amounts of text data, which makes them capable of understanding context, semantics, and language nuances. An example is GPT-3 from OpenAI.
LLMs automate routine language processing tasks, freeing up human resources for more strategic work.

Black Box Dilemma

Complex ML models, like deep neural networks, are often described as "black boxes" due to their opaque nature. While they can process vast amounts of data and provide accurate predictions, understanding how they arrived at a particular decision is challenging.

Transparency in ML models is crucial for building trust, verifying results, and ensuring that the model is working as intended. It's also necessary for debugging and improving models.


Model Explainability Landscape

Model explainability refers to the degree to which a human can understand the decisions made by a machine learning model. It's about making the model's decisions interpretable to humans, which is crucial for trust and actionable insights. There are two broad types of explainability approaches:

Intrinsic explainability refers to models that are naturally interpretable due to their simplicity and transparency. They provide insight into their decision-making process as part of their inherent design.

Examples: Decision Trees, Linear Regression, Logistic Regression.

Pros and Cons: While they are easy to understand, they may lack the predictive power of more complex models.
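
As a quick illustration of intrinsic explainability (a sketch of our own, assuming scikit-learn and its built-in Iris dataset rather than any dataset from this article), a small decision tree can be printed as human-readable rules:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a deliberately small tree so its decision logic stays readable
X_iris, y_iris = load_iris(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_iris, y_iris)

# The fitted model itself is the explanation: every prediction can be
# traced through these if/else rules
print(export_text(tree, feature_names=list(X_iris.columns)))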

Post-hoc explainability methods are applied after a model has been trained. They aim to explain the decisions of complex, black-box models by approximating their behavior or inspecting their structure.

Examples: LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and Integrated Gradients.

Pros and Cons: Post-hoc methods allow for the interpretation of complex models, but the explanations they provide are approximations and may require additional computational resources.

SHAP (SHapley Additive exPlanations)

Concept:

  • Main Idea: SHAP values provide a measure of the impact of each feature on the prediction for a particular instance.
  • Foundation: Based on Shapley values from cooperative game theory.

Working Mechanism:

Shapley Value Calculation:

  • For a given instance, consider all possible subsets of features.
  • For each subset, compare the model's prediction with and without a particular feature.
  • Average these differences across all subsets to compute the Shapley value for that feature.

SHAP Value Interpretation:

  • Positive SHAP values indicate a feature pushing the prediction higher, while negative values indicate the opposite.
  • The magnitude of the SHAP value indicates the strength of the effect.
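
To make the mechanism above concrete, here is a toy sketch (our own illustration, not from the article) that computes exact Shapley values for a hypothetical two-feature model by enumerating every feature subset; the coalition values are invented purely for the example:

from itertools import combinations
from math import factorial

# Hypothetical model outputs when only the listed features are "present"
# (all other features replaced by a baseline)
value = {
    frozenset(): 0.10,                       # baseline prediction
    frozenset({"age"}): 0.30,
    frozenset({"income"}): 0.20,
    frozenset({"age", "income"}): 0.55,
}
features = ["age", "income"]

def shapley(feature):
    # Average the feature's marginal contribution over all subsets of the others
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value[s | {feature}] - value[s])
    return total

for f in features:
    print(f, round(shapley(f), 3))   # age: 0.275, income: 0.175

The two Shapley values plus the baseline (0.10) add up exactly to the full prediction (0.55), which is the additive property that SHAP builds on.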

LIME (Local Interpretable Model-agnostic Explanations)


Concept:

  • Main Idea: LIME aims to explain the predictions of machine learning models by approximating the model locally around the prediction point.
  • Model-Agnostic: It can be used with any machine learning model.

Working Mechanism:

  • Selection of Data Point: Select a data point that you want to explain.
  • Perturbation: Create a dataset of perturbed instances by randomly changing the values of features of the original data point.
  • Model Prediction: Obtain predictions for these perturbed instances using the original model.
  • Weight Assignment: Assign weights to the perturbed instances based on their proximity to the original data point.
  • Local Model Training: Train a simpler, interpretable model (like linear regression or a decision tree) on the perturbed dataset, using the weights from the previous step.
  • Explanation Extraction: Extract explanations from the simpler model, which now serves as a local surrogate of the original complex model.
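
These steps can also be sketched from scratch (our own illustrative code, not the lime library; model, x0, and X_background are assumed placeholders for a fitted scikit-learn classifier, the row to explain, and a reference dataset):

import numpy as np
from sklearn.linear_model import Ridge

def lime_like_explanation(model, x0, X_background, n_samples=1000, width=1.0):
    # Rough local surrogate: perturb x0, weight by proximity, fit a linear model
    rng = np.random.default_rng(0)
    x0 = np.asarray(x0, dtype=float)
    scale = np.asarray(X_background.std(axis=0)) + 1e-9
    # Perturbation: jitter the instance using the spread of the background data
    Z = x0 + rng.normal(0.0, scale, size=(n_samples, len(x0)))
    # Model prediction: query the original black-box model on the perturbed points
    preds = model.predict_proba(Z)[:, 1]
    # Weight assignment: weight samples by closeness to x0 (Gaussian kernel)
    dist = np.linalg.norm((Z - x0) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / (width ** 2))
    # Local model training: fit an interpretable surrogate on the weighted neighbourhood
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    # Explanation extraction: the surrogate's coefficients are the local explanation
    return surrogate.coef_

Sorting the returned coefficients by absolute value gives the same kind of "top contributing features" view that the lime library produces for a single row.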

Hands-on example

In the code snippet below, we use a popular churn prediction dataset (Churn_Modelling.csv) and prepare the features for a Random Forest model.

# Part 1 - Data Preprocessing
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13]
y = dataset.iloc[:, 13]
 
dataset.head()
#Create dummy variables
geography=pd.get_dummies(X["Geography"],drop_first=True)
gender=pd.get_dummies(X['Gender'],drop_first=True)
## Concatenate the Data Frames
X=pd.concat([X,geography,gender],axis=1)
## Drop Unnecessary columns
X=X.drop(['Geography','Gender'],axis=1)
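
The later cells reference classifier, X_train, and X_test, which the preprocessing snippet above does not define. A minimal sketch of the missing step, assuming a standard train/test split and scikit-learn's RandomForestClassifier (the exact hyperparameters are not shown in the original), would be:

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Split the engineered features into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit the Random Forest churn classifier used in the rest of the article
classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)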

Now, we save the model as a pickle file and use the lime and shap libraries for explainability.

import pickle
pickle.dump(classifier, open("classifier.pkl", 'wb'))

# Install the explainability libraries (notebook magic; run once)
!pip install lime
!pip install shap

import lime
from lime import lime_tabular

interpretor = lime_tabular.LimeTabularExplainer(
    training_data=np.array(X_train),
    feature_names=list(X_train.columns),
    mode='classification')

Lime provides a lime_tabular module to set up explainability for tabular data. We pass in the training data, the feature names, and the mode of the model ('classification' here).

exp = interpretor.explain_instance(
    data_row=X_test.iloc[5],  # a single new data point to explain
    predict_fn=classifier.predict_proba)
exp.show_in_notebook(show_table=True)

From the resulting chart, we can see that LIME explains one particular prediction from X_test in detail. The prediction here is 1 (churn is true); the features contributing positively are shown in orange, and those contributing negatively are shown in blue.

import shap
shap.initjs()
explainer = shap.Explainer(classifier)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

In the code snippet above, we have created an explainability plot using the shap library. Here SHAP gives a global explanation for the entire test dataset, whereas LIME focuses on local interpretation of a single prediction.

From the summary plot, we can see how much each feature contributes to each of the churn classes.


Conclusion

Explainability in AI builds trust in AI systems, helps us understand the reasoning behind a model's decisions, and lets us update models appropriately if biases are found. In this article, we used libraries such as SHAP and LIME that make explainability easier to design and implement.

Author Bio

Swagata Ashwani serves as a Principal Data Scientist at Boomi, where she leads the charge in deploying cutting-edge AI solutions, with a particular emphasis on Natural Language Processing (NLP). With a stellar track record in AI research, she is always on the lookout for the next state-of-the-art tool or technique to revolutionize the industry. Beyond her technical expertise, Swagata is a fervent advocate for women in tech. She believes in giving back to the community, regularly contributing to open-source initiatives that drive the democratization of technology.

Swagata's passion isn't limited to the world of AI; she is a nature enthusiast, often wandering beaches and indulging in the serenity they offer. With a cup of coffee in hand, she finds joy in the rhythm of dance and the tranquility of the great outdoors.